Arnaud Bouchez

ANN: Native X.509, RSA and HSM Support for mORMot


Today, almost all computer security relies on asymmetric cryptography and X.509 certificates, as files or hardware modules.
And the RSA algorithm is still used to sign the vast majority of those certificates. Even if there are better options (like ECC-256), RSA-2048 remains the de-facto standard, and should still be allowed for a few more years.

 

So we added pure Pascal RSA cryptography and X.509 certificate support to mORMot 2.
Last but not least, we also added Hardware Security Module support via the PKCS#11 standard.
Until now, we were mostly relying on OpenSSL, but a native embedded solution is smaller in code size, better for reducing dependencies, and easier to work with (especially for HSM). The main idea is to offer only safe algorithms and methods, so that you can write reliable software, even if you are no cryptographic expert.  😉

 

More information in our blog article about this almost unique feature set in Delphi (and FPC):
https://blog.synopse.info/?post/2023/12/09/Native-X.509-and-RSA-Support



@Arnaud Bouchez That is awesome indeed, really nice work!

 

I have one concern about this ( from https://blog.synopse.info/?post/2023/12/09/Native-X.509-and-RSA-Support )

maintaining a cache of ICryptCert instances, which makes a huge performance benefit in the context of a PKI (e.g. you don't need to parse the X.509 binary, or verify the chain of trust each time).

This made me try to follow the implementation, which is not easy, so I want to explain, from my past encounters, a possible weak point; I believe you are the one to check whether the implementation is prone to such an attack, since you have the better understanding of the internal workflow.

The code in question is this (well, this is mainly what caught my eye, but it is not limited to that; again, it is you who should check and decide):

[screenshot of the certificate cache code under discussion]

 

The attack I will try to explain in detail here was a real vulnerability in Chrome, Windows, and even OpenSSL. It concerns caching the validation result. This may sound silly, yet the attack targeted the CA of the certificate, not the certificate itself, which allows a MITM to replace the certificate in real time.

 

So, the scenario goes like this:

1) The client initiates the handshake with its ClientHello.

2) The MITM passes it unchanged to the server.

3) The server responds as usual, including the certificate in its response. Per the TLS and SSL references, the suggested behavior is to send at least one CA with the certificate, and the best practice is to send the full chain excluding the root. This might not even be relevant: all we need is for the MITM to have the CA, so even if the server didn't send the CA, it might be publicly known (like Let's Encrypt...). Either way, the MITM passes these records/packets untouched to the client.

4) The client receives the secure, untouched traffic and validates the certificate and its CA (this is the important part); the CA is verified, trusted, and most likely cached!

5) The client proceeds with the handshake procedure. Here the MITM cuts the traffic and drops the connection, forcing the client to re-establish the connection with a new handshake; since the TLS resumption ticket or session was not confirmed, resumption will not help the client/server connection.

6) The MITM passes the traffic for this new ClientHello to the server untouched, or starts its own process impersonating the server for recording/watching; in that case it must open its own connection to the server, which can easily be done.

7) As a response to the client, the MITM forges a new fake CA, and here is the attack part: this new CA has parts identical to the real CA (some only, and it can be very easy indeed), and the MITM issues a new certificate signed by this FakeCA. The client will validate everything about the certificate and it will pass, except for the CA, logically! But with the cache, its lookup method, and the short time between validating the real CA and checking against the FakeCA, it might pass, hence the FakeCert will be valid for the client.

The attack is really about attacking the CA and abusing the caching mechanism: how the CA is found in the cache, and how to fake it. If the cache has a bValid boolean (maybe with a timestamp of the last validation check), this attack is possible. How is it possible? Some implementations look entries up by fingerprint, or by public key (SubjectKey). (On a side note, faking the public key of an EC certificate is easier than you can imagine, unless a named curve is exclusively used; in other words, you can choose the same curve with a specific generator point (Gx and Gy) to produce any public key you need, hence making the FakeCA public key match the real CA public key...)
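To make the cache concern concrete, here is a minimal Python sketch (hypothetical names, not mORMot code) contrasting a cache keyed on a partial identifier plus a cached validity flag with one keyed on every byte of the DER:

```python
import hashlib

class NaiveCAValidationCache:
    """Hypothetical vulnerable pattern: cache keyed by a partial
    identifier (e.g. SubjectKeyIdentifier) with a cached 'valid' flag."""
    def __init__(self):
        self._valid = {}  # ski -> bool

    def is_trusted(self, ski: bytes, der: bytes, verify) -> bool:
        if ski in self._valid:          # a forged CA with the same SKI hits here
            return self._valid[ski]     # the full signature check is skipped!
        ok = verify(der)
        self._valid[ski] = ok
        return ok

class FullBinaryCache:
    """Safer pattern: the cache key covers the whole DER binary."""
    def __init__(self):
        self._valid = {}

    def is_trusted(self, der: bytes, verify) -> bool:
        key = hashlib.sha256(der).digest()  # collision-resistant over all bytes
        if key not in self._valid:
            self._valid[key] = verify(der)
        return self._valid[key]

# Toy demo: a 'real' CA and a forged CA sharing the same SKI
real_der, fake_der, ski = b"real-ca-der", b"fake-ca-der", b"same-ski"
verify = lambda der: der == real_der   # stand-in for real chain verification

naive = NaiveCAValidationCache()
naive.is_trusted(ski, real_der, verify)           # caches True for this SKI
print(naive.is_trusted(ski, fake_der, verify))    # True -> forged CA accepted!

full = FullBinaryCache()
full.is_trusted(real_der, verify)
print(full.is_trusted(fake_der, verify))          # False -> forgery rejected
```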

 

So, the point is: is caching the CA prone to similar attacks? Because that would defeat and override all the checking of the server certificate itself.

 

Hope that was clear, and if there are questions, please don't hesitate to ask; I would love to explain. (Side note: that code is not easy to track, hence I wanted to explain, and I trust you can find the weak point if there is any.)

 

Additional sources for similar scenarios:

ECC faking like the one mentioned above: https://research.kudelskisecurity.com/2020/01/15/cve-2020-0601-the-chainoffools-attack-explained-with-poc/
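The core trick of CVE-2020-0601 can be reproduced on a toy curve: if a verifier accepts explicit curve parameters and only compares public keys, an attacker can pick d' = 1 and publish the victim's public key as the "generator". A minimal Python sketch, on a 17-element toy curve for illustration only (not real cryptography):

```python
# Toy curve y^2 = x^3 + 2x + 2 over GF(17), generator G = (5, 1), order 19.
P, A = 17, 2
G = (5, 1)

def ec_add(p1, p2):
    """Affine point addition; None is the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    r = None
    while k:
        if k & 1:
            r = ec_add(r, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return r

d = 7                    # victim's private key (unknown to the attacker)
Q = ec_mul(d, G)         # victim's public key, visible in the certificate

# Attacker picks d' = 1 and publishes 'explicit parameters' with G' = Q:
d_fake, G_fake = 1, Q
print(ec_mul(d_fake, G_fake) == Q)   # True: same public key, known private key!
```

A verifier that matches the cached CA only by public key, while still accepting explicit (non-named) curve parameters, would trust signatures made with d_fake.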

I can't find more resources right now, but I remember similar cases in WolfSSL and OpenSSL (multiple times); in fact, it has a long history of such issues.

 

Anyway, you are more than qualified and equipped to check such cases; this post is merely food for thought, or a reminder.

PS: many RSA implementations miss rare cases and allow such manipulation, like allowing/processing non-primes, or "1" as the exponent, allowing the public key to be faked...
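As a tiny illustration of why an RSA implementation must reject degenerate exponents such as e = 1 (toy numbers, hypothetical sketch):

```python
import math

# Toy RSA with deliberately broken parameters, to show why implementations
# must reject e = 1: 'encryption' becomes the identity and hides nothing.
p, q = 61, 53                 # toy primes (real keys use ~1024-bit primes)
n = p * q
e_bad = 1                     # an exponent a key parser must refuse
m = 42
c = pow(m, e_bad, n)          # c == m: the plaintext travels in the clear
print(c == m)                 # True

# A sane generator also checks e is odd, e >= 3, and coprime to lambda(n).
e_good = 65537                # the fixed exponent mentioned in the thread
print(math.gcd(e_good, math.lcm(p - 1, q - 1)) == 1)   # True
```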



@Kas Ob.
Thanks a lot for the very interesting feedback.

Some remarks, in no particular order:

  1. As written in the blog article, we tried to follow the FIPS specs, used the Mbed TLS source as reference, and also the HAC document. Perhaps not at 100%, but at least for the best-known corner cases.
  2. RSA key generation follows all those patterns, so it uses well-proven methods to generate primes, and we fixed 65537 as the exponent. Even the random source is the OS (which is likely to be trusted), then XORed with RdRand() on Intel/AMD, then XORed with our own CSPRNG (which is AES-based, fed from an extensive set of entropy sources). XORing a random source with other sources is a common and safe practice to ensure there is no weakness or predictability. We therefore tried to avoid weaknesses like https://ieeexplore.ieee.org/document/9014350 - see the TBigInt.FillPrime method.
  3. The "certificate cache" is a cache keyed on the raw DER binary: the same ICryptCert instance will be reused for the very same DER/PEM input. Sounds safe.
  4. In the code you screenshot, there is an x.Compare() call which actually ensures that the supplied certificate DER/PEM binary matches the one we have in the trusted list as SKI.
  5. If the application is using our high-level ICryptCert abstract interface, only safe levels will be available (e.g. a minimal RSA-2048 with SHA-256, or ECC-256 with SHA-256, and proven AES modes): it was meant to be as little error-prone as possible for the end user. You just can't sign a certificate from an MD5 or SHA-1 hash, or encrypt with RC4, DES or AES-ECB.
  6. Note that our plan is to implement TLS 1.3 (and only version 1.3) in the near future, to mitigate even more MITM attacks during the TLS handshake (because all traffic is signed and verified).
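The XOR-combination idea from point 2 can be sketched in a few lines of Python (stand-in sources only; mORMot's actual sources are the OS, RdRand() and its AES-based CSPRNG):

```python
import hashlib
import os

def combined_random(nbytes: int) -> bytes:
    """Sketch of combining independent entropy sources with XOR: the result
    is at least as unpredictable as the strongest source, provided the
    sources are independent of each other."""
    a = os.urandom(nbytes)                              # OS CSPRNG
    # stand-in for a second independent source (RdRand, AES-CTR DRBG, ...):
    b = hashlib.shake_256(os.urandom(32)).digest(nbytes)
    return bytes(x ^ y for x, y in zip(a, b))

seed = combined_random(32)
print(len(seed))   # 32
```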

To summarize, if we use some cache or search within the internal lists, we always ensure that the whole DER/PEM binary matches at 100%, not only some fields. We don't even use fingerprints, but every byte of the content.
So attacks from forged certificates with only some matching fields should be avoided.

 

Of course, in real life, if some company needs its application to fulfill some high level of requirements, you may just use OpenSSL or any other library which fits what is needed.

With some other potential problems, like loading a wrong or forged external library, or running on a weak POSIX OS... but that is the fun of regulation. 😉 You follow the rules - even if they too are weak.
 

Perhaps we missed something, so your feedback is very welcome.
We would like to have our mormot.crypt.*.pas units audited in the future by some third party or government agency, in our EEC context, and especially under the French regulations.
The mORMot 1 SynCrypto and SynEcc units were already audited some years ago by the security experts of a $1B company - but it was an internal audit. And the security is even better with mORMot 2.

Please continue to look at the source, and if you see anything wrong or dubious, or see any incorrect comment, do not hesitate to come back!


35 minutes ago, Arnaud Bouchez said:

we always ensure that the whole DER/PEM binary matches at 100%

That is perfect; I did the same with SBB, as it was implemented to compare many things and went all the way to validate the chain, so its caching was paranoid. The thing that confused me a little in that binary compare is the declaration of x (TX509) and f (ICryptCert), so that compare could (in theory) be just for the key or something else; hence I preferred to present this to you instead.

 

One thing that is easy to miss: specified curves and implicit curves are allowed in PKIX and X.509 certificates, but such certificates are not allowed to be used with TLS 1.2 (and 1.3+).

https://datatracker.ietf.org/doc/html/rfc5480#section-2.1.1

That Section 2.1.1 is important and packs a few restrictions; you can check for these (faster than anyone else), since mORMot has a full X.509 parser now.
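As a sketch of such a check: RFC 5480 defines ECParameters as a CHOICE of which PKIX only permits the namedCurve OID, so a parser can reject the forbidden forms from the leading DER tag alone (hypothetical helper, not mORMot code):

```python
# Per RFC 5480 section 2.1.1, PKIX certificates MUST use the namedCurve
# CHOICE for ECParameters; specifiedCurve (a SEQUENCE) and implicitCurve
# (NULL) are forbidden.
DER_OID, DER_SEQUENCE, DER_NULL = 0x06, 0x30, 0x05

def check_ec_parameters(params: bytes) -> str:
    """Classify the ECParameters encoding from its first DER tag byte."""
    if not params:
        raise ValueError("missing ECParameters")
    tag = params[0]
    if tag == DER_OID:
        return "namedCurve"                     # the only form PKIX allows
    if tag == DER_SEQUENCE:
        raise ValueError("specifiedCurve is forbidden by RFC 5480")
    if tag == DER_NULL:
        raise ValueError("implicitCurve is forbidden by RFC 5480")
    raise ValueError("unrecognized ECParameters encoding")

# Example: the OID 1.2.840.10045.3.1.7 (prime256v1) encoded in DER
p256_oid = bytes([0x06, 0x08, 0x2A, 0x86, 0x48, 0xCE, 0x3D, 0x03, 0x01, 0x07])
print(check_ec_parameters(p256_oid))   # namedCurve
```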

 

Congratulations again !


Wow! You never cease to amaze us, ab!

Does it mean Indy can use it instead of OpenSSL for supporting the latest TLS/SSL standards?

Edited by Edwin Yip


@Edwin Yip
Not yet, the TLS layer is not yet available.
But I would not use Indy anyway; the mORMot direct client classes already allow both OpenSSL and SSPI, so you could use the latest TLS standard on the latest Windows revision. 😉

11 hours ago, Arnaud Bouchez said:

@Edwin Yip
Not yet, the TLS layer is not yet available.
But I would not use Indy anyway; the mORMot direct client classes already allow both OpenSSL and SSPI, so you could use the latest TLS standard on the latest Windows revision. 😉

"not yet available", does it mean there will be a TLS layer sometime in the future :classic_biggrin:?

For an HTTP client it's sensible to just use the clients in mORMot, but there are cases where you need to use Indy :)

14 hours ago, Arnaud Bouchez said:

@Edwin Yip Yes, we will try to make a TLS 1.3 layer at the beginning of this new 2024 year. 😉

What great news for the new year!

