Leaderboard


Popular Content

Showing content with the highest reputation on 12/10/23 in all areas

  1. Stefan Glienke

    random between a range

    RandomRange - just saying
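For reference, a minimal sketch of how `RandomRange` from `System.Math` is used - note that its upper bound is exclusive:

```pascal
uses
  System.Math; // provides RandomRange

var
  Value: Integer;
begin
  Randomize;
  // RandomRange's upper bound is exclusive: this yields 10..19
  Value := RandomRange(10, 20);
  // To include the upper bound, add 1: this yields 10..20
  Value := RandomRange(10, 21);
end.
```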
  2. Today, almost all computer security relies on asymmetric cryptography and X.509 certificates, as files or hardware modules. And the RSA algorithm is still used to sign the vast majority of those certificates. Even if there are better options (like ECC-256), RSA-2048 seems to be the de facto standard, and will at least remain allowed for a few more years. So we added pure pascal RSA cryptography and X.509 certificate support to mORMot 2. Last but not least, we also added Hardware Security Module support via the PKCS#11 standard. Until now, we were mostly relying on OpenSSL, but a native embedded solution is smaller in code size, better for reducing dependencies, and easier to work with (especially for HSM). The main idea is to offer only safe algorithms and methods, so that you can write reliable software even if you are no cryptography expert. 😉 More information in our blog article about this feature set, which is almost unique in Delphi (and FPC): https://blog.synopse.info/?post/2023/12/09/Native-X.509-and-RSA-Support
  3. aehimself

    Gitlab-ci & MSBUILD & Library path

    As an alternative, the path this file is attempted to be loaded from can be changed in Program Files\Embarcadero\Studio\xx\bin\Codegear.Common.Targets, inside the PropertyGroup tag. This is what we ended up using on Azure DevOps at work. Be careful, though: changing these to a static path will result in all users on the same machine using the very same environment.proj and/or envoptions.proj. This might or might not be what you want.
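As an illustration of the kind of override meant above - the property name and path below are hypothetical; check your own Codegear.Common.Targets for the actual property names, which differ between Delphi versions:

```xml
<!-- Illustrative only: property names vary per Delphi version. -->
<PropertyGroup>
  <!-- Point the build at a fixed, checked-in copy instead of the
       per-user %APPDATA% location. -->
  <EnvOptionsPath>C:\BuildAgent\Config\EnvOptions.proj</EnvOptionsPath>
</PropertyGroup>
```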
  4. Darian Miller

    Gitlab-ci & MSBUILD & Library path

    Ah - "Expected configuration file missing - C:\WINDOWS\system32\config\systemprofile\AppData\Roaming\Embarcadero\BDS\22.0\EnvOptions.proj" See my blog post for setting up Jenkins, it has the workaround: https://ideasawakened.com/post/getting-started-with-ci-cd-using-delphi-and-jenkins-on-windows "Copy the EnvOptions.proj file to the APPDATA folder of the user account which will execute the builds. (For example: C:\Users\JenkinsUserName\Roaming\Embarcadero\BDS\21.0) If you have custom paths for libraries and component packages, edit the DelphiLibraryPath for each target platform that you will use to match your build machine paths. You will get a warning message in your builds if this file is not found..."
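The copy step described above can be scripted on the build machine - the Delphi version (22.0 = Delphi 11) and target profile below are placeholders; adjust them to the account that actually runs the builds:

```bat
:: Hypothetical example: copy the current user's EnvOptions.proj to the
:: SYSTEM profile that a service-based build agent runs under.
copy "%APPDATA%\Embarcadero\BDS\22.0\EnvOptions.proj" ^
     "C:\Windows\System32\config\systemprofile\AppData\Roaming\Embarcadero\BDS\22.0\EnvOptions.proj"
```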
  5. Kas Ob.

    ANN: Native X.509, RSA and HSM Support for mORMot

    Also thank you for this link, it is nice reading.
  6. Chris Pim

    Delphi 11.3 issue with iOS Today Extension widgets

    The only solution we came up with for this was:
    1. Copy the fully signed extension from Xcode into the Delphi project (in our case deployed to the PlugIns folder).
    2. Do a full build and deploy with Delphi (which damages the .appex by re-signing it with the wrong entitlements file).
    3. Manually run a script which copies the original binary of the .appex over the top of the one Delphi re-signs in the .app generated in the scratch-dir, so it's correct again.
    4. The script then also has to re-run the iosinstall command to deploy the fixed app to the device.
    To help with step 3, I include the original binary from inside the .appex in the deployment twice: once into the PlugIns folder as expected, and a second time to ../ which puts a copy into the scratch-dir folder. By doing this, my script for stage 3 knows where to get the original file from, and it's always the pre-broken version. This works fine but has the manual step at the end, which is a pain. Delphi doesn't have a post-deploy stage in the project build options (like it does for post-build), which is a shame, as the "copy-back" step could just be included there if it did. With any luck, EMB will undo the "fix" that broke this in 11.3 so it isn't needed anymore. Maybe we should all vote for the issue to push it up their priority list.
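Steps 3 and 4 above could be sketched roughly like this - all paths and names here are hypothetical and depend on the project and PAServer setup; the exact iosinstall invocation should be copied from the Delphi build log rather than guessed:

```sh
# Hypothetical paths: adjust SCRATCH, the app name and the .appex name
# to your own project and scratch-dir layout.
SCRATCH="$HOME/PAServer/scratch-dir/MyMac-MyProject"
APP="$SCRATCH/MyApp.app"
EXT="$APP/PlugIns/TodayWidget.appex"

# Step 3: restore the original, correctly signed binary over the one
# Delphi re-signed with the wrong entitlements file.
cp "$SCRATCH/TodayWidget" "$EXT/TodayWidget"

# Step 4: redeploy the fixed bundle to the device by re-running the
# same iosinstall command the IDE used (copy it from the build output).
```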
  7. Arnaud Bouchez

    ANN: Native X.509, RSA and HSM Support for mORMot

    @Kas Ob. Thanks a lot for the very interesting feedback. Some remarks, in no particular order:
    As written in the blog article, we tried to follow the FIPS specs, used the Mbed TLS source as reference, and also the HAC document. Perhaps not at 100%, but at least for the best-known corner cases. RSA key generation follows all those patterns, so it uses well-proven methods to generate primes, and we fixed 65537 as the exponent. Even the random source is the OS (which is likely to be trusted), then XORed with RdRand() on Intel/AMD, then XORed with our own CSPRNG (which is AES-based, seeded from an extensive set of entropy sources). XORing a random source with other sources is a common and safe practice to ensure there is no weakness or predictability. We thereby tried to avoid weaknesses like https://ieeexplore.ieee.org/document/9014350 - see the TBigInt.FillPrime method.
    The "certificate cache" is a cache of the raw DER binary: the same ICryptCert instance will be reused for the very same DER/PEM input. Sounds safe. In the code you screenshot, there is an x.Compare() call which actually ensures that the supplied certificate DER/PEM binary matches the one we have in the trusted list as SKI.
    If the application is using our high-level ICryptCert abstract interface, only safe levels will be available (e.g. a minimum of RSA-2048 with SHA-256, or ECC-256 with SHA-256, and proven AES modes): it was meant to be as error-resistant as possible for the end user. You just can't sign a certificate from an MD5 or SHA-1 hash, or encrypt with RC4, DES or AES-ECB. Note that our plan is to implement TLS 1.3 (and only version 1.3) in the near future, to mitigate MITM attacks during the TLS handshake even further (because all traffic is signed and verified).
    To summarize: if we use some cache or search within the internal lists, we always ensure that the whole DER/PEM binary matches at 100%, not only some fields. We don't even use fingerprints, but every byte of the content. So attacks from forged certificates with only partially matching fields should be avoided. Of course, in real life, if a company needs its application to fulfill some high level of requirements, you may just use OpenSSL or any other library which fits what is needed - with some other potential problems, like loading a wrong or forged external library, or running on a weak POSIX OS... but that is the fun of regulation. 😉 You follow the rules - even if they too are weak.
    Perhaps we missed something, so your feedback is very welcome. We would like to have our mormot.crypt.*.pas units audited in the future, by some third party or government agency, in our EEC context, and especially under French regulations. The mORMot 1 SynCrypto and SynEcc units were already audited some years ago by the security experts of a $1B company - but it was an internal audit. And security is even better with mORMot 2. Please continue to look at the source, and if you see anything wrong or dubious, or any incorrect comment, do not hesitate to come back!
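The XOR-combination of entropy sources described above can be sketched as follows - GetOsRandom, TryRdRand and AesPrngFill are hypothetical stand-ins for the OS entropy source, the CPU RdRand instruction and an AES-based CSPRNG, not the actual mORMot API:

```pascal
// Sketch only: the three source routines are hypothetical helpers.
procedure FillCombinedRandom(var Buf: array of Byte);
var
  Tmp: array of Byte;
  I: Integer;
begin
  SetLength(Tmp, Length(Buf));
  GetOsRandom(Buf);            // 1. trusted OS source
  if TryRdRand(Tmp) then       // 2. XOR with hardware RNG, if available
    for I := 0 to High(Buf) do
      Buf[I] := Buf[I] xor Tmp[I];
  AesPrngFill(Tmp);            // 3. XOR with an AES-based CSPRNG
  for I := 0 to High(Buf) do
    Buf[I] := Buf[I] xor Tmp[I];
end;
```

The design rationale: XORing an unpredictable source with any other data cannot make the output more predictable, so the combined stream is at least as strong as the strongest individual source.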
  8. Kas Ob.

    ANN: Native X.509, RSA and HSM Support for mORMot

    @Arnaud Bouchez That is awesome indeed, really nice work! I have one concern about this (from https://blog.synopse.info/?post/2023/12/09/Native-X.509-and-RSA-Support): "maintaining a cache of ICryptCert instances, which makes a huge performance benefit in the context of a PKI (e.g. you don't need to parse the X.509 binary, or verify the chain of trust each time)". This made me try to follow the implementation, which is not easy, so I want to explain, from my past encounters with such a possible weak point; I believe you are the one to check whether the implementation is prone to such an attack, as you have the better understanding of the internal workflow. The code in question is this (well, this is mainly what caught my eye, but it is not limited to that; again, it is you who should check and decide).
    The attack I will try to explain in detail here was a real vulnerability in Chrome and Windows and even in OpenSSL, and concerns caching the validation. This sounds silly, yet the attack was at the CA of the certificate, not the certificate itself, which allows a MITM to replace the certificate in real time. So the scenario goes like this:
    1) The client initiates the handshake with its ClientHello.
    2) The MITM passes it to the server unchanged.
    3) The server responds as usual, including the certificate in its response. Here, by the TLS and SSL references, the suggested behavior is to send at least one CA with the certificate; recommended best practice is to send the full chain excluding the root. This might not be relevant, as all we need is for the MITM to have the CA - even if the server didn't send the CA, it might be publicly known (like Let's Encrypt...). Anyway, the MITM passes these records/packets to the client without touching them.
    4) The client receives the secure, untouched traffic and validates the certificate and its CA (this is the important part): the CA is verified, trusted, and most likely cached!
    5) The client proceeds with the handshake procedure. Here the MITM cuts the traffic and drops the connection, forcing the client to re-establish the connection with a new handshake; as the TLS resumption ticket or session was not confirmed, this will not help the client/server connection.
    6) The MITM passes the traffic for this new ClientHello to the server untouched, or can start its own process impersonating the server; for just recording/watching, it must start a connection of its own to the server, and that can easily be done.
    7) As a response to the client, the MITM forges a new fake CA - and here is the attack part. This new CA has parts identical to the real CA (some only, and it could be very easy indeed); the MITM then issues a new certificate signed by this FakeCA. The client will validate the certificate in every respect and it will pass, except for the CA, logically. But because of the cache, the way it is accessed, and the short time between validating the real CA and checking against the FakeCA, it might pass - hence the FakeCert will be valid for the client.
    The attack is really about attacking the CA and abusing the caching mechanism. So, regarding caching and finding the CA in the cache, and how to fake it: if the cache has a bValid boolean (maybe with a time of the last validation check), this attack is possible. How is it possible? Some implementations find entries by fingerprint, or by public key (SubjectKey). (On a side note, faking the public key of an EC certificate is easier than you can imagine: unless a named curve is exclusively used, you can choose the same curve with a specific generator point (Gx and Gy) to produce any public key you need, hence making the FakeCA public key match the real CA public key...)
    So, the point is: is caching CAs prone to similar attacks? Because it would defeat and override all the checking of the server certificate itself.
    Hope that was clear; if there are questions, please don't hesitate to ask, I would love to explain. (Side note: that code is not easy to track, hence I wanted to explain, and I trust you can find the weak point if there is any.) Additional source for an ECC faking scenario similar to the one mentioned above: https://research.kudelskisecurity.com/2020/01/15/cve-2020-0601-the-chainoffools-attack-explained-with-poc/ I can't find more resources searching now, but I remember similar cases in WolfSSL and OpenSSL (multiple times); in fact it has a long history of such issues. Anyway, you are more than qualified and equipped to check such cases; this post is merely food for thought, or a reminder.
    PS: many RSA implementations miss rare cases and allow such manipulation, like allowing/processing non-primes, or 1 as the exponent, allowing the public key to be faked...
  9. Arnaud Bouchez

    FireDAC Alternative

    Try Zeos - it is Open Source, with very good support. And if you need direct DB access, you can use its ZDBC API, which bypasses the TDataSet component and so is faster, e.g. for a SELECT returning a few rows.
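A rough sketch of what going through the ZDBC layer looks like - written from memory, so the unit and method names should be double-checked against the Zeos documentation for your version, and the connection URL is a placeholder:

```pascal
uses
  ZDbcIntfs; // Zeos ZDBC interface layer (name per Zeos 7+/8, verify locally)

procedure DumpNames;
var
  Conn: IZConnection;
  Stmt: IZStatement;
  Rs: IZResultSet;
begin
  // Interfaces are reference-counted, so no explicit Free is needed.
  Conn := DriverManager.GetConnectionWithLogin(
    'zdbc:postgresql://localhost:5432/mydb', 'user', 'pass');
  Stmt := Conn.CreateStatement;
  Rs := Stmt.ExecuteQuery('SELECT name FROM customers');
  while Rs.Next do
    Writeln(Rs.GetStringByName('name')); // no TDataSet buffering involved
end;
```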
  10. Allen@Grijjy

    Delphi 11.3 issue with iOS Today Extension widgets

    Thanks to Chris then! I wonder if we could unbundle the package during the pre-link stage, copy the original .appex over the top of the Delphi one after Delphi completes its codesign step, and then rebundle it before it's deployed? I imagine you already tried that, though... I'll have to give this some thought.
  11. Dave Nottage

    Delphi 11.3 issue with iOS Today Extension widgets

    The problem was introduced in 11.3 - a "bug" was reported that I strongly suspect was not even a bug, and "fixing" it actually broke the process. You can thank @Chris Pim for it - and it's because of him (and your article, thanks!) that I'm finally on track to making customized notifications on iOS work (though as far as user experience goes, it's pretty sucky compared to Android).
  12. David Heffernan

    random between a range

    value2, value3 and value4 all have an off-by-one error.
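The off-by-one meant here is the classic one with `Random` - the variable names and bounds below are illustrative, not the article's actual code:

```pascal
var
  Value: Integer;
begin
  Randomize;
  // Wrong: Random(N) returns 0..N-1, so this can never produce 20.
  Value := 10 + Random(20 - 10);      // yields 10..19
  // Right: add 1 so the upper bound is included.
  Value := 10 + Random(20 - 10 + 1);  // yields 10..20
end.
```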
  13. I don't know what to think after reading that article. Here are my comments on it:
    - The classic way of truncating the last 2 digits with div and mod 10 (or 100) does not involve a costly div or mod instruction on modern compilers (*cough* even Delphi 12 now does it - apart from the bugs that came with it).
    - I think C++ compilers would detect the consecutive div and mod and further optimize the emitted code, so it would not require the "workaround" that the Delphi RTL uses of calculating the modulo by subtracting the div result times 100 from the original value.
    - The pseudo-code he shows for detecting the number of digits is correct, but it is never what actually gets executed: you either rewrite it into a few branches (as you can see in the RTL), or a C++ compiler might unroll the loop, or some other trickery is applied.
    The DivBy100 function was introduced by me in RSP-36119, and I have already notified them that DivBy100 can be removed in 12 because the compiler now properly optimizes a div by 100 - however, that affects performance only by about 0.5% or so. As David correctly pointed out, the real bottleneck is the heap allocation - and not only a single one when you just turn an integer into a string and display it, but many when you concat strings and numbers the "classic" way, because then it produces a ton of small temporary strings. That issue even exists with TStringBuilder, which one might think was built for optimization. If you look into some overloads of Append, you will see that it naively calls into IntToStr and passes the result down to the overload that takes a string. This is completely insane, as the conversion should be done directly in place in the internal buffer that TStringBuilder already uses, instead of creating a temporary string, converting the integer into it, and passing that to Append to copy its content into the buffer. This will likely be my next contribution as part of my "Better RTL" series of JIRA entries.
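The div/mod workaround mentioned above can be sketched as follows: one division, then a multiply-and-subtract to recover the remainder, instead of separate div and mod instructions (modern compilers turn the division itself into a multiply-by-reciprocal):

```pascal
procedure SplitBy100(Value: Cardinal; out Quotient, Remainder: Cardinal);
begin
  // One division; the remainder comes from a cheap multiply-and-subtract
  // rather than a second div/mod instruction.
  Quotient := Value div 100;
  Remainder := Value - Quotient * 100;
end;

// Example: SplitBy100(12345, Q, R) gives Q = 123, R = 45.
```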