Kas Ob.
Members
Content Count: 464
Days Won: 8
Everything posted by Kas Ob.
-
Code signing certificates have become so expensive...
Kas Ob. replied to RaelB's topic in Delphi Third-Party
Well, this is easy and hard to explain at the same time: cryptographically it is a very simple process, and there are many standardized schemes for shipping keys/data in encrypted form, but it is harder to explain in plain human language. So I will give an example of how this is done.

Take how a secure connection is established between your browser and this site. The client (browser) and server negotiate a shared key; by shared key I mean they use a specific algorithm to arrive at the same key without ever shipping (sending) it. Older algorithms were much simpler: the key wasn't negotiated but generated on one side (client or server) and shipped in encrypted form. This is called key encapsulation: the client and/or server generates its own key and sends it to the other party. This was, and still is, secure. Why? Because the client, after receiving the server certificate, uses the server's public key to encrypt its chosen key (i.e. encapsulates it) and sends it; only the server, or any party holding the private key, can extract that key.

Now, what if we replace client and server with hardware tokens, and put the private and public keys in the hardware already? This makes shipping a private key (or any payload) between tokens secure: only someone who knows the already-stored private key (which should be no one apart from whoever stored it) can extract those keys. And that is how a token can be copied or duplicated.

On a side note, most secure and modern post-quantum cryptography algorithms are incapable of key agreement and only do key encapsulation!
-
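A toy sketch of the encapsulation idea described above, using textbook RSA with tiny primes. This is illustration only, never secure, and all the numbers are made up:

```python
# Toy RSA key encapsulation: the "client" encrypts a chosen secret key
# with the "server" public key; only the private key can recover it.
# Textbook RSA with tiny primes -- for illustration only, never for real use.

p, q = 61, 53            # server's secret primes (hypothetical values)
n = p * q                # public modulus (3233)
e = 17                   # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

secret_key = 42          # the symmetric key the client chose

# Encapsulate: encrypt the key with the public key (n, e)
capsule = pow(secret_key, e, n)

# Decapsulate: only the holder of d can recover the key
recovered = pow(capsule, d, n)

print(recovered)  # -> 42
```

Real schemes add randomized padding (e.g. OAEP) before the modular exponentiation; the flow, though, is exactly the encapsulate/decapsulate exchange described above.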
I have a few:
1) Get familiar with EurekaLog, whether you have the sources or not. With the source you will understand faster and better how things are done, but in any case spend a few hours with its demos and the documentation, keeping in mind that EL comes with default behavior but you are in no way obliged to follow it.
2) Use the real gem in EL: the customization. Yes, you can customize everything, from error handling to generating the reports.
3) Don't go for a fast kill by replacing all the existing error handling with EL; in other words, don't shock your own source code while generating bugs unintentionally. Keep the existing error handling for now and ship/build the EL report alongside it. Have a look here: https://www.eurekalog.com/help/eurekalog/how_to_add_information_to_bug_report.php ; you can ship the existing logs and error report as fields or as attachments.
4) I don't like the EL components, not because they are bad, but because by design these components are initialized after/while a form is created, and that could be too late. Use events and the other EL APIs from its SDK. Examples of what is important to browse and use:
https://www.eurekalog.com/help/eurekalog/index.php?how_to_register_event_handler.php
https://www.eurekalog.com/help/eurekalog/index.php?index_eevents_routines.php
Lastly, don't let its new API design and functionality overwhelm you. Take your time (and this is important) to test each API in a test project before adding it to your production application; some functionality needs understanding and tweaking.
ps: I had the chance to fix a project, a big one with its own error handling and reporting, same as your case. I introduced EL gradually, combining both error reporting mechanisms and removing the existing (old) error handling, only a few every few days, providing a debug version of the application with EL capturing all handled exceptions, then disabling the ones that were solved, using my own filter in code (using the events). Sometimes the sheer amount of handled or silent exceptions is in the millions, due to stuff like integer overflow! But in the end even those disappeared and the project became exception free, except for the exceptions intended by design, which weren't worth changing the application functionality to remove. Hope that helps.
-
Decrypt a string with public key using openssl in VCL app
Kas Ob. replied to saeiddavoody's topic in VCL
It is doable, but you need to understand what/which encryption scheme is used on the PHP side that uses OpenSSL, whether it is a standardized one or not, and then replicate that in the Delphi application. One thing though: the most crucial thing with RSA encryption is the padding! So I suggest going with TMS, but before buying, put your question to their Sales/Technical support. Just make sure to give them full and detailed information, unlike your question above, such as what PHP code/library is doing the encryption, so someone can identify the scheme/algorithm for you. To get an idea of some of the standards, look at this page: https://www.cryptosys.net/pki/manpki/pki_rsaschemes.html Yet any code (like PHP) could be using its own implementation, in which case you need a custom decryption method, and of course this includes the padding. -
@dummzeuch Can't test it right now, as it needs more code to make it work, but don't ever use select in that way. In general it all looks fine and correct, but it ignores socket errors: not the ones returned by the API calls, but the ones from "exceptfds". Without checking for socket-level errors you are driving in the dark; if an error is signaled on the socket, then read will hang forever! Add exceptfds, and then check for errors before read or write, as in many cases both error and read/write might be triggered and reported at the same time. In your case this is most likely what happened: an acknowledgment to close was received, and both readfds and exceptfds had been signaled.
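A minimal sketch of the advice above in Python (the same Winsock/BSD select semantics apply): always pass a non-empty exceptfds and test it before reading. The loopback pair here stands in for a real client/server connection.

```python
# select() with the exception set checked: test xlist BEFORE reading,
# as recommended above, so an error on the socket cannot hang recv().
import select
import socket

# Loopback pair standing in for a real client/server connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
client = socket.create_connection(server.getsockname())
conn, _ = server.accept()

conn.sendall(b"hello")

# readfds, writefds, exceptfds -- the third list is the one the post
# warns about; check it FIRST, before touching readfds.
rlist, _, xlist = select.select([client], [], [client], 5.0)

if xlist:
    # Exceptional condition pending; query the socket-level error
    # instead of blocking in recv().
    err = client.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
    print("socket error:", err)
elif rlist:
    data = client.recv(1024)
    print(data)  # -> b'hello'

client.close(); conn.close(); server.close()
```

Note that a socket can appear in both rlist and xlist at once, which is exactly the "both signaled at the same time" case described in the post.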
-
I am chipping in. I don't have a CPU with E-cores and P-cores, but I believe I have a clear picture of them, and I will try to explain and clear up a few misconceptions about CPU cores in general, HT (also called SMT), and the new E-core/P-core technology. (Too long to read? Just skip to the last few lines.)

We must start at the beginning: how does multithreading work? In other words, how does the OS simulate and provide multitasking? This is essential to understand. As many know, the running thread on the CPU (the CPU processor) must be stopped, its context switched, and then execution continues from the new point; how low-level interrupts are involved is irrelevant here. One can imagine a hardware timer in the CPU itself stopping the current process (from executing in virtual mode) and jumping to a specific address in protected mode. This makes the thread a kernel one; this thread runs in the context of the OS scheduler, picks a new context, then continues. Hence it appears that the OS provides multitasking by slicing the running time between multiple contexts. These contexts could number in the hundreds or thousands, and they have nothing to do with the CPU or any hardware; they are agnostic to the CPU, so the limit is only the OS resources needed to save contexts.

A thread context is a resource which, from the OS point of view, is very small and limited: it is essentially the current CPU registers (general and others, including the execution pointer, stack pointer, debug registers, etc.). These are a must for the thread to continue running under multitasking. The OS also needs its own identifiers and resources, but again this is very small, one memory page (4 KB) storing the context needed by the hardware (registers and whatnot) and OS-specific data like the thread's TLS and whether the context is a user or kernel one...
That, in a very simple way, is enough to understand how an OS with a one-core CPU (one running physical processor) can provide multitasking and multithreading. The operation of switching the context is, well, called context switching... duh! And it happens in software, in the OS kernel scheduler.

Now with multiple physical cores, the above stays the same: multiple cores can and will do exactly what one core does, only the kernel scheduler becomes a little more complex (aware), adjusting when and where these threads continue to run. For example, if a heavy process thread is running with higher priority and taking its sweet time, the scheduler might refuse to perform a context switch and let the thread continue its work, or decide to pick a thread that was previously running on the same core instead of making a salad of arbitrary switches. There is much more to write here, but this is enough for now.

Then at some point a CPU manufacturer had an idea: what if we did the switching of the context instead of the OS kernel? We can simply build our own hardware context and simulate 2 threads while doing nothing at all except storing the hardware context of the running CPU. And thus they created HT (hyper-threading). It doesn't provide anything other than performing hardware context switching on its own, and in theory they could make an HT3 version simulating 3 threads (3 logical cores from one physical), or even 128 logical ones; the CPU just needs extra space for storing the contexts, and that is it. So it is cheating, or faking.

Now, how useful is HT, and how much performance does it gain? The answer is: it depends. Mostly yes, it is faster, but it can be slower, see...
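The software context switching described above can be felt even from user code. A rough sketch in Python, forcing the OS scheduler to bounce between two threads and counting the round trips (the timing is entirely machine-dependent):

```python
# Two threads ping-pong via events; every round trip forces the OS
# scheduler to perform (at least) two context switches.
import threading
import time

ROUNDS = 10_000
ping, pong = threading.Event(), threading.Event()
counter = 0

def player_a():
    global counter
    for _ in range(ROUNDS):
        ping.wait(); ping.clear()
        counter += 1
        pong.set()

def player_b():
    for _ in range(ROUNDS):
        pong.wait(); pong.clear()
        ping.set()

a = threading.Thread(target=player_a)
b = threading.Thread(target=player_b)
start = time.perf_counter()
a.start(); b.start()
ping.set()                     # kick off the first round
a.join(); b.join()
elapsed = time.perf_counter() - start

print(counter)                 # -> 10000
print(f"{elapsed / (2 * ROUNDS) * 1e6:.1f} us per switch (approx)")
```

The microseconds-per-switch figure is only a crude upper bound (it includes Python overhead), but it makes the cost of the kernel scheduler's work visible.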
My Windows 10 says there are ~1460 running threads at this moment of writing. My CPU is an i5-2500K, no HT! My HT i7-2600K burned out in March, so I replaced it, along with the motherboard, with a used one at $35, and I am satisfied with this, as the change didn't make a dramatic or even noticeable performance difference for me. Anyway, the OS decides when and where to switch the context to, and almost all of these ~1500 threads are doing nothing, just waiting on some hardware interrupt, or on signals in general, software ones.

When does HT perform slower? When there is very intensive execution, the OS can do better than the hardware in the context of context switching, because it knows whom to give the bigger execution time slice; here the hardware switching fools the OS and wastes time. There are many resources on the internet asking and explaining how and when HT is slower or faster. So, to cut the above short: HT is a cheap trick by the CPU to emulate another core, very helpful given that a modern OS leaves the CPU idle well over 90% of the time.

Now to our main subject, E-cores and P-cores. Intel wanted to break this stall, as we reached the peak of CPU speed with Sandy Bridge, or maybe Haswell. They invented and added many tricks to enhance performance, but they weren't enhancing the performance of low-level instruction execution; all the tricks were about the end of the execution, the result, the visible change, and they did a great job, mostly in decreasing power usage and improving throughput. Yet there was still the problem of HT and its impact, as HT can be a hindrance at any time. So they implemented and presented a new standard (technology) to involve the OS, while adding new simplified cores, physical ones. These don't have the full potential of the latest Intel technology and tricks! These could be the ones running the threads that are just waiting; of course this needs the OS to signal or mark the current context as... like... nah...
this one is performing Sleep(t), or this one is blocking on another thread, so it gets lower processing priority. We are talking about the E-core (efficiency core): it is a full CPU with full capability, BUT it doesn't have the full package of Intel tricks, like out-of-order execution with its full window length, or it lacks out-of-order execution entirely, or the full dedicated L1 cache size of a P-core. I don't know for sure what has been decreased or removed, to be honest, and a quick search didn't help much, but you get the idea. Most articles draw them as physically smaller, as in the usual diagrams; notice the size, and the E-cores' shared LLC (Last Level Cache).

Now, after all of the above, comes how to utilize the CPU in full with the existence of P-cores and E-cores. I can't answer that either, as I don't have physical access to one, and answering it is a little harder than it seems; this article, for example, is deep and great: https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity

A few extra thoughts:
1) While my late, deceased HT CPU was with me for around 13 years, a big chunk of its life was spent running with HT disabled. Yes, if you are going to optimize and do the clocking right, then HT will only corrupt your results.
2) To be honest, disabling HT didn't impact my PC experience at all.
3) If you ever need to use fibers, or you are building an application with intensive CPU usage, you really should disable HT, as it will only throw sticks in the wheels of your application or server.
4) While the Intel tricks are useful in general, some of them are expensive and not worth it at all, just a waste of money.
5) The biggest factor everyone should consider when choosing a CPU is power consumption, as it determines how and when the CPU will throttle. The lower the CPU's power needs, the faster and longer it will sustain intensive processing. And obviously, if you don't need the intensive usage, then don't buy the most expensive one. It is simple logic if you think about it: paying top dollar for a feature that may save me a few minutes per week is a waste of money; if you calculate that the saving will be hours per week, then it is a valid argument.
6) In general, I wouldn't expect E-cores to be very useful for doing work, as they are designed and implemented as second-degree performers and should be dedicated to waiting for input. Many of the resources I read say they really underperform when it comes to memory operations, i.e. performing like cores from a decade or more ago. So utilizing them at 100% could be a mistake, as all they will contribute is more heat by consuming more power, affecting the other cores and causing throttling.
7) About throttling: while yes, throttling decreases the power and the clock, it doesn't mean the newer CPUs will shut down the E-cores. In theory it would be easier and more efficient to shut down a core, but hey, there can't be a full shutdown; instead the core runs at the lowest clock possible with the lowest power possible. This might be what is going on, and this is my personal opinion: modern CPUs should start to throttle cores in an interleaved way, in the hope of dissipating the heat in half steps instead of lowering them all. But, again in my opinion, not least it would be easier and more logical for efficiency to throttle E-cores before P-cores.
Lastly, for the subject of this thread and Jud's question: you need to monitor your CPU at the lowest level you can. Share with us a screenshot of Process Monitor while the application is under the full load where you expect it to utilize 100%; only then can we say something. Looking at Task Manager's overall report across all the cores (logical and physical) is uninformative, and the wrong way to find the shortcoming. To be honest, I haven't yet seen a screenshot of these mixed cores as reported by monitoring tools like Process Monitor or even CPU-Z... Hope you find this informative and helpful.
-
Yes, I got that, and in fact this is visible when you verify; you can get the full hash table like this. But my question is: why and how is this useful? While I found out how to build it manually and also how to verify it, my question is how is this a useful thing? What is the point of hashing each page alone when the signature depends only on the hash of the file in full? Meaning: if a page has a different hash, that page has been tampered with, but this also changes the file hash and renders the signature invalid. I asked if someone has seen it in the wild being requested or needed. My assumption is that this feature could be a futuristic feature that got dropped, or it might still be active and hidden (undocumented) by Microsoft, to validate page integrity for an already loaded binary; in other words, to make it easier and faster for, let's say, Windows Defender to validate specific pages instead of continually recalculating the whole memory layout of the loaded file. Also, while not all pages of the file will be loaded in memory, this could give the ability to check protected pages of a loaded binary if the protection has been changed, hence it can be way faster to detect tampering while removing the need to pre-calculate these hashes at load time. But then again, DEP should prevent this, unless it is a feature to allow patching memory pages in a safe manner (via an API), yet not documented. Anyway... that is too much ranting.
-
Help needed in testing emulated GetTickCount32/64
Kas Ob. posted a topic in Algorithms, Data Structures and Class Design
Hi,

This version of GetTickCount works in both flavors (32-bit and 64-bit) on all 64-bit Windows versions; the result is identical to both APIs. The 32-bit version of it will also work on all 32-bit Windows versions. The problem is I don't have access to a 32-bit Windows that has been running for more than 49.7 days to test the 64-bit result.

And here comes the request for help: if anyone has access to such an OS, like Windows XP or Server 2003, that has been running for that long, it would be a great help to test the result from the 64-bit form of this emulated API and confirm it returns a true value above the 32-bit range.

function GetTickCount64Emu: UInt64;
const
  KUSER_BASE_ADDRESS = $7FFE0000;
  //KUSER_BASE_ADDRESS_KERNEL_MODE = $FFFFF78000000000; // in case it is needed
begin
  Result := (PUInt64(KUSER_BASE_ADDRESS + $320)^ * PCardinal(KUSER_BASE_ADDRESS + $4)^) shr 24;
end;

Rename as you wish, and adjust as you see fit, like making the result Cardinal to replace GetTickCount, or keep it as UInt64; it doesn't matter. And keep in mind it does exactly what GetTickCount and GetTickCount64 do, as I decompiled this snippet from the OS. Also, this address is fixed across Windows versions and belongs to a read-only protected page. In the past, well, the far past, Microsoft's DDK documentation suggested using the same algorithm for game engines, to ditch calling the API for timing and squeeze out a few extra cycles, so even if this approach is not documented now, it was, and it will stay accessible.

The operation of multiplication and shifting is, as it seems, a simple division, but in fixed point, hence the bit shifting: per Microsoft documentation the Windows OS kernel is prohibited from using floating-point operations and only fixed point is allowed, with the point at the 24th bit.
If someone has insight about this, then please share, and if someone can confirm a result over 32 bits on an old or new 32-bit system that was, and is, running for over 49.7 days, it would be great to share the result. -
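A quick sketch of the fixed-point math in that snippet. The real multiplier lives in KUSER_SHARED_DATA and varies with the timer period; the 15.625 ms per tick used here is an assumption for illustration:

```python
# The emulated GetTickCount computes (TickCountQuad * Multiplier) >> 24,
# i.e. a multiplication by an 8.24 fixed-point "milliseconds per tick".
# 15.625 ms per tick is an assumed value, used only for this demo.
MULTIPLIER = int(15.625 * (1 << 24))   # 8.24 fixed point -> 262144000

def tick_count_ms(tick_count_quad: int) -> int:
    """Milliseconds of uptime from a raw tick counter, in pure integers."""
    return (tick_count_quad * MULTIPLIER) >> 24

print(tick_count_ms(64))      # 64 ticks * 15.625 ms -> 1000
print(tick_count_ms(0))       # -> 0
```

This is exactly why the kernel can stay floating-point free: the fractional milliseconds live in the low 24 bits and are discarded by the shift.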
Help needed in testing emulated GetTickCount32/64
Kas Ob. replied to Kas Ob.'s topic in Algorithms, Data Structures and Class Design
To make it clearer: your result is 5283539281 in decimal, which is 13AEC6951 in hex, meaning it did not wrap (overflow) and returned exactly what GetTickCount64 should have returned. -
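The wrap check above can be reproduced in a couple of lines:

```python
# A value above 2**32 proves the 64-bit counter did not wrap at 49.7 days.
result = 5283539281
print(hex(result))          # -> 0x13aec6951
print(result > 2**32)       # -> True (past the 32-bit wrap point)
print(result / 86_400_000)  # milliseconds -> days of uptime, roughly 61
```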
Help needed in testing emulated GetTickCount32/64
Kas Ob. replied to Kas Ob.'s topic in Algorithms, Data Structures and Class Design
Brilliant, thank you very, very much! This test is very hard to wait for and perform, and you did it; your sharing is really appreciated. -
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
@DelphiUdIT On a side note (after seeing you mention "mov ax, source" and "mov dest, ax"): if you are going to open a ticket, then report the hidden performance killer. It strikes randomly, due to the simplicity of the compiler and its use of specific 16-bit operand instructions. From "3.4.2.3 Length-Changing Prefixes (LCP)": https://www.intel.com/content/www/us/en/content-details/671488/intel-64-and-ia-32-architectures-optimization-reference-manual-volume-1.html

Most Delphi string operations do use 16-bit loads and stores. The LCP effect occurs when such an instruction is aligned (or positioned) at a specific address (13h or 14h); repeating and reproducing it is hard, as it may stay hidden until some change in a completely different part of the project triggers a small shift. Also, very importantly, these instructions are generated by the Delphi compiler without any consideration for such alignment. So what happens in real life, when you benchmark, is that it keeps shifting up and down, differing from version to version and from build to build. What we witness looks like a steady result, but in fact the bottlenecks have shifted from place to place. Fixing this would ensure better and more consistent performance in string handling in any Delphi-produced binary.

I believe this was the case not long ago, when I suggested replacing a 16-bit field fill with 32-bit and letting it overflow; the suggestion looked outlandish, unless that instruction hit a specific address and then caused a 6 or 11 cycle stall (full pipeline stop), and it looks like that was the case, as Vincent observed. -
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
If it could be done better, it would have been. (Not sure if that is a correct English phrase.) Anyway, you need to look at the hardware implementation as basic building blocks: you can't invent an instruction or algorithm easily; you either need huge hardware circuits, which will cost more delay and other problems, or you utilize already-existing uops! So here is a more detailed look at MOVSQ: https://uops.info/html-lat/AMT/MOVSQ-Measurements.html It shows that 17 or 18 micro-ops are used for one MOVSQ, and that is huge latency. -
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
The compiler is generating a very bad instruction; even calling the RTL Move would be faster than using MOVSQ!! https://uops.info/html-instr/MOVSQ.html 4 cycles in theory, but in reality it is 7-8 cycles on my CPU! Suggestion: find these and manually move/copy the fields, especially in tight loops. -
I think the best way is to have a singleton that is initialized very early in the project (preferably right after the MM and before the forms). This singleton would hook TComponent.Create and Destroy, grab the component's address (value/pointer), and add it to an internally managed list. This list associates each component with extra data; the extra data could be a string, integer, record... even classes. This solves/removes the need to manually manage this extra data (e.g. a string), and also ensures memory integrity for the extra data without interfering with the associated component in any way. On top of that, helper(s) for specific components can be used for easier, simplified use.
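An analogous sketch of the idea in Python: hooking Create/Destroy is replaced here with weakref finalizers, and all the names (`ExtraDataRegistry`, `Widget`) are made up for illustration:

```python
# A registry that attaches extra data to objects without touching them,
# and cleans itself up when the object is destroyed (the role played by
# the Destroy hook in the description above).
import weakref

class ExtraDataRegistry:
    def __init__(self):
        self._data = {}                      # id(obj) -> extra data

    def attach(self, obj, data):
        self._data[id(obj)] = data
        # Mirror the Destroy hook: drop the entry when obj dies.
        weakref.finalize(obj, self._data.pop, id(obj), None)

    def lookup(self, obj):
        return self._data.get(id(obj))

class Widget:                                # stand-in for TComponent
    pass

registry = ExtraDataRegistry()
w = Widget()
registry.attach(w, {"caption_history": ["old", "new"]})
print(registry.lookup(w))    # -> {'caption_history': ['old', 'new']}

del w                        # object destroyed; registry entry removed
print(registry._data)        # -> {}
```

The key property, as in the post, is that the extra data lives outside the tracked object, so the object itself is never modified and its lifetime automatically governs the data's lifetime.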
-
From the latest RFC: https://www.rfc-editor.org/rfc/rfc8259#section-2 Which wasn't the case in the older RFC(s), and even now it is still not defined concretely. Protocols that digitally sign payloads, like ACME, require no spaces, as the receiver should be able to parse the payload and still regenerate the exact same bytes afterwards, to match the hash.
-
In addition to Remy's points, I want to focus on this: your JSON is not standard, and many parsers might see it as broken. JSON should be in compact form and doesn't/shouldn't have spaces.
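For the compact form discussed above, a small sketch of producing whitespace-free JSON, the kind hashed/signed payloads expect:

```python
import json

payload = {"protected": "eyJhbGciOiJSUzI1NiJ9", "signature": "abc"}

# Default dumps() inserts a space after ':' and ',' ...
loose = json.dumps(payload)
# ... while custom separators give the compact, space-free form.
compact = json.dumps(payload, separators=(",", ":"))

print(loose)    # -> {"protected": "eyJhbGciOiJSUzI1NiJ9", "signature": "abc"}
print(compact)  # -> {"protected":"eyJhbGciOiJSUzI1NiJ9","signature":"abc"}
```

Both forms parse to the same value; only the compact one is byte-stable enough to hash and sign.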
-
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
If only it were that simple. And NO, that will not work; you are misunderstanding how encryption works, and don't be offended, this is not for everyone.

First, you have to read a little about how encryption modes work: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation

Second, you must understand and never forget one essential element in encryption: the internal state (the context) advances with every encryption step performed, meaning... well, this gets a little complicated here...
1) If the data length (in whole) is less than one block, then the output is similar to CTR with IV=0. CTR doesn't have an IV (to be exact) but has a counter|nonce working as an IV, but as it is 0 for the first block, it matches this case with DevArt's broken implementation.
2) If the data is longer than 8 bytes, then for all full blocks (n*8) it is pure CBC, and it is correct.
3) Here comes the rest: if there are leftover bytes, meaning [1..7] bytes when the length of the data (the original length) was more than 8, then nothing out of the box can help you or decrypt it. Why? Because the internal state at this last block depends on the previous one, and this can't be fixed by using a key or IV!! From the Wikipedia link you see those boxes indicating encryption, but the state is changed and can't be injected or altered; this is way more complicated than just following along with the wrong-doing.

As I said, you need an implementation, and it needs to be corrupted to be compatible. Just to make sure you understand what I am saying: it is CTR for less than 8 bytes, CBC for all the 8*n, and if there are remaining bytes, they are handled more like CFB mode. In all cases the state is one!, meaning you can't switch or reinitialize the cryptor/decryptor; you need the state to be chained as usual.
You also didn't answer my question about whether it is essential to decrypt using a different language/package/library, as you can use the DevArt Encryptor to decrypt anything encrypted with it; it is wrong but consistent. Also, look at this clean and simple implementation in C# (no extensive OOP or forced padding): https://gist.github.com/vbe0201/af16e522562b2122953206d8bbd1eb50#file-alefcrypto-cs-L368 At the line I marked comes the adjustment to handle full blocks, or the xor-after-encryption for less than a full block, and in theory this should solve your problem. In case you want to go this way, build a project and test cases for multiple values with multiple lengths, and come back if you can't do it, and I will try to help. And you know what, the CTR you found returning the same result for less than 8 bytes is identical to CFB with IV=0; so it is CBC for all full 8-byte blocks, then the rest is handled as CFB, and to be accurate CFB8 (as in 8 bits, happening on the bit level), in case you face that distinction while searching the web. -
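A sketch of the mixed mode described above (xor-then-encrypt CBC for full blocks, encrypt-then-xor CFB-style for the tail). The 8-byte block function here is a made-up toy stand-in, NOT Blowfish and not secure; only the chaining structure follows the description:

```python
# Toy stand-in for a real 8-byte block cipher (NOT Blowfish, NOT secure);
# it only needs to be a fixed keyed permutation for this demonstration.
def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    return bytes(((b + key[i % len(key)]) ^ 0x5A) & 0xFF
                 for i, b in enumerate(block))

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

BS = 8  # block size

def mixed_encrypt(data: bytes, key: bytes, iv: bytes = bytes(BS)) -> bytes:
    out, prev = b"", iv
    full, tail = len(data) // BS * BS, len(data) % BS
    for i in range(0, full, BS):                  # CBC: xor THEN encrypt
        prev = toy_block_encrypt(xor(data[i:i + BS], prev), key)
        out += prev
    if tail:                                      # CFB-ish: encrypt THEN xor
        out += xor(toy_block_encrypt(prev, key)[:tail], data[full:])
    return out

def mixed_decrypt(data: bytes, key: bytes, iv: bytes = bytes(BS)) -> bytes:
    # CBC decryption needs the inverse permutation; for the toy cipher:
    def toy_block_decrypt(block: bytes) -> bytes:
        return bytes((((b ^ 0x5A) - key[i % len(key)]) & 0xFF)
                     for i, b in enumerate(block))
    out, prev = b"", iv
    full, tail = len(data) // BS * BS, len(data) % BS
    for i in range(0, full, BS):
        out += xor(toy_block_decrypt(data[i:i + BS]), prev)
        prev = data[i:i + BS]
    if tail:               # tail keystream is the same as on encryption
        out += xor(toy_block_encrypt(prev, key)[:tail], data[full:])
    return out

key = b"secretkey"
msg = b"hello broken CBC!"          # 17 bytes: 2 full blocks + 1-byte tail
ct = mixed_encrypt(msg, key)
print(mixed_decrypt(ct, key) == msg)   # -> True
```

Note how the tail never updates a proper chaining state; that is exactly why, as the post says, no off-the-shelf CBC decryptor can handle these trailing [1..7] bytes.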
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
@dormky I understand the problem now in all needed depth. I understand the wrong implementation in TCREncryptor, and it appears when DataHeader is set to ehNone; I can solve this, and so can you.

The buggy implementation is as follows:
1) CBC mode is implemented right, but falls short when the length of the data is not a multiple of the 8-byte block length.
2) For all full 8-byte blocks it is correct CBC, i.e. xor then encrypt.
3) When there are extra bytes it does something wrong: it encrypts then xors, and that is why you are getting values identical to CTR mode when the data is shorter than 1 block, as the CTR counter is not initialized and holds 0.

Now, to your need: above you mentioned Python, and in your last post you are using C#. If that is not a problem, then why not use DevArt's broken implementation to decrypt? Notice that no matter what you are going to use, you need a broken, altered implementation. So you have the ball; decide what you want, tell me, and I will try to help. -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Tomorrow I will try to figure something out for you, no promise though. After almost two hours of digging into this, I can say the encryption is utterly broken, non-standard CBC; it is wrong! -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
I don't have any experience with LockBox3, and the fact that there are memory leak(s) is reason enough to stay away from it. Anyway... 41 is Ord('A') in hex; in other words, it looks like it didn't encrypt anything. To confirm, try 'B': if you get 42, then either the library is broken or you are using it wrong! -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Just to fix your problem, with workarounds: the hash of the password is generated from the WideString encoding of the password, so:

const
  PASSWORD = 'pass';
var
  keyBytes, PassBytes: TBytes;
  HashSHA1: THashAlgorithm;
  HashMD5: THashAlgorithm;
begin
  PassBytes := BytesOf(@WideString(PASSWORD)[1], Length(PASSWORD) * SizeOf(WideChar));
  HashSHA1 := THash_SHA1.Create;
  try
    HashSHA1.Initialize;
    HashSHA1.ComputeHash(PassBytes);
    HashMD5 := THash_MD5.Create;
    try
      HashMD5.Initialize;
      HashMD5.ComputeHash(PassBytes);
      keyBytes := HashSHA1.Hash + HashMD5.Hash;
      SetLength(keyBytes, 32); // trimming to 32 here for Blowfish is better; alternatively, specify length 32 later with SetKey
    finally
      HashMD5.Free;
    end;
  finally
    HashSHA1.Free;
  end;
  ....
  table2.Encryption.Encryptor.SetKey(keyBytes, 0, Length(keyBytes)); // length should be 32, but we don't want to overflow

1) Any hash library will do the same; you are free to use your own.
2) The length is critical, so make sure you are feeding 32 bytes (and no more!, as strangely enough the length does affect the output, meaning SetKey is not protected from overflowing).
3) There are two versions of SetKey: one is bugged and wrong, the other works fine. Use the one with the offset that takes a TBytes parameter, the one with 3 parameters. -
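The derivation in that snippet can be reproduced with hashlib; per the post above, the password bytes are assumed to be the UTF-16LE (WideString) encoding of the password:

```python
# Key = SHA1(password) + MD5(password), truncated to 32 bytes,
# with the password encoded as UTF-16LE (WideString), per the post above.
import hashlib

def derive_key(password: str) -> bytes:
    pass_bytes = password.encode("utf-16-le")
    digest = hashlib.sha1(pass_bytes).digest() + hashlib.md5(pass_bytes).digest()
    return digest[:32]            # 20 + 16 = 36 bytes, trimmed to 32

key = derive_key("pass")
print(len(key))   # -> 32
```

The 32-byte length matters here for the reason the post gives: SetKey does not guard against being fed more bytes than the cipher's key length.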
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Also, SetKey is buggy!!! And not only that!

procedure TCREncryptor.SetKey(const Key; Count: Integer);
begin
  SetKey(TBytes(@Key), 0, Count); // TBytes(@Key) instead of TBytes(Key)
end;

procedure TCREncryptor.SetKey(const Key: TBytes; Offset, Count: Integer); -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Have a good weekend too! And now I do understand your points. It also took me a long time, as I didn't want to mess with package installation: MyDAC and UniDAC can't be installed together unless the DAC version is identical. Anyway...

1) The way I described building the key is working fine; I checked and it is fine. Here is a simple way to build the key, using only UniDAC classes:

var
  keyBytes: TBytes;
  HashSHA1: THashAlgorithm;
  HashMD5: THashAlgorithm;
begin
  HashSHA1 := THash_SHA1.Create;
  try
    HashSHA1.Initialize;
    HashSHA1.ComputeHash(BytesOf('pass'));
    HashMD5 := THash_MD5.Create;
    try
      HashMD5.Initialize;
      HashMD5.ComputeHash(BytesOf('pass'));
      keyBytes := HashSHA1.Hash + HashMD5.Hash;
    finally
      HashMD5.Free;
    end;
  finally
    HashSHA1.Free;
  end;

2) Encoding and decoding using CBC without padding is fine, as I described. Here is your 'A' with ehTag and ehTagAndHash, in order:
C544E5292C9C42A5B94FE279127029010796125FFF7BB9A7408E02812A51D5F81FC2053401075D2A
C544E5292C9C42A5B94FE27912702901DF4BEED6024BC5FE214A4205B1EE9981DAF4C7C56DCD4CE23D88E2EE9568BA546C007C63D9131C1B
while with ehNone it is two bytes: E2

Now, what is going on? It is a strange thing to encrypt without an IV; in fact you can't, and shouldn't, do that; it is wrong on so many levels!!!!! The IV is zeroed in this case, and that is what you are missing, simple as that. Anyway, E2 is HEX; you need to de-hex it first, then perform decryption in CBC mode. Maybe I missed mentioning this, but hey... I don't know if you know that encrypted data in a DB should be either binary in a blob or HEX in plain text fields.
-------------- Additional information

Changing the code a little to insert two 'A' rows, to show the information leak: with ehNone we have two rows with the same encrypted value of two chars (value = 'E2'), leaking critical information, exposing that the data is identical.

With ehTag we have:
C544E5292C9C42A5B94FE27912702901CA9469BBB8503499CDFA76E7E84DBE9BCB48BF853F5A2643
C544E5292C9C42A5B94FE279127029010F89F0C11CB311D592EA28B171C712B5C7725B0130002CE8

and with ehTagAndHash:
C544E5292C9C42A5B94FE2791270290152F2428A526B0345481FAB80E42E687379477B776DCD4CE23D88E2EE9568BA546C007C63D9131C1B
C544E5292C9C42A5B94FE2791270290173F9E56EB70F416E1C5B5CAD820E2220584574896DCD4CE23D88E2EE9568BA546C007C63D9131C1B

Now do you see the information leak?! Yes, the hash is the same with ehTagAndHash, exposing that the encrypted data was the same!!

So until DevArt fixes this, everyone should only use "ehTag". -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
I am lost here, what metadata? What "A", and where did you get it? Are you trying to encrypt? Tried what? There is no hardcoded IV. The IV is randomly generated, as I said, and included in the encrypted data for bytes/blob/string/widestring/memo... fields. -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
The magic number is a fixed 4-byte value implemented in TCREncryptor as a header; its value is "BAADF00D" in HEX. Sorry for my writing style; I am trying my best to explain, but I can't know the limits of your knowledge about encryption, so I write in the way I think serves most readers.
You have a problem understanding symmetric encryption and its modes; you need to read more, and I will expand here on what can help you. Yes, all symmetric algorithms including Blowfish need a specific block size, but that doesn't affect the output; the output is affected by the mode of encryption... I am not going to torture you with more text. Pad the damn thing with anything to make it work! Is that better? But don't apply any specific padding scheme on decryption, because it will fail! If your Python library forces you to use padding, then look for a different library that allows no padding. By padding I mean the schemes defined in the RFCs, not block padding to ensure the block length; the encrypted data should already be a multiple of 8.
Again, the structure of the encrypted data is the concatenation KEY_GUID + IV + CIPHER [+ HASH]:
KEY_GUID is a fixed, internally declared 16-byte constant, shipped unencrypted. It has no weight or value in the encryption; starting with $C5 and ending with $01, it is simply an identifier that the encryption was done by the DevArt library.
IV: well... 8 bytes.
HASH depends on the hash algorithm, SHA1 or MD5 (a property of the encryptor component); with "ehTag" it is not included, while "ehTagAndHash" means it is there.
CIPHER is INTERNAL_HEADER + PLAIN_DATA. INTERNAL_HEADER is 8 bytes long: the magic number, a 4-byte constant with value $BAADF00D in HEX, followed by 4 bytes containing the PLAIN_DATA length.
To generate the key, calculate the SHA1 and MD5 of the password, concatenate them, then use the first bytes to fit the key length you need! Simple as that.
The IV: you have it above, right after KEY_GUID.
Don't use a Blowfish implementation that requires padding! 
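Putting the layout above into code, a splitter for the outer structure might look like this in Python. This is a sketch based only on the field sizes described above; the constant names are my own, and the hash-length table assumes the usual digest sizes (SHA1 = 20 bytes, MD5 = 16 bytes):

```python
KEY_GUID_LEN = 16  # fixed DevArt identifier, shipped unencrypted
IV_LEN = 8         # random IV, stored right after the GUID
HASH_LEN = {"none": 0, "md5": 16, "sha1": 20}

def split_record(blob: bytes, hash_alg: str = "none") -> dict:
    """Split KEY_GUID + IV + CIPHER [+ HASH] into its parts.
    hash_alg is 'none' for ehTag, or 'md5'/'sha1' for ehTagAndHash."""
    end = len(blob) - HASH_LEN[hash_alg]
    return {
        "key_guid": blob[:KEY_GUID_LEN],
        "iv": blob[KEY_GUID_LEN:KEY_GUID_LEN + IV_LEN],
        "cipher": blob[KEY_GUID_LEN + IV_LEN:end],  # starts at offset 24
        "hash": blob[end:],
    }

# Demo with a fabricated record: 16-byte GUID, 8-byte IV, 16-byte cipher
blob = bytes(16) + bytes(range(8)) + bytes(16)
parts = split_record(blob)
print(len(parts["iv"]), len(parts["cipher"]))  # 8 16
```

Only the "cipher" slice goes through Blowfish/CBC decryption with the IV slice; the GUID and hash are never encrypted.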
Hope that helps. -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Padding has nothing to do with symmetric encryption itself; padding is a standardized way to prevent data loss. Many libraries force you to use padding as a best practice and to comply with modern security standards, in other words to prevent you from shooting yourself in the foot, but there are also libraries (if not all of them) that let you handle this manually.
And where did you say you are using it: in Delphi? In Python? The DevArt library? TCREncryptor doesn't use PKCS7 or any other padding; it ships the length (4 bytes) after the magic number, before the encrypted data and after the random IV.
Now I do understand that, and I am trying to help by explaining: your lack of understanding of padding and when to use it will stop you. TCREncryptor doesn't use padding as you assume. No joy there, because there is no padding; if you insist, then pad with 0 or random bytes, and with CBC it will not make a difference, as you already have the length, as I explained, right before the data.
Now to your problem: are you trying to decode the data in full? As I explained, the encrypted data is structured and you only need to decrypt part of it. If you look at the lines I pasted above, the encrypted data starts at KEY_GUID_LEN + IV_LEN, 16 + 8 bytes, i.e. at offset 24. The length runs to the end in the case of "ehTag", or leaves off the hash length of the corresponding algorithm with "ehTagAndHash"; in my UniDAC version there are only two hash algorithms, MD5 and SHA1. The decrypted data includes INTERNAL_HEADER_LEN bytes, which is 8 bytes: the magic number with the length (removing the need for padding). So after decryption you need to check for the magic number, get the length, and then trim the decrypted data according to that length.
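The final step described above, checking the magic number and trimming to the stored length after decryption, can be sketched like this in Python. The byte order of the 4-byte magic and length fields is an assumption here (little-endian), since it isn't stated above; flip it if the magic check fails against real data:

```python
MAGIC = 0xBAADF00D  # TCREncryptor internal header magic ($BAADF00D)

def trim_decrypted(decrypted: bytes, byteorder: str = "little") -> bytes:
    """Validate the 8-byte internal header (magic + plaintext length)
    and return only the plaintext, dropping any trailing fill bytes.
    byteorder is an assumption; adjust against real decrypted data."""
    magic = int.from_bytes(decrypted[:4], byteorder)
    if magic != MAGIC:
        raise ValueError("magic mismatch: not a TCREncryptor payload")
    length = int.from_bytes(decrypted[4:8], byteorder)
    return decrypted[8:8 + length]

# Demo with a fabricated decrypted block, filled out to a multiple of 8:
# 4-byte magic + 4-byte length (1) + b"A" + 7 fill bytes = 16 bytes
payload = MAGIC.to_bytes(4, "little") + (1).to_bytes(4, "little") + b"A" + bytes(7)
print(trim_decrypted(payload))  # b'A'
```

This is also why no padding scheme is needed on decryption: the stored length, not a PKCS7-style tail, tells you where the plaintext ends.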