Kas Ob.

Members
  • Content Count

    464
  • Joined

  • Last visited

  • Days Won

    8

Kas Ob. last won the day on May 25

Kas Ob. had the most liked content!

Community Reputation

121 Excellent


  1. Well, this is easy and hard to explain at the same time: as a cryptographic process it is very simple, and there are many standardized schemes for shipping keys/data in encrypted form, but it is harder to explain in plain human language. Still, I will give an example of how it is done. Take how a secure connection is established between a browser and this site: the client (browser) and the server negotiate a shared key, and by shared key I mean they use a specific algorithm to reach the same key without shipping (sending) it. Older algorithms were much simpler: the key wasn't shared but generated on one peer's side (client or server) and shipped in encrypted form. This is called key encapsulation: the client and/or server generates its own key and sends it to the other party. This was, and still is, secure. Why? Because the client, after receiving the server certificate, uses the server's public key to encrypt its own chosen key (aka encapsulate it) and sends it; only the server, or any party holding the private key, can extract that key. Now, replace client and server with hardware tokens, and put the private and public keys in the hardware in advance: this makes shipping a private key (or any payload) between tokens secure, and only someone who knows the already stored private key (which should be no one except whoever stored it) can extract those keys. That is how a token can be copied or duplicated; a small sketch of the flow follows. On a side note, the most secure and modern post-quantum cryptography algorithms are incapable of key sharing (agreement) and only do key encapsulation!
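
     A rough sketch of that flow in Delphi, with hypothetical RSA primitives passed in as callbacks (TRsaEncrypt, TRsaDecrypt and both function names are mine, not any specific library):

        uses
          System.SysUtils;

        type
          // Hypothetical primitives standing in for a real RSA library.
          TRsaEncrypt = reference to function(const PublicKey, Plain: TBytes): TBytes;
          TRsaDecrypt = reference to function(const PrivateKey, Cipher: TBytes): TBytes;

        // Sender side: pick a random session key and encapsulate it under the
        // receiver's public key (e.g. taken from the server certificate).
        function EncapsulateKey(const Encrypt: TRsaEncrypt;
          const PeerPublicKey: TBytes; out SessionKey: TBytes): TBytes;
        var
          i: Integer;
        begin
          SetLength(SessionKey, 32);            // e.g. a 256-bit symmetric key
          for i := 0 to High(SessionKey) do
            SessionKey[i] := Random(256);       // use a real CSPRNG in production!
          Result := Encrypt(PeerPublicKey, SessionKey);
        end;

        // Receiver side: only the holder of the private key can recover the key.
        function DecapsulateKey(const Decrypt: TRsaDecrypt;
          const PeerPrivateKey, Encapsulated: TBytes): TBytes;
        begin
          Result := Decrypt(PeerPrivateKey, Encapsulated);
        end;
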
  2. Kas Ob.

    Switch from JCL to EurekaLog

    I have a few:
    1) Get familiar with EurekaLog, whether you have the sources or not. With the source you will get a faster and better understanding of how things are done, but in any case spend a few hours with its demos and the documentation, keeping in mind that EL comes with a default behavior but you are in no way obliged to follow it.
    2) Use the real gem in EL: the customization. Yes, you can customize everything, from error handling to generating the reports.
    3) Don't go for a fast kill by replacing all the existing error handling with EL; in other words, don't shock your own source code and generate bugs unintentionally. Keep the existing error handling for now and ship/build the EL report alongside it; have a look here: https://www.eurekalog.com/help/eurekalog/how_to_add_information_to_bug_report.php : you can ship the existing logs and error report as fields or as attachments.
    4) I don't like the EL components. Not because they are bad, but because by design these components are initialized while/after a form is created, which could be too late. Use events and the other EL APIs from its SDK; examples of what is important to browse and use: https://www.eurekalog.com/help/eurekalog/index.php?how_to_register_event_handler.php https://www.eurekalog.com/help/eurekalog/index.php?index_eevents_routines.php
    Lastly, don't let its new API design and functionality overwhelm you. Take your time (and this is important) to test each API in a test project before adding it to your production application; some functionality needs understanding and tweaking.
    ps: I had the chance to fix a project much like yours: big, with its own error handling and reporting. I introduced EL gradually, combining both error reports and removing the existing (old) error handling only a few pieces every few days, providing a debug version of the application with EL capturing all handled exceptions, then disabling the ones that were solved, using my own filter in code (via the events). Sometimes the sheer number of handled or silent exceptions was in the millions, due to things like integer overflow! But in the end even those disappeared and the project became exception free, except for the exceptions intended by design, which weren't worth changing the application's functionality to remove.
    Hope that helps.
  3. Kas Ob.

    Decrypt a string with public key using openssl in VCL app

    It is doable, but you need to understand which encryption scheme is used on the PHP side that uses OpenSSL (is it a standardized one or not?), and then replicate it in the Delphi application. One thing though: the most crucial thing with RSA encryption is the padding! So I suggest going with TMS, but before buying, put your question to their Sales/Technical support; just make sure to give them full and detailed information, unlike your question above, e.g. which PHP code/library is doing the encryption, so someone can identify the scheme/algorithm for you. To get an idea of some of the standards, look at this page: https://www.cryptosys.net/pki/manpki/pki_rsaschemes.html Yet any code (like PHP) could be using its own implementation, in which case the decryption method needs to be custom as well, and of course that includes the padding.
  4. Kas Ob.

    ssh tunnel with ssh-pascal

    @dummzeuch Can't test it right now, as it needs more code to make it work, but don't ever use select that way. In general everything looks fine and correct, but without checking for socket errors (not the ones returned by the API calls, but the ones from "exceptfds") you are driving in the dark: if an error is signaled on the socket, then read will hang forever! Add exceptfds, and check for errors before you read or write, as in many cases both an error and read/write readiness may be triggered and reported at the same time. In your case this is most likely what happened: an acknowledgment to close was received, and both readfds and exceptfds were signaled. A minimal sketch follows.
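
    A minimal sketch of what I mean, with raw Winsock calls (the function name and error texts are mine):

        uses
          System.SysUtils, Winapi.Winsock2;

        // Wait until the socket is readable, but consult exceptfds and SO_ERROR
        // first, so a socket-level error cannot leave a later read hanging.
        function WaitReadable(Sock: TSocket; TimeoutMs: Integer): Boolean;
        var
          ReadFds, ExceptFds: TFdSet;
          Tv: TTimeVal;
          Err, ErrLen: Integer;
        begin
          FD_ZERO(ReadFds);
          FD_ZERO(ExceptFds);
          FD_SET(Sock, ReadFds);
          FD_SET(Sock, ExceptFds);           // ask for error signaling as well
          Tv.tv_sec := TimeoutMs div 1000;
          Tv.tv_usec := (TimeoutMs mod 1000) * 1000;
          if select(0, @ReadFds, nil, @ExceptFds, @Tv) = SOCKET_ERROR then
            raise Exception.CreateFmt('select failed: %d', [WSAGetLastError]);
          if FD_ISSET(Sock, ExceptFds) then  // may fire together with readfds!
          begin
            ErrLen := SizeOf(Err);
            getsockopt(Sock, SOL_SOCKET, SO_ERROR, PAnsiChar(@Err), ErrLen);
            raise Exception.CreateFmt('socket error: %d', [Err]);
          end;
          Result := FD_ISSET(Sock, ReadFds); // now a read will not block
        end;
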
  5. Kas Ob.

    Can't get but 40% CPU usage multi-tasking

    I am chipping in. I don't have a CPU with E-cores and P-cores, but I believe I have a clear picture of them, and I will try to explain and clear up a few misconceptions about CPU cores in general, HT (also called SMT), and the new E-core/P-core technology. (Too long to read? Just skip to the last few paragraphs.)

    We must start at the beginning: how does multithreading work? In other words, how does the OS simulate and provide multitasking? This is essential to understand. As many know, the running thread on a logical processor must be stopped, its context switched, and execution continued from the new point; how the low-level interrupts are involved is irrelevant here. One can imagine a hardware timer in the CPU itself stopping the currently running code and jumping to a specific address in protected mode; there, a kernel thread runs in the context of the OS scheduler, picks a new context, then continues. The OS thus appears to provide multitasking by slicing the running time between multiple contexts. These contexts can number in the hundreds or thousands, and they have nothing to do with the CPU or any other hardware; the CPU is agnostic to them, so the limit is the OS resources available for saving contexts.

    A thread context, as a resource, is very small and limited from the OS point of view: it is essentially the current CPU registers (general and others, including the instruction pointer, stack pointer, debug registers, etc.); these are a must for the thread to continue running under multitasking. The OS also needs its own identifiers and resources, but again this is very small: one memory page (4 KB) storing the context needed by the hardware (registers and what not) plus OS-specific data like the thread's TLS and whether it is a user or kernel thread. That, in a very simple way, is enough to understand how an OS on a one-core CPU (one physical processor) can provide multitasking and multithreading. The operation of switching the context is, of course, called context switching... duh! And it happens in software, in the OS kernel scheduler.

    With multiple physical cores the above stays the same: each core can and will do exactly what one core does; only the kernel scheduler becomes a little more complex (aware) in adjusting when and where these threads continue to run. For example, if a heavy thread is running at higher priority and taking its sweet time, the scheduler might refuse to perform a context switch and send that thread on to continue its work, or it might decide to resume a thread on the same core it last ran on instead of making a salad of arbitrary switches. There is much more to write here, but this is enough for now.

    At some point the CPU manufacturers had an idea: what if the hardware did the context switching instead of the OS kernel? They could simply build their own hardware context and simulate two threads, while doing nothing at all except storing the hardware context of the running core. Thus they created HT (hyper-threading). It provides nothing other than performing a hardware context switch on its own; in theory they could make an HT3 version simulating 3 threads (3 logical cores from one physical core), or even 128 logical ones, as long as the CPU has extra space for storing the contexts. So it is cheating, or faking.

    Now, how much performance does HT actually gain? It depends: mostly yes, it is faster, but it can be slower. See, my Windows 10 says there are ~1460 running threads at this moment of writing, and my CPU is an i5-2500K, no HT! (My HT-capable i7-2600K burned out in March, so I replaced it, together with the motherboard, with a used one at $35, and I am satisfied: the change didn't make a dramatic or even noticeable performance difference for me.) The OS decides when and where to switch context, and almost all of those ~1500 threads are doing nothing, just waiting on some interrupt or signal. When does HT perform worse? Under very intensive execution the OS can do better than the hardware at context switching, because it knows whom to give the bigger execution time slice; there, the hardware switching fools the OS and wastes time. There are many resources on the internet asking and explaining how and when HT is slower or faster. In short: HT is a cheap trick by the CPU to emulate another core, very helpful given that far more than 90% of the threads on any modern OS are idle at any moment.

    Now to our main subject, E-cores and P-cores. Intel wanted to break the stall, as we reached the peak of CPU speed with Sandy Bridge, or maybe Haswell. They invented and added many tricks to enhance performance, but these weren't enhancing the low-level instruction execution itself; all the tricks were about the end of the execution, the result, the visible change. And they did a great job, mostly in decreasing power usage and enhancing the output, yet there remained the problem of HT and its impact, as HT can be a hindrance at any time.

    So they implemented and presented a new standard (technology) that involves the OS, while adding new, simplified, but physical cores. These don't have the full potential of the latest Intel technology and tricks, and they can be the ones that do the waiting; of course this needs the OS to mark the current context as, say, "this one is performing Sleep(t)" or "this one is blocking on another thread, so it gets lower processing priority". This is the E-core (efficiency core): a full CPU with full capability, but without the full package of Intel tricks, e.g. out-of-order execution with its full window length (or no out-of-order execution at all), or a smaller dedicated L1 cache than a P-core has. I don't know for sure what has been reduced or removed, to be honest, and a quick search didn't help much, but you get the idea: most articles draw them as smaller in physical size, with the E-cores sharing the LLC (last-level cache, i.e. the L3).

    After all of the above comes the question of how to utilize the CPU fully when P-cores and E-cores coexist. I can't answer that either, as I don't have physical access to one, and answering is a little harder than it seems; this article is deep and great: https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity

    A few extra thoughts:
    1) While my late, deceased HT CPU was with me for around 13 years, a big chunk of its life was spent with HT disabled; yes, if you are going to optimize and do the clocking right, HT will only corrupt your results.
    2) To be honest, disabling HT didn't impact my PC experience at all.
    3) If you ever need to use fibers, or you are building an application with intensive CPU usage, you really should disable HT, as it will only throw sticks in the wheels of your application or server.
    4) While the Intel tricks are useful in general, some of them are expensive and not worth it at all, just a waste of money.
    5) The big factor everyone should weigh when choosing a CPU is power consumption, as it dictates how and when the CPU will throttle: the lower the CPU's power needs, the faster and longer it will sustain intensive processing. And obviously, if you don't need the intensive usage, don't buy the most expensive part. It is simple logic if you think about it: paying top dollar for a feature that may save a few minutes per week is a waste of money; if the saving comes to hours per week, then it is a valid argument.
    6) In general, I wouldn't expect E-cores to be very useful for doing work, as they are designed and implemented as second-rate performers and should be dedicated to waiting for input. Many of the resources I read say they really underperform when it comes to memory operations, performing like cores from a decade or more ago; so utilizing them at 100% could be a mistake, as all they will contribute is more heat from consuming more power, affecting the other cores and causing throttling.
    7) About throttling: while yes, throttling decreases the power and the clock, it doesn't mean a newer CPU will shut an E-core down. In theory it would be easier and more efficient to shut a core down, but there can't be a full shutdown; instead the core runs at the lowest clock possible with the lowest power possible. And, in my personal opinion, modern CPUs should throttle cores in an interleaved way, hoping to dissipate the heat in half-steps instead of lowering them all; not least, it would be easier and more logical, efficiency-wise, to throttle E-cores before P-cores.

    Lastly, for the subject of this thread and Jud's question: you need to monitor your CPU at the lowest level you can. Share with us a screenshot of Process Monitor while the application is under the full load where you expect it to utilize 100%; only then can we say something. Looking at Task Manager's overall report across all cores (logical and physical) is uninformative and the wrong way to find the shortcoming. To be honest, I haven't yet seen a screenshot of these mixed cores as reported by monitoring tools like Process Monitor or even CPU-Z... Hope you find this informative and helpful.
  6. Kas Ob.

    Code signing in a remotely working team?

    Yes, I got that, and in fact it is visible when you verify; you can get the full hash table. But my question is: why and how is this useful? While I found out how to build it manually and also how to verify it, my question is what the point is of hashing each page alone when the signature depends only on the hash of the file as a whole; if a page has a different hash, that page has been tampered with, but that also changes the file hash and renders the signature invalid anyway. I asked whether someone has seen it in the wild, being requested or needed. My assumption is that this feature is either a futuristic feature that got dropped, or it might still be active and hidden (undocumented) by Microsoft, to validate page integrity for an already loaded binary. In other words, it would make it easier and faster for, say, Windows Defender to validate specific pages instead of repeatedly calculating over the whole memory layout of the loaded file. Also, since not all pages of the file will be loaded in memory, this could give the ability to check the protected pages of a loaded binary if the protection was changed, hence much faster tamper detection, while removing the need to pre-calculate these hashes at load time. Then again, DEP should prevent this, unless it is a feature to allow patching memory pages in a safe manner (an API), yet undocumented. Anyway... that is too much ranting.
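
    For reference, page hashes are opt-in at signing time: signtool's /ph switch adds them and /nph omits them. An illustrative command line (certificate and file names are placeholders):

        signtool sign /fd SHA256 /ph /f mycert.pfx /p password MyApp.exe
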
  7. To make it clearer: your result is 5283539281 in decimal, which is $13AEC6951 in hex, meaning it did not wrap (overflow); GetTickCount64 returned exactly what it should have. (A quick check below.)
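
     A throwaway console check (my sketch), if you want to see it yourself:

        program TickCheck;
        {$APPTYPE CONSOLE}
        uses
          System.SysUtils;
        const
          Ticks: UInt64 = 5283539281;
        begin
          // $13AEC6951 is above High(Cardinal) = $FFFFFFFF, so the 64-bit
          // counter is simply past the old 32-bit wrap point, not overflowed.
          Writeln(IntToHex(Ticks, 1));      // 13AEC6951
          Writeln(Ticks > High(Cardinal));  // TRUE
        end.
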
  8. Brilliant, thank you very, very much! This test is very hard to wait for and carry out, and you did it; your sharing is really appreciated.
  9. @DelphiUdIT On a side note (after seeing you mention "mov ax, source" and "mov dest, ax"): if you are going to open a ticket, then report the hidden performance killer. It strikes randomly, due to the simplicity of the compiler and its use of specific 16-bit operation instructions; see "3.4.2.3 Length-Changing Prefixes (LCP)" in https://www.intel.com/content/www/us/en/content-details/671488/intel-64-and-ia-32-architectures-optimization-reference-manual-volume-1.html
     Most Delphi string operations use 16-bit loads and stores. The LCP effect occurs when such an instruction is aligned (positioned) at a specific address (13h or 14h), so repeating and reproducing it is hard; it may stay hidden until some change in a completely different part of the project triggers a small shift. Very importantly, these instructions are generated by the Delphi compiler without any consideration for such alignment, so in real life, when you benchmark, the results keep shifting up and down, differing from version to version and from build to build. What we witness looks like a steady result, but in fact the bottlenecks have merely shifted from one place to another; fixing this would ensure better and more consistent string-handling performance in any Delphi binary produced.
     I believe this was the case not long ago when I suggested replacing a 16-bit field fill with a 32-bit one and letting it overflow; the suggestion looked outlandish, unless that instruction hit a specific address and caused a 6- or 11-cycle stall (a full pipeline stop), and it seems that was the case, as Vincent observed.
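
     To illustrate the encoding (my own toy example, 32-bit compiler, register calling convention):

        // P arrives in EAX. The 16-bit immediate store below encodes as
        // 66 C7 00 41 00: an operand-size prefix (66h) plus a 16-bit
        // immediate, the exact length-changing-prefix pattern the manual
        // describes; land it on an unlucky fetch-line offset and the
        // pre-decoder stalls for several cycles.
        procedure FillWordField(P: PWord);
        asm
          MOV WORD PTR [EAX], $0041
        end;
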
  10. If it could be done better, it would have been. (Not sure if that is a correct English phrase.) Anyway, you need to look at the hardware implementation in terms of its basic building blocks: you can't invent an instruction or algorithm easily; you either need huge hardware circuits, which cost more delay and bring other problems, or you utilize already existing uops! So here is a more detailed look at MOVSQ: https://uops.info/html-lat/AMT/MOVSQ-Measurements.html It shows that 17 or 18 micro-ops are used for one MOVSQ, and that is huge latency.
  11. The compiler is generating a very bad instruction; even calling the RTL Move would be faster than using MOVSQ!! https://uops.info/html-instr/MOVSQ.html 4 cycles in theory, but in reality it is 7-8 cycles on my CPU! Suggestion: find these and manually move/copy the fields, especially in tight loops, e.g. along the lines of the sketch below.
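
     For example (a sketch, my names), instead of a plain record assignment that the compiler may lower into MOVSQ:

        type
          TQuad = record
            A, B, C, D: Int64;
          end;

        // Explicit field-by-field copy: each Int64 assignment is a plain
        // 8-byte load/store pair instead of a microcoded MOVSQ iteration.
        procedure CopyQuad(const Src: TQuad; var Dst: TQuad); inline;
        begin
          Dst.A := Src.A;
          Dst.B := Src.B;
          Dst.C := Src.C;
          Dst.D := Src.D;
        end;

     Whether a plain Dst := Src gets lowered to MOVSQ depends on the record size and the compiler version, so check the generated code in the CPU view first.
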
  12. Kas Ob.

    tag as String

    I think the best way is to have a singleton that is initialized very early in the project (preferably right after the MM and before the forms). This singleton hooks TComponent.Create and Destroy, grabs the component's address (value/pointer) and adds it to an internally managed list; the list associates each component with extra data, and the extra data could be a string, an integer, a record... even a class. This solves/removes the need to manually manage that extra data (e.g. the string) and also ensures memory integrity for it, without interfering with the associated component in any way. On top of that, helper(s) for specific components can be used for easier, simplified use; a sketch of the idea follows.
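
    A lighter sketch of the idea (all names are mine; it sidesteps the Create/Destroy hooking and instead uses FreeNotification so stale entries clean themselves up):

        unit TagStrings;

        interface

        uses
          System.Classes, System.Generics.Collections;

        type
          TTagStrHelper = class helper for TComponent
          private
            function GetTagStr: string;
            procedure SetTagStr(const Value: string);
          public
            property TagStr: string read GetTagStr write SetTagStr;
          end;

        implementation

        type
          // Receives opRemove when a tagged component is destroyed.
          TWatcher = class(TComponent)
          protected
            procedure Notification(AComponent: TComponent; Operation: TOperation); override;
          end;

        var
          Map: TDictionary<TComponent, string>;
          Watcher: TWatcher;

        procedure TWatcher.Notification(AComponent: TComponent; Operation: TOperation);
        begin
          inherited;
          if (Operation = opRemove) and (Map <> nil) then
            Map.Remove(AComponent);   // drop the extra data with the component
        end;

        function TTagStrHelper.GetTagStr: string;
        begin
          if not Map.TryGetValue(Self, Result) then
            Result := '';
        end;

        procedure TTagStrHelper.SetTagStr(const Value: string);
        begin
          Map.AddOrSetValue(Self, Value);
          FreeNotification(Watcher);  // tell the watcher when Self goes away
        end;

        initialization
          Map := TDictionary<TComponent, string>.Create;
          Watcher := TWatcher.Create(nil);

        finalization
          Watcher.Free;
          Map.Free;

        end.

    Usage is then just Button1.TagStr := 'hello'; anywhere the unit is in scope.
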
  13. Kas Ob.

    HTTP/1.1 400 Bad Request

    From the latest RFC: https://www.rfc-editor.org/rfc/rfc8259#section-2 This wasn't the case in the older RFC(s), and even now it is still not defined concretely. Protocols that digitally sign payloads, like ACME, require no spaces, as the receiver should be able to parse the payload and still regenerate exactly the same bytes after parsing, so the hash matches.
  14. Kas Ob.

    HTTP/1.1 400 Bad Request

    In addition to Remy's points, I want to focus on this: your JSON is not standard, and many parsers might see it as broken; JSON should be in compact form and doesn't/shouldn't have spaces. (A quick normalization sketch follows.)
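
    A quick way to normalize before sending, using the stock System.JSON parser (re-serializing drops the whitespace):

        uses
          System.JSON;

        var
          V: TJSONValue;
        begin
          // Parse the spaced input, then emit it back in compact form.
          V := TJSONObject.ParseJSONValue('{ "name" : "value" , "id" : 1 }');
          try
            Writeln(V.ToJSON);  // {"name":"value","id":1}
          finally
            V.Free;
          end;
        end.
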
  15. If only it were that simple. And NO, that will not work; you are misunderstanding how encryption works, and don't be offended, this is not for everyone.
     First, you have to read a little about how encryption modes work: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation
     Second, you must understand and never forget one essential element of encryption: the internal state (the context) advances with every encryption operation performed. Meaning... well, it gets a little complicated here:
     1) If the data length (as a whole) is less than one block, the output is similar to CTR with IV = 0. CTR doesn't have an IV (to be exact) but a counter/nonce that works as one; since it is 0 for the first block, this works for this case with DevArt's broken implementation.
     2) If the data is longer than 8 bytes, then for all the full blocks (n*8) it is pure CBC, and that part is correct.
     3) Then comes the rest: when the original data length was more than 8 and there are [1..7] trailing bytes, nothing out of the box can help you or decrypt them. Why? Because the internal state at this last block is unknown and depends on the previous one, and this can't be fixed with a key or an IV!! In the Wikipedia link you see those boxes indicating encryption, but the state has changed and can't be injected or altered; this is far more complicated than just replaying the wrongdoing.
     As I said, you need an implementation, and it needs to be corrupted in the same way to be compatible. Just to make sure you understand what I am saying: it is CTR for less than 8 bytes, CBC for all the 8*n bytes, and any remaining bytes are handled more like CFB mode; and in all cases the state is one!, meaning you can't switch or reinitialize the cryptor/decryptor, you need the state to be chained as usual.
     You also didn't answer my question about whether it is essential to decrypt using a different language/package/library, because you can use the DevArt encryptor to decrypt anything encrypted with it: wrong, but consistent.
     Also look at this clean and simple implementation in C# (no extensive OOP or forced padding): https://gist.github.com/vbe0201/af16e522562b2122953206d8bbd1eb50#file-alefcrypto-cs-L368 At the line I marked comes the adjustment that handles the full blocks, or the XOR after encryption for less than a full block, and in theory this should solve your problem. If you want to go this way, build a project and test cases for multiple values with multiple lengths, and come back if you can't do it, and I will try to help; a rough Delphi sketch of the scheme follows.
     And you know what: the CTR you found returning the same result for less than 8 bytes is identical to CFB with IV = 0. So it is CBC for all the full 8-byte blocks, and the rest is handled as CFB, to be accurate CFB8 (8-bit feedback), in case you run into these distinctions while searching the web.
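
     And to be concrete about the scheme above, a rough Delphi sketch (my names; the block cipher is passed in as a callback, since any 8-byte-block cipher fits; this is my reading of the broken behaviour, so verify against DevArt's actual output):

        uses
          System.SysUtils;

        type
          TBlock8 = array[0..7] of Byte;
          // ECB-encrypts exactly one 8-byte block in place (the real cipher).
          TBlockEncrypt = reference to procedure(var Block: TBlock8);

        // CBC with IV = 0 over the full blocks, then a CFB-style XOR of the
        // re-encrypted last ciphertext block over the trailing partial block.
        procedure EncryptCbcWithCfbTail(const Encrypt: TBlockEncrypt; var Data: TBytes);
        var
          Prev, Pad: TBlock8;
          i, j, Full, Rest: Integer;
        begin
          FillChar(Prev, SizeOf(Prev), 0);         // IV = 0, as observed
          Full := Length(Data) div SizeOf(TBlock8);
          Rest := Length(Data) mod SizeOf(TBlock8);
          for i := 0 to Full - 1 do
          begin
            for j := 0 to 7 do                     // CBC: plaintext xor previous ciphertext
              Prev[j] := Prev[j] xor Data[i * 8 + j];
            Encrypt(Prev);                         // Prev now holds this block's ciphertext
            Move(Prev, Data[i * 8], SizeOf(TBlock8));
          end;
          if Rest > 0 then
          begin
            Pad := Prev;
            Encrypt(Pad);                          // CFB: keystream = E(last ciphertext)
            for j := 0 to Rest - 1 do
              Data[Full * 8 + j] := Data[Full * 8 + j] xor Pad[j];
          end;
        end;

     Note that for data shorter than 8 bytes this degenerates to an XOR with E(0), which matches the CTR/CFB-with-zero-IV observation above.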