Kas Ob.
Members
Content Count: 461
Days Won: 8
Kas Ob. last won the day on May 25
Kas Ob. had the most liked content!
Community Reputation: 121 Excellent
-
@dummzeuch Can't test it right now, as it needs more code to make it work, but don't ever use select in that way. In general it all looks fine and correct, but without checking for socket errors, not the ones returned by API calls but the ones reported through "exceptfds", you are driving in the dark: if an error was signaled on the socket, the read will hang forever! Add exceptfds, then check for errors before any read or write, as in many cases both an error and read/write readiness can be triggered and reported at the same time. That is most likely what happened in your case: an acknowledgment of the close was received, and both readfds and exceptfds were signaled.
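A rough sketch of what I mean, assuming plain Winapi.WinSock and an already connected socket (the function name and timeout handling are mine):

uses
  Winapi.WinSock;

// Wait for readability, but register for exceptions too and check
// SO_ERROR before declaring the socket safe to read.
function WaitReadableChecked(ASocket: TSocket; ATimeoutMs: Integer): Boolean;
var
  ReadFds, ExceptFds: TFDSet;
  Timeout: TTimeVal;
  SockErr, Len: Integer;
begin
  Result := False;
  FD_ZERO(ReadFds);
  FD_SET(ASocket, ReadFds);
  FD_ZERO(ExceptFds);
  FD_SET(ASocket, ExceptFds);          // the part that is usually missing: exceptfds
  Timeout.tv_sec := ATimeoutMs div 1000;
  Timeout.tv_usec := (ATimeoutMs mod 1000) * 1000;
  if select(0, @ReadFds, nil, @ExceptFds, @Timeout) <= 0 then
    Exit;                              // timeout or select() failure
  SockErr := 0;
  Len := SizeOf(SockErr);
  if FD_ISSET(ASocket, ExceptFds) or
     ((getsockopt(ASocket, SOL_SOCKET, SO_ERROR, PAnsiChar(@SockErr), Len) = 0)
      and (SockErr <> 0)) then
    Exit;                              // error signaled: reading now may hang or fail
  Result := FD_ISSET(ASocket, ReadFds);
end;

Both sets can be signaled at once, which is exactly the close-acknowledgment case above.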
-
I am chipping in. I don't have a CPU with E-Cores and P-Cores, but I believe I have a clear picture of them and will try to explain and clear up a few misconceptions about CPU cores in general, HT (also called SMT), and the new E-Core/P-Core technology. (Too long to read? Just skip to the last few lines.)

We must start at the beginning: how does multithreading work? In other words, how does the OS simulate and provide multitasking? This is essential to understand. As many know, the running thread on the CPU (the CPU processor) must be stopped, the context switched, and execution continued from the new point; how the low-level interrupt is involved is irrelevant here. One can imagine a hardware timer in the CPU itself stopping the currently running code (executing in user mode) and jumping to a specific address in protected mode; this makes the running thread a kernel one. That thread runs in the context of the OS scheduler, picks a new context, then continues; hence it appears as if the OS provides multitasking by slicing the running time between multiple contexts. These contexts can number in the hundreds or thousands, and they have nothing to do with the CPU or any other hardware; they are agnostic to the CPU, so the limit is the OS resources for saving contexts.

A thread context, from the OS point of view, is a very small and limited resource: essentially the current CPU registers (general and others, including the instruction pointer, stack pointer, debug registers, etc.), which are a must for the thread to continue running under multitasking. The OS also needs its own identifiers and resources, but again this is very small: one memory page (4 KB) storing the context needed by the hardware (registers and whatnot) plus OS-specific data like the thread's TLS and whether the context is a user or a kernel one. That, put very simply, is enough to understand how an OS on a one-core CPU (one running physical processor) can provide multitasking and multithreading. The operation of switching the context is, well, called context switching... duh! And it happens in software, in the OS kernel scheduler.

Now, with multiple physical cores, the above stays the same: multiple cores can and will do exactly what one core does, only the kernel scheduler becomes a little more complex (aware) about when and where these threads continue to run. For example, if a heavy process thread is running at higher priority and taking its sweet time, the scheduler might refuse to perform a context switch and let the thread continue its work, or it might pick a thread that previously ran on the same core instead of making a salad of arbitrary switches. There is much more to write here, but this is enough for now.

At some point the CPU manufacturers had an idea: what if the CPU did the context switching instead of the OS kernel? They could simply build their own hardware context and simulate two threads while doing nothing at all except storing the hardware context of the running CPU. Thus they created HT (Hyper-Threading). It doesn't provide anything beyond performing hardware context switching on its own; in theory they could make an HT3 version simulating 3 threads (3 logical cores from one physical core), or even 128 logical ones, as long as the CPU has extra space for storing the contexts. So it is cheating, or faking.

Now, how useful is HT, and how much performance does it gain? The answer is: it depends. Mostly it is faster, but it can be slower. See, my Windows 10 says there are ~1460 running threads at this moment of writing; my CPU is an i5-2500K, no HT! My HT-capable i7-2600K burned out in March, so I replaced it, motherboard included, with a used one for $35, and I am satisfied, as the change didn't make a dramatic or even noticeable performance difference for me. Anyway, the OS decides when and where to switch contexts, and almost all of those ~1500 threads are doing nothing, just waiting on some hardware interrupt or on signals in general, software ones. When does HT perform slower? Under very intensive execution, the OS can do better than the hardware at context switching, because it knows which thread should get a bigger slice of execution time; here the hardware switching fools the OS and wastes time. There are many resources on the internet asking and explaining how and when HT is slower or faster. To cut it short: HT is a cheap trick by the CPU to emulate another core, and it is very helpful in the typical modern-OS situation where more than 90% of threads are just waiting.

Now to our main subject, E-Cores and P-Cores. Intel wanted to break the stall: we reached the peak of CPU speed with Sandy Bridge, or maybe Haswell. They invented and added many tricks to enhance performance, but they weren't enhancing the low-level instruction execution itself; all the tricks were about the end of the execution, the result, the visible change, and they did a great job, mostly in decreasing power usage and improving the output. Yet there was still the problem of HT and its impact, as HT can be a hindrance at any time. So they implemented a new standard (technology) that involves the OS, while adding new simplified cores, physical ones. These don't have the full potential of the latest Intel technology and tricks! These can be the cores that do the waiting; of course this requires the OS to mark the current context as, like... nah, this one is performing Sleep(t), or this one is blocking on another thread, so it gets lower processing priority. We are talking about the E-Core (efficiency core): it is a full CPU with full capability, BUT it doesn't have the full package of Intel tricks, like out-of-order execution with its full window length (or it lacks out-of-order execution entirely), or the full dedicated L1 cache size of a P-Core. I don't know for sure what has been decreased or removed, to be honest, and a quick search didn't help much, but you get the idea: most articles draw them as physically smaller, with the E-Cores sharing the LLC (Last Level Cache, i.e. L3).

Now, after all of the above, comes the question of how to utilize the CPU fully when P-Cores and E-Cores coexist. I can't answer that either, as I don't have physical access to one, and answering it is a little harder than it seems; this article, for instance, is deep and great: https://www.anandtech.com/show/17047/the-intel-12th-gen-core-i912900k-review-hybrid-performance-brings-hybrid-complexity

A few extra thoughts:
1) While my late, deceased HT CPU was with me for around 13 years, a big chunk of its life was spent with HT disabled. Yes, if you are going to optimize and do the clocking right, HT will only corrupt your results.
2) To be honest, disabling HT didn't impact my PC experience at all.
3) If you ever need to use fibers, or you are building an application with intensive CPU usage, you really should disable HT, as it will only throw sticks into the wheels of your application or server.
4) While the Intel tricks are useful in general, some of them are expensive and not worth it at all, just a waste of money.
5) The biggest factor everyone should weigh when choosing a CPU is power consumption, as it determines how and when the CPU will throttle: the lower the CPU's power needs, the faster and longer it will sustain intensive processing. Obviously, if you don't need the intensive usage, don't buy the most expensive part. It is simple logic if you think about it: paying top dollar for a feature that may save me a few minutes per week is a waste of money; if the saving will be hours per week, then it is a valid argument.
6) In general, I wouldn't expect E-Cores to be very useful for doing real work, as they were designed and implemented as second-tier performers and should be dedicated to waiting for input. Many of the resources I read say they really underperform when it comes to memory operations, performing like cores from a decade or more ago; so utilizing them at 100% could be a mistake, as all they will contribute is more heat from consuming more power, affecting the other cores and causing throttling.
7) About throttling: while throttling does decrease the power and the clock, that doesn't mean a newer CPU will shut an E-Core down. In theory it would be easier and more efficient to shut a core down completely, but there is no full shutdown: the core runs at the lowest clock possible with the lowest power possible. In my personal opinion, modern CPUs should throttle cores in an interleaved way, hoping to dissipate the heat in half-steps instead of lowering them all; and, not least, it would be easier and more logical, efficiency-wise, to throttle E-Cores before P-Cores.

Lastly, for the subject of this thread and Jud's question: you need to monitor your CPU at the lowest level you can. Share with us a screenshot of Process Monitor while the application is under the full load at which you expect it to utilize 100%; only then can we say something. Looking at Task Manager reporting the overall usage of all cores (logical and physical) is uninformative, and the wrong way to find the shortcoming. To be honest, I haven't yet seen a screenshot of these mixed cores as reported by monitoring tools like Process Monitor or even CPU-Z... Hope you find this informative and helpful.
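And if someone with a hybrid CPU wants to measure, a crude way is to pin a thread to one logical core at a time and time the same busy loop on each. A minimal sketch (the loop and the core numbering are arbitrary assumptions of mine; map the indices to P/E cores with CPU-Z or similar):

uses
  Winapi.Windows, System.SysUtils, System.Diagnostics;

// Pin the calling thread to one logical core and time a fixed ALU loop there.
function TimeLoopOnCore(CoreIndex: Integer): Int64;
var
  PrevMask: NativeUInt;
  SW: TStopwatch;
  I: Integer;
  X: Cardinal;
begin
  PrevMask := SetThreadAffinityMask(GetCurrentThread, NativeUInt(1) shl CoreIndex);
  if PrevMask = 0 then
    RaiseLastOSError;
  Sleep(0);                          // give the scheduler a chance to migrate the thread
  SW := TStopwatch.StartNew;
  X := 0;
  for I := 1 to 100000000 do         // fixed amount of ALU work
    X := (X shl 1) xor Cardinal(I);
  Result := SW.ElapsedMilliseconds;
  SetThreadAffinityMask(GetCurrentThread, PrevMask);  // restore the old affinity
  if X = 42 then Write('');          // keep X alive so the loop is not removed
end;

Comparing the times per core index makes the P/E difference visible without trusting Task Manager's overall number.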
-
Yes, I got that, and in fact it is visible when you verify; you can get the full hash table like this. But my question is: why and how is this useful? While I found how to build it manually and also verify it, my question is what the point is of hashing each page alone when the signature depends only on the hash of the file as a whole. If a page has a different hash, that page has been tampered with, but that also changes the file hash and renders the signature invalid. I asked if someone has seen it in the wild being requested or needed. My assumption is that this could be a futuristic feature that was dropped, or it might still be active and hidden (undocumented) by Microsoft, to validate page integrity for an already loaded binary. In other words, it would make it easier and faster for, let's say, Windows Defender to validate specific pages instead of repeatedly calculating the whole memory layout of the loaded file. Not all pages of the file will be loaded in memory, but this could give the ability to check protected pages of a loaded binary if the protection was changed, hence detecting tampering much faster while removing the need to pre-calculate these hashes at load time. Then again, DEP should prevent this, unless it is a feature to allow patching memory pages in a safe manner (via an API) that is not documented. Anyway... that is too much ranting.
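For anyone who wants to experiment: splitting a file into 4 KB pages and hashing each one is trivial. A sketch with System.Hash, with the caveat that the real page-hash table follows the PE section layout and the Authenticode digest rules, not a raw split like this:

uses
  System.SysUtils, System.IOUtils, System.Hash;

// Hash every 4 KB page of a file independently, the naive way.
procedure HashFilePages(const AFileName: string);
const
  PAGE_SIZE = 4096;
var
  Data: TBytes;
  Hasher: THashSHA2;
  Offset, Len: Integer;
begin
  Data := TFile.ReadAllBytes(AFileName);
  Offset := 0;
  while Offset < Length(Data) do
  begin
    Len := Length(Data) - Offset;
    if Len > PAGE_SIZE then
      Len := PAGE_SIZE;
    Hasher := THashSHA2.Create;        // SHA-256 by default
    Hasher.Update(Data[Offset], Len);
    Writeln(Format('page %5d: %s', [Offset div PAGE_SIZE, Hasher.HashAsString]));
    Inc(Offset, Len);
  end;
end;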
-
Help needed in testing emulated GetTickCount32/64
Kas Ob. replied to Kas Ob.'s topic in Algorithms, Data Structures and Class Design
To make it clearer, your result is 5283539281 in decimal, which is 13AEC6951 in hex, meaning it did not wrap (overflow) and returned exactly what GetTickCount64 should have returned.
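For readers of the thread: the core of such an emulation is just wrap detection on the 32-bit counter. A minimal sketch of the idea (not the exact code under test):

uses
  Winapi.Windows;

var
  GLock: TRTLCriticalSection;
  GWraps: UInt32 = 0;      // how many times the 32-bit counter wrapped
  GLastLow: UInt32 = 0;    // last raw GetTickCount value seen

// Emulated GetTickCount64: must be called at least once per ~49.7 days,
// otherwise a wrap of the 32-bit counter goes unnoticed.
function EmuGetTickCount64: UInt64;
var
  Low: UInt32;
begin
  EnterCriticalSection(GLock);
  try
    Low := GetTickCount;
    if Low < GLastLow then // counter went backwards: it wrapped at 2^32
      Inc(GWraps);
    GLastLow := Low;
    Result := (UInt64(GWraps) shl 32) or Low;
  finally
    LeaveCriticalSection(GLock);
  end;
end;

initialization
  InitializeCriticalSection(GLock);
finalization
  DeleteCriticalSection(GLock);
-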
Help needed in testing emulated GetTickCount32/64
Kas Ob. replied to Kas Ob.'s topic in Algorithms, Data Structures and Class Design
Brilliant, thank you very, very much! This test is very hard to wait for and perform, and you did it; your sharing is really appreciated. -
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
@DelphiUdIT On a side note (after seeing you mention "mov ax, source" and "mov dest, ax"): if you are going to open a ticket, then report the hidden performance killer; it strikes randomly, due to the simplicity of the compiler and its use of specific 16-bit operation instructions. From "3.4.2.3 Length-Changing Prefixes (LCP)" https://www.intel.com/content/www/us/en/content-details/671488/intel-64-and-ia-32-architectures-optimization-reference-manual-volume-1.html Most Delphi string operations use 16-bit loads and stores. The LCP effect occurs when one of these instructions is aligned (or positioned) at a specific address such as 13h or 14h. Repeating and reproducing it is hard, as it may stay hidden until some change in a completely different part of the project triggers a small shift. Also very important: these instructions are generated by the Delphi compiler without any consideration for such alignment, so what happens in real life, and when you benchmark, is that it keeps shifting up and down, differing from version to version and from build to build. What we witness looks like a steady result, but in fact the bottlenecks have shifted from one place to another. Fixing this would ensure better and more consistent string-handling performance in any Delphi binary produced. I believe this was the case not long ago, when I suggested replacing a 16-bit field fill with a 32-bit one and letting it overflow; the suggestion seemed outlandish unless that instruction hit a specific address and then caused a 6 or 11 cycle stall (full pipeline stop), and it looks like that was the case, as Vencent observed.
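To illustrate the 16-bit vs 32-bit store idea on a made-up record (the type and field names are mine; whether the prefixed store actually stalls depends on where the compiler happens to place it):

type
  PHeader = ^THeader;
  THeader = packed record
    Len: Word;    // 16-bit field
    Flags: Word;  // adjacent field, deliberately clobbered below
  end;

procedure SetLen16(P: PHeader; AValue: Word);
begin
  P^.Len := AValue;  // emits a 66h-prefixed 16-bit store: an LCP candidate
end;

procedure SetLen32(P: PHeader; AValue: Word);
begin
  // 32-bit store, no operand-size prefix; only valid because we are
  // allowed to overwrite Flags with zero here ("let it overflow")
  PCardinal(P)^ := AValue;
end;
-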
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
If it could have been done better, it would have been. Anyway, you need to look at the hardware implementation and its basic building blocks: you can't invent an instruction or algorithm easily; you either need huge hardware circuits, which will cost more delay and other problems, or you utilize already existing µops! So here is a more detailed look at MOVSQ: https://uops.info/html-lat/AMT/MOVSQ-Measurements.html It shows that 17 or 18 micro-ops are used for one MOVSQ, and that is a huge latency. -
Execution time difference between 32b and 64b, using static arrays
Kas Ob. replied to lg17's topic in General Help
The compiler is generating very bad instructions; even calling the RTL Move would be faster than using MOVSQ!! https://uops.info/html-instr/MOVSQ.html 4 cycles in theory, but in reality it is 7-8 cycles on my CPU! Suggestion: find these and manually move/copy the fields, especially in tight loops.
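What I mean by copying the fields manually, on a made-up record; the plain assignment is where the compiler may emit the MOVSQ sequence on x64:

type
  TVec3 = record
    X, Y, Z: Double;
  end;

procedure CopyViaAssignment(var Dst: TVec3; const Src: TVec3);
begin
  Dst := Src; // the compiler may use a MOVSQ-based block copy here (x64)
end;

procedure CopyFieldByField(var Dst: TVec3; const Src: TVec3);
begin
  // plain loads and stores: no string-instruction microcode involved
  Dst.X := Src.X;
  Dst.Y := Src.Y;
  Dst.Z := Src.Z;
end;
-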
I think the best way is to have a singleton that is initialized very early in the project (preferably right after the MM and before the forms). This singleton would hook TComponent.Create and Destroy, grab the component's address (value/pointer) and add it to an internally managed list; this list holds each component associated with extra data, and that extra data could be a string, an integer, a record... even a class. This solves/removes the need to manually manage the extra data (e.g. a string) and also ensures memory integrity for it without interfering with the associated component in any way. On top of that, helper(s) for specific components can be used for easier and simplified use.
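A simplified sketch of the idea, leaving out the hooking part (components are registered explicitly here, and FreeNotification does the cleanup; all names are mine):

uses
  System.Classes, System.Generics.Collections;

type
  // Maps a component to extra data without touching the component itself.
  TComponentTagger = class(TComponent)
  private
    FData: TDictionary<TComponent, string>;
  protected
    procedure Notification(AComponent: TComponent; Operation: TOperation); override;
  public
    constructor Create(AOwner: TComponent); override;
    destructor Destroy; override;
    procedure SetTag(AComp: TComponent; const AValue: string);
    function GetTag(AComp: TComponent): string;
  end;

constructor TComponentTagger.Create(AOwner: TComponent);
begin
  inherited;
  FData := TDictionary<TComponent, string>.Create;
end;

destructor TComponentTagger.Destroy;
begin
  FData.Free;
  inherited;
end;

procedure TComponentTagger.Notification(AComponent: TComponent; Operation: TOperation);
begin
  inherited;
  if Operation = opRemove then
    FData.Remove(AComponent); // component destroyed: drop its extra data
end;

procedure TComponentTagger.SetTag(AComp: TComponent; const AValue: string);
begin
  FData.AddOrSetValue(AComp, AValue);
  AComp.FreeNotification(Self); // get told when AComp is destroyed
end;

function TComponentTagger.GetTag(AComp: TComponent): string;
begin
  if not FData.TryGetValue(AComp, Result) then
    Result := '';
end;

With the hook in place, SetTag and the opRemove cleanup would be driven from TComponent.Create and Destroy instead of explicit calls.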
-
From the latest RFC, https://www.rfc-editor.org/rfc/rfc8259#section-2 : insignificant whitespace is allowed before or after the structural characters. That wasn't spelled out the same way in the older RFC(s), and even now it is still not defined concretely. Protocols that digitally sign payloads, like ACME, require no spaces, as the receiver should be able to parse the payload and regenerate the exact same bytes after parsing, so the hash still matches.
-
In addition to Remy's points, I want to focus on this: your JSON is not standard, and many parsers might see it as broken; for this use, JSON should be in compact form and shouldn't have spaces.
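For example, with the stock System.JSON classes the default serialization is already the compact form suitable for hashing and signing (the field names below are just ACME-flavored placeholders):

uses
  System.JSON;

procedure CompactJsonDemo;
var
  Obj: TJSONObject;
begin
  Obj := TJSONObject.Create;
  try
    Obj.AddPair('protected', 'eyJhbGciOi...');   // placeholder values
    Obj.AddPair('payload', 'eyJpZGVudGlm...');
    // ToJSON emits no insignificant whitespace:
    // {"protected":"eyJhbGciOi...","payload":"eyJpZGVudGlm..."}
    // and those exact bytes are what gets hashed and signed
    Writeln(Obj.ToJSON);
  finally
    Obj.Free;
  end;
end;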
-
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
If only it were that simple. And NO, that will not work; you are misunderstanding how encryption works, and don't be offended, this is not for everyone. First, you have to read a little about how encryption modes work: https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation Second, you must understand and never forget one essential element of encryption: the internal state (the context) advances with every encryption operation performed, meaning... well, this gets a little complicated here...
1) If the data length (as a whole) is less than one block, the output is similar to CTR with IV = 0. CTR doesn't have an IV, to be exact, but a counter/nonce that works as one; since it is 0 for the first block, this matches DevArt's broken implementation for this case.
2) If the data length is longer than 8 bytes, then all full blocks (n*8) are pure CBC, and that part is correct.
3) Then comes the remainder: [1..7] bytes when the original data length was more than 8. In this case nothing out of the box can help you decrypt it. Why? Because the internal state for this last block is unknown and depends on the previous one, and this can't be fixed by supplying a key or an IV!! From the Wikipedia link you see those boxes indicating encryption, but the state is changed and can't be injected or altered; this is far more complicated than just following the wrongdoing.
As I said, you need an implementation, and it needs to be corrupted to be compatible. Just to make sure you understand what I am saying: it is CTR for less than 8 bytes, CBC for all the 8*n bytes, and any remaining bytes are handled more like CFB mode; and in all cases the state is one!, meaning you can't switch or reinitialize the cryptor/decryptor, you need the state to be chained as usual. You also didn't answer my question about whether it is essential to decrypt using a different language/package/library, as you can use the DevArt Encryptor to decrypt anything encrypted with it; it is wrong but consistent. Also look at this clean and simple implementation in C# (no excessive OOP or forced padding): https://gist.github.com/vbe0201/af16e522562b2122953206d8bbd1eb50#file-alefcrypto-cs-L368 At the line I marked comes the adjustment to handle full blocks, or the xor-after-encryption for less than a full block, and this in theory should solve your problem. If you want to go this way, build a project and test cases for multiple values with multiple lengths, and come back if you can't do it, and I will try to help. And you know what, the CTR you found returning the same result for less than 8 bytes is identical to CFB with IV = 0; so it is CBC for all full 8-byte blocks, then the rest is handled as CFB, and to be accurate CFB8 (as in 8 bits, operating at the bit level), in case you run into these distinctions searching the web.
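To make the broken scheme concrete, here is its shape as I described it, with the single-block cipher passed in (a hypothetical Blowfish-ECB over one 8-byte block; this is a sketch of the *broken* behavior, not something to imitate):

uses
  System.SysUtils;

type
  TBlock8 = array[0..7] of Byte;
  // Caller supplies the raw single-block primitive, e.g. Blowfish-ECB
  TEncryptBlock = function(const B: TBlock8): TBlock8;

// CBC (xor then encrypt) for the full blocks, then encrypt-then-xor for
// the tail: the CFB-like tail handling that makes the output non-standard.
procedure BrokenCbcEncrypt(var Data: TBytes; Encrypt: TEncryptBlock);
var
  Prev, Cur: TBlock8;
  I, J, Full, Rest: Integer;
begin
  FillChar(Prev, SizeOf(Prev), 0);  // IV = 0, matching the observed output
  Full := Length(Data) div 8;
  Rest := Length(Data) mod 8;
  for I := 0 to Full - 1 do
  begin
    Move(Data[I * 8], Cur, 8);
    for J := 0 to 7 do
      Cur[J] := Cur[J] xor Prev[J]; // proper CBC: xor with previous block...
    Cur := Encrypt(Cur);            // ...then encrypt
    Move(Cur, Data[I * 8], 8);
    Prev := Cur;
  end;
  if Rest > 0 then
  begin
    Cur := Encrypt(Prev);           // encrypt the previous ciphertext...
    for J := 0 to Rest - 1 do       // ...then xor the plaintext tail (CFB-like)
      Data[Full * 8 + J] := Data[Full * 8 + J] xor Cur[J];
  end;
end;

Note that when Length(Data) < 8 this degenerates to xoring with Encrypt(0), which is exactly the CTR/CFB-with-IV=0 equivalence above.
-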
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
@dormky I understand the problem now in all the needed depth. I understand the wrong implementation in TCREncryptor, which shows when DataHeader is set to ehNone; I can solve this, and so can you. The buggy implementation is as follows:
1) CBC mode is implemented right, but falls short when the data length is not a multiple of the 8-byte block length.
2) For all full 8-byte blocks it is correct CBC, i.e. xor then encrypt.
3) When there are extra bytes it does something wrong: it encrypts then xors, and that is why you get values identical to CTR mode when the data is shorter than one block, as the CTR state is uninitialized and holds 0.
Now, to your needs: above you mentioned Python, and in your last post you are using C#. If that is not a problem, then why not use DevArt's broken implementation to decrypt? Notice that no matter what you are going to use, you need a broken, altered implementation. So you have the ball; decide and tell me, and I will try to help. -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
Tomorrow I will try to figure something out for you, no promises though. After almost two hours of digging into this, I can say the encryption is utterly broken and non-standard CBC; it is wrong! -
What is the algorithm used to derive the key to a TMyEncryptor ?
Kas Ob. replied to dormky's topic in Databases
I don't have any experience with LockBox3, and the fact that there are memory leak(s) is reason enough to stay away from it. Anyway... 41 is the hex of Ord('A'); in other words, it looks like it didn't encrypt anything. To confirm, try 'B': if you get 42, then either the library is broken or you are using it wrong!