
Leaderboard


Popular Content

Showing content with the highest reputation on 05/06/20 in all areas

  1. Pierre le Riche

    FastMM5 now released by Pierre le Riche (small background story)

    Rep movs is only used for large blocks, where it is the fastest mechanism on modern processors. For smaller blocks SSE2 is used, if available. As an interesting aside, apparently the Linux kernel uses rep movs for all moves. The rationale behind that is to force the CPU manufacturers to optimize rep movs even for small moves. Historically rep movs has been hit and miss from one CPU generation to the next. I would like to add more platform support, but there has to be sufficient demand to offset the development and maintenance burden. I don't want v5 to become an "ifdef hell" like, to a large extent, v4. v4 supports so many different configurations that it is littered with ifdefs. It makes it really hard to expand functionality because there are so many different combinations that need to be tested. That said, it is on my "to-do" list to add support for most or all of the platforms supported by the latest Delphi versions. I think it would be particularly useful to have the debugging features available on other platforms as well.
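    For reference, a minimal Win64 sketch of a rep movs based copy (my own illustration, not FastMM5's actual code; it assumes non-overlapping blocks and falls back to the RTL Move on other targets):

      procedure MoveWithRepMovs(const ASource; var ADest; ACount: NativeInt);
      {$IFDEF WIN64}
      asm
        // Win64 ABI: RCX = @ASource, RDX = @ADest, R8 = ACount.
        // RSI/RDI are non-volatile, so save and restore them.
        push rsi
        push rdi
        mov rsi, rcx
        mov rdi, rdx
        mov rcx, r8
        rep movsb        // copy RCX bytes forward from [RSI] to [RDI]
        pop rdi
        pop rsi
      end;
      {$ELSE}
      begin
        // Fallback for other targets: plain RTL move.
        Move(ASource, ADest, ACount);
      end;
      {$ENDIF}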
  2. Vandrovnik

    Function with just underscore _() seems to be valid

    With this info in mind, for bad customers you can start programming like this:

      const
        _ = 1;
        _____ = 2;
        x = _____ + (_-_-_) + 0 + _+0-0+_;
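    The function from the topic title compiles just as happily - a tiny sketch (program name and body are made up):

      program UnderscoreDemo;

      {$APPTYPE CONSOLE}

      // A function whose name is just an underscore is a perfectly legal identifier.
      function _(const AText: string): string;
      begin
        Result := '[' + AText + ']';
      end;

      begin
        Writeln(_('hello')); // prints [hello]
      end.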
  3. Stefan Glienke

    RTTI info for TBytes using runtime packages

    The issue is that TBytes is an alias for TArray<Byte> - generic types used across different modules have several implications, one of them being that they manifest in each module with their own typeinfo generated. Try this code for example:

      uses
        Rtti, SysUtils;
      var
        ctx: TRttiContext;
      begin
        Writeln(ctx.GetType(TypeInfo(TBytes)).QualifiedName);
      end.

    In a regular console application it will print System.TArray<System.Byte>, while you would assume that it should print System.SysUtils.TBytes. Now if you enable runtime packages for this console application and let it link with rtl, you will get an ENonPublicType here. Every module that is using TBytes is not referencing some type that is declared in the RTL runtime package but emitting its own version of TArray<Byte> into its binary. Personally I think this is an issue with the compiler treating the declaration TBytes = TArray<Byte> differently from TBytes = array of Byte in terms of the typeinfo generated - I was sure there is a QP entry somewhere about it but I cannot find it right now.

    To further prove that the aforementioned assumptions of other posters are wrong, use the following code in a console application linking with the rtl package:

      uses
        Rtti, TypInfo, Classes, SysUtils;
      var
        ctx: TRttiContext;
      begin
        Writeln(ctx.GetType(TBytesStream).GetField('FBytes').FieldType.Handle = TypeInfo(TBytes));
      end.

    This will print FALSE, as the typeinfo of the FBytes field - which is of type System.SysUtils.TBytes - in fact differs from the typeinfo generated for the usage of TBytes in the console application binary. In fact there does not even exist a type called TBytes but only System.TArray<System.Byte>, as TBytes is just an alias as I said, and the declaration TBytes = type TArray<Byte> is not legit and raises "E2574 Instantiated type can not be used for TYPE'd type declaration".
  4. Hi, when you read the title, you probably already had the standard answer in your head: "Does not work with Indy, supports only OpenSSL 1.0.2 at most and thus no TLS 1.3". I can reassure you, that is not my point. Or actually, even more precisely: that is exactly my point 😉

I've spent "a little" time writing the Indy support for OpenSSL 1.1.1 and TLS 1.3, and added a pull request to Indy: #299. At the same time I have fixed a few issues that have been posted to GitHub (see the PR). I wrote 2 new IO handlers (one for the server and one for the client); the old ones are unchanged to avoid conflicts. Everything was written and tested in Delphi Berlin 10.1.2 on Win32 and Win64. I have neither macOS nor iOS nor Linux nor Android, nor FreePascal, nor older (or newer) Delphi versions. I have tried to keep older Delphi versions in mind to ensure that it will run on them, but there have been no tests on my part. I have tested it extensively in small test applications with other servers/clients. In addition, I built it into a large real-world program with a TCP server/client, SMTP/IMAP/POP clients, an FTP client and an HTTP client, and it also ran there without problems.

Unfortunately the nice man who has provided new OpenSSL binaries at indy.fulgan.com has said that he does not offer versions > 1.0.2 anymore. So I used the versions from slWebPro in the beginning (they even still work on WinXP); later I used the versions from Overbyte, because they do not have any external dependencies (and are digitally signed, but no XP anymore^^). Both worked without problems.

All files are located in the subfolder "Lib/Protocols/OpenSSL". There are also subfolders "static" and "dynamic", which offer quite extensive imports of the OpenSSL API, once for static linking and once for dynamic loading/unloading. For dynamic loading there are also possibilities in "IdOpenSSLLoader.pas" to trigger the loading/unloading yourself, if you need the API outside of the IO handlers (e.g. for your own X.509 generation). To save myself from writing the imports twice, I wrote a kind of intermediate code in the folder "Intermediate", from which I generate the two variants with "GenerateCode". The tool "GenerateCode" does only simple string manipulation and was really only designed for Berlin, so I didn't bother with backward compatibility. As a normal user of the IO handlers you don't need it; it is only needed if you make changes to the API implementation.

So, and now comes your part: it would be nice if one or the other of you would test this, so that it is not only WOMM ("works on my machine") certified, but can withstand more real situations. For me it also works with the Indy that comes with Delphi Berlin, when I create another unit which provides some new Indy types and functions. Of course some units have to be adapted locally to use the new unit.
  5. If I understand correctly, FastMM5 handles several arenas instead of the single one in FastMM4, and tries all of them until it finds one that is not currently locked, so thread contention is less likely to happen. One area where FastMM5 may have room for improvement is its naive use of "rep movsb", which should rather be a non-volatile SSE2/AVX move for big blocks. Check https://stackoverflow.com/a/43574756/458259 for numbers, for instance.

ScaleMM2 and the FPC heap both use a threadvar arena for small blocks, so they don't bother to check for any lock. They are truly lock-free. But each thread maintains its own small-block arena, so it consumes more memory. Other memory managers like Intel TBB or JeMalloc also have a similar per-thread approach, but consume much more memory. For instance, TBB is a huge winner in performance, but it consumes up to 60 (sixty!) times more memory! So it is not usable in practice for serious server work - it may help for heavily multithreaded apps, but not for common services. I tried those other MMs with mORMot and a real, heavily multi-threaded service. Please check the comments at the beginning of https://synopse.info/fossil/artifact/f85c957ff5016106

One big problem with the C memory managers is that they tend to ABORT the process (not SIGSEGV but SIGABRT) if there is a dangling pointer - which happens sometimes, and is very difficult to track. This paranoia makes them impossible to use in production: you don't want your service to shut down with no prior notice just because the MM complains about a pointer!

Our only problem with the FPC heap is that with long-term servers it tends to fragment the memory and consume several GB of RAM, whereas a FastMM-like memory manager would have consumed much less. The FPC heap's memory consumption doesn't leak: it stabilizes after a few days, but it is still higher than FastMM's. The problem with FastMM (both 4 and 5) is that they don't work on Linux x86_64 with FPC. This is why I proposed to help Eric with FPC support.
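The arena try-lock idea described above can be sketched in a few lines of Delphi (my own illustration, not FastMM5's actual code; the arena record, arena count and AtomicCmpExchange-based locking are all assumptions):

  const
    ArenaCount = 8; // hypothetical number of arenas

  type
    TArena = record
      Locked: Integer; // 0 = free, 1 = taken; a real arena holds block pools too
    end;

  var
    Arenas: array[0..ArenaCount - 1] of TArena;

  // Try to claim an arena without blocking.
  function TryLockArena(var AArena: TArena): Boolean;
  begin
    Result := AtomicCmpExchange(AArena.Locked, 1, 0) = 0;
  end;

  procedure UnlockArena(var AArena: TArena);
  begin
    AtomicExchange(AArena.Locked, 0);
  end;

  // Keep cycling over the arenas until one can be claimed, so a thread
  // rarely has to wait on a lock held by another thread.
  function AcquireAnyArena: Integer;
  var
    i: Integer;
  begin
    repeat
      for i := 0 to ArenaCount - 1 do
        if TryLockArena(Arenas[i]) then
          Exit(i);
    until False;
  end;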
  6. SetEnvironmentVariable() updates the environment of the calling process only; changes are not persisted when the process exits. You don't need to update the PATH to handle that. Use SetDllDirectory() or AddDllDirectory() instead.
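For example, a minimal console sketch (the directory is made up, and the import is declared directly against kernel32 so it does not depend on a particular Delphi version's Winapi.Windows declarations):

  program DllDirDemo; // hypothetical

  {$APPTYPE CONSOLE}

  uses
    Winapi.Windows, System.SysUtils;

  // SetDllDirectoryW extends the DLL search path of this process only;
  // nothing is written to the user or system PATH.
  function SetDllDirectory(lpPathName: PWideChar): BOOL; stdcall;
    external kernel32 name 'SetDllDirectoryW';

  begin
    if not SetDllDirectory('C:\MyApp\Dlls') then
      RaiseLastOSError;
    // Subsequent LoadLibrary calls will also search C:\MyApp\Dlls.
  end.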
  7. Rollo62

    Function with just underscore _() seems to be valid

    Who said that Delphi couldn't look like C++?
  8. Some of the benchmark results are just eyewash fwiw
  9. I find using unit namespaces to be extremely helpful in structuring code, e.g. VSoft.Core.Utils.String - I can look at that unit name and know which project it's in (core) and which folder it's in (Utils). I always prefix my units (in new code, not all older code has been changed yet) with VSoft (company name) - it a) helps avoid conflicts with other libraries and b) lets me differentiate between the code I wrote and a library. I really wish all library vendors would do the same. Utils.pas in 20 different libraries is not helpful! Other than that, I will say code navigation is not one of RAD Studio's strengths... hopefully with them moving to LSP this will improve in the next few releases.
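As a minimal sketch of that convention (the exact unit and function names here are made up for illustration):

  unit VSoft.Core.Utils.Strings;
  // <company>.<project>.<folder>.<topic>: the unit name alone tells you
  // which codebase it belongs to and where it lives on disk.

  interface

  function CollapseWhitespace(const AValue: string): string;

  implementation

  uses
    System.SysUtils;

  function CollapseWhitespace(const AValue: string): string;
  begin
    // Trivial body, just to keep the sketch compilable.
    Result := Trim(AValue);
  end;

  end.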
  10. yup. And the comments are not simply stating the obvious, but rather why you put this part here and are doing this other part later. And not just for your own benefit, but for people down the road years later. Here's a CASE STUDY of sorts for anybody who's interested. Grab something to drink first...

I've been working on updating some code written over 10 years ago that is designed to take some scanned documents and attach them to invoices in a DB that they're related to. When done properly, the user can pull up an invoice and then view the scanned documents. The problem is to extract enough identifying data from the scanned documents to look up related records in the DB and then "attach" them somehow. There's nothing magical or proprietary here. It's just a brute-force process that anybody would have to do in similar situations.

There are a bunch of offices this client has that are scattered around the country. They take orders, post them to a database, and later on send out a truck with the stuff that was ordered. Like what Amazon does -- only the Amazon folks scan a barcode and take a photo upon delivery and the whole loop is closed. These guys print out a slip of paper, pull stuff from inventory, put it on a truck, send the driver out to deliver the stuff, and he comes back and puts the ticket in a pile. That pile is scanned into a single PDF file that's sent to us for processing. We split it up, scan each page, and attach them to related invoices that are already online. Easy peasy, right? HA!

This application uses an OCR engine to extract a couple of numbers from each scanned image. The original author did a bunch of OCR processing all at once, and put the results into a big list of records. Then when ALL of the OCR processing was done, they came back and digested that data. I don't know why he did it that way -- it had a side-effect of producing hundreds if not thousands of files in a single folder, and anybody with much experience using Windows knows that's a recipe for a horribly slow process. Maybe they didn't process that many documents per day back when it was first written, but today it's a LOT!

Part of the "digesting" looked up one of the numbers in a database in order to find an associated invoice#. If they found one, they proceeded to add a database record associating these numbers with the invoice. They went through this data in batches that corresponded to the piles of documents scanned in together in each office. Later on, they renamed the file that was scanned with the OCR, stuffed it into a Zip file, and recorded both the zip file name and the new name of the document in another record. So when you find the invoice online, you can see if it has this attachment and then pull up the attachment and view it.

I found this process far more complicated than it needed to be. But after a while this approach began to make sense if you look at it as a linear process: scan, extract numbers, look up invoices, annotate the DB records, store the scanned document. It also had a benefit that the original author did recognize, which is that a large number of these scanned documents (15%-25%) did not have corresponding invoices to associate with in the DB. (It may not have been that high originally, but it's that high today. I don't know if anybody is even aware of that.) So if the numbers didn't yield a lookup, the document was chucked aside and they just went on to the next one.
There's a big problem I ran into, however: due to some as-yet undetermined issues, the documents that were scanned (and renamed) are sometimes not getting added to a zip file. Because this process was further down the assembly line from where the records were added to the database associating the extracted numbers, the zip file and the filename, the original author didn't take into account what to do if a given file didn't get stored in a zip file for some reason. Oops! So users would click to view one of these files, and they'd get an error saying the system can't find the document, because it's not in the zip file where it was expected to be.

Another person might take an approach where each document is scanned, its numbers extracted, the DB looks up the invoice, and only then is it added to a zip file and saved. Each one would be processed in its entirety before the next one was looked at. There would appear to be a lot more integrity in this process because the point where the data is recorded to the DB is "closer" to when the files are added to a zip file -- so if the latter fails, the DB process can be reversed by using a "transaction". As it happens, you can't process each one completely before the next one, because some of them represent multi-page documents. We basically get piles of documents that all come from the same office, and they can be processed separately from each other pile. But within that pile, there may be 2, 3, 4, or more that are related and need to be stored together.

Suffice it to say, none of this was documented in the code. There was just a rough process that I could follow, with bits and pieces sticking off to the side that didn't seem to have any relationship to anything until after observing a sufficiently large number of documents being processed and realizing they were dealing with "exceptions" of sorts.

One thing I lobbied for early on was to replace the OCR engine with something newer and faster. After I did, I found all sorts of decisions that appeared to have been made because the original OCR process took so damn long -- part of which was a whole multi-threading mechanism set up to parallel-process things while the OCR engine was busy. (They could have just run the OCR as a separate thread, but they dumped most of the logic into the thread as well. I think that was because it processed so fast. Dunno.) With the new OCR engine, it takes 2-3 seconds to process each document in its entirety. The old OCR engine took 2-3 minutes per document. In situations like this, people rarely bother to explain why they organized things the way they did to account for the amount of time needed to do a particular part of the process. They figure it's pretty darn obvious. Well, no it wasn't. Especially after replacing the OCR engine with a much faster one.

One of the first things I did was refactor things so I could switch between OCR engines to support some kind of A/B testing. In that process, I saw how (needlessly) tightly coupled the code was to the OCR process when all it was doing was pulling out two numbers. In the original approach, there was a ton of logic associated with the OCR process that I was able to pull out into a base class (in a separate unit) and reuse with the new OCR engine. I was able to re-do the overall process to better reflect different activities, and the code became much easier to read.
At this point, however, management has not given me the go-ahead to deal with the problem of losing attachments that can't be inserted into archives for whatever reason, but it won't be as hard to do now as it would have been in the original code. Management thought this would be maybe a week's worth of work. I spent several weeks just figuring out how the code worked! No documentation existed on it anywhere, of course, and virtually no comments existed in the code. It's finally coming to a close a couple of months later than expected, and they have no appetite to hear about the problems I ran into along the way. ANY kind of documentation would have been helpful in this case. But unfortunately, this is pretty much the norm from what I've seen over the years.
  11. Stefan Glienke

    RTTI info for TBytes using runtime packages

    Regardless of the current typeinfo issue, I would always check for TypeKind = tkDynArray (and possibly even tkArray) and ElemType = Byte, because then you can also handle any custom declared TMyBytes = array of Byte type.
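    A minimal sketch of that check using System.Rtti (TMyBytes and the function name are made up for illustration):

      uses
        System.Rtti, System.TypInfo, System.SysUtils;

      type
        TMyBytes = array of Byte; // custom byte-array type, handled the same way as TBytes

      function IsByteDynArray(ATypeInfo: PTypeInfo): Boolean;
      var
        ctx: TRttiContext;
        t: TRttiType;
      begin
        t := ctx.GetType(ATypeInfo);
        // tkDynArray with element type Byte covers TBytes, TArray<Byte>
        // and any custom "array of Byte" alias.
        Result := (t is TRttiDynamicArrayType) and
          (TRttiDynamicArrayType(t).ElementType.Handle = TypeInfo(Byte));
      end;

      // Usage: both of these return True.
      //   IsByteDynArray(TypeInfo(TBytes));
      //   IsByteDynArray(TypeInfo(TMyBytes));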
  12. David Heffernan

    Parallel for 32 vrs 64bits

    The issue is that the x64 trig functions are very slow for very large values. Nobody actually wants to know sin for 99999999/pi radians. Put in sensible values for the argument to sin and it looks more reasonable. For instance, try using T := Sin(i/99999999);
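    A minimal sketch of the difference (the loop count and the timing code are my own, not from the thread):

      program SinTiming; // hypothetical

      {$APPTYPE CONSOLE}

      uses
        System.Diagnostics;

      const
        N = 10000000;
      var
        i: Integer;
        Sum: Double;
        SW: TStopwatch;
      begin
        Sum := 0;

        // Very large arguments: this is where the x64 Sin implementation is slow.
        SW := TStopwatch.StartNew;
        for i := 1 to N do
          Sum := Sum + Sin(i * 99999999.0);
        Writeln('Huge arguments:     ', SW.ElapsedMilliseconds, ' ms');

        // Sensible arguments in a small range around zero.
        SW := TStopwatch.StartNew;
        for i := 1 to N do
          Sum := Sum + Sin(i / 99999999);
        Writeln('Sensible arguments: ', SW.ElapsedMilliseconds, ' ms');

        Writeln(Sum:0:4); // keep the results "used" so the loops are not optimized away
      end.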
  13. aehimself

    Notification custom look

    I am against custom notifications. I disliked MSN Messenger, early Outlook (2003 era, maybe; can't recall), now Viber and all the others doing this, as it completely breaks a computer's look-and-feel. Especially since later OSes have their own native notification system. I half-implemented this once, but soon I realized that it's not that easy. A notification is only worth it if it's above everything else on your screen - except in some situations: you'll have to check for full-screen apps (gaming, video), possibly add Do-Not-Disturb intervals, and maybe allow the user to say that if a specific application is running, don't show anything. And what happens if your notification covers the native notification, or vice versa...? It would be nice to have some syncing to make sure that never happens. That's a lot of extra work, and the OS already knows it all.
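    For comparison, a minimal sketch of handing the job to the OS through Delphi's System.Notification unit (assuming a platform where TNotificationCenter is supported; the name, title and body text are made up):

      uses
        System.Notification;

      procedure ShowNativeNotification;
      var
        Center: TNotificationCenter;
        Notification: TNotification;
      begin
        Center := TNotificationCenter.Create(nil);
        try
          Notification := Center.CreateNotification;
          try
            Notification.Name := 'DemoNotification';
            Notification.Title := 'Download finished';
            Notification.AlertBody := 'The report is ready to open.';
            // The OS handles the display rules: full-screen apps, Do-Not-Disturb,
            // stacking with other notifications, and so on.
            Center.PresentNotification(Notification);
          finally
            Notification.Free;
          end;
        finally
          Center.Free;
        end;
      end;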