
Leaderboard


Popular Content

Showing content with the highest reputation on 01/28/19 in all areas

  1. Uwe Raabe

    Is it really good practice to create Forms only as needed? Always?

    For these cases I prefer moving the settings to a separate settings class, independent of the settings form. Then I can create and initialize an instance of this settings class at startup, load the settings from some storage and adjust them with command line parameters. The settings are then available all over the application without having to rely on the settings form being created. I often have the case where I need access to the settings long before the main form is created. This would be hard to achieve using the settings form as the central storage for the settings.
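    A minimal sketch of that split, with hypothetical unit, type and key names (App.Settings, TAppSettings), assuming plain INI storage and a simple command line override:

      unit App.Settings;

      interface

      type
        TAppSettings = class
        private
          FServerUrl: string;
          FTimeoutSec: Integer;
        public
          procedure LoadFromIni(const AFileName: string); // load from some storage
          procedure ApplyCommandLine;                     // adjust with command line parameters
          property ServerUrl: string read FServerUrl write FServerUrl;
          property TimeoutSec: Integer read FTimeoutSec write FTimeoutSec;
        end;

      var
        Settings: TAppSettings; // created in the .dpr, long before the main form exists

      implementation

      uses
        System.SysUtils, System.IniFiles;

      procedure TAppSettings.LoadFromIni(const AFileName: string);
      var
        Ini: TIniFile;
      begin
        Ini := TIniFile.Create(AFileName);
        try
          FServerUrl := Ini.ReadString('Connection', 'ServerUrl', 'http://localhost');
          FTimeoutSec := Ini.ReadInteger('Connection', 'TimeoutSec', 30);
        finally
          Ini.Free;
        end;
      end;

      procedure TAppSettings.ApplyCommandLine;
      var
        I: Integer;
      begin
        // command line overrides whatever was loaded from storage, e.g. /server:http://example.org
        for I := 1 to ParamCount do
          if ParamStr(I).StartsWith('/server:', True) then
            FServerUrl := ParamStr(I).Substring(Length('/server:'));
      end;

      end.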
  2. Memnarch

    Is it really good practice to create Forms only as needed? Always?

    I can tell you, if our application created all of its GUI at startup, hell would freeze over. And your system resources, too. The ugly thing is, memory is not what will kill your application. The handle limit will! There is a GDI handle limit (in addition to others) you'll run into.
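    For reference, the default per-process GDI object limit on Windows is 10,000. A small, hedged sketch of how one could keep an eye on it via the Win32 GetGuiResources call (the warning threshold below is only an illustration):

      uses
        Winapi.Windows, System.SysUtils;

      const
        GR_GDIOBJECTS = 0; // Win32 constant; redeclared here in case the RTL header in use lacks it

      procedure CheckGdiUsage;
      var
        GdiCount: DWORD;
      begin
        GdiCount := GetGuiResources(GetCurrentProcess, GR_GDIOBJECTS);
        if GdiCount > 8000 then // getting close to the 10,000 default limit
          OutputDebugString(PChar(Format('GDI handles in use: %d', [GdiCount])));
      end;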
  3. David Heffernan

    Is it really good practice to create Forms only as needed? Always?

    It's not about the memory. It's about removing global variables that allow form instances to poke at each other's internals. It's analogous to making fields private rather than public. Those form variables only exist because in Delphi 1 the designers were trying to ape the functionality of VB.
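    A sketch of the pattern this implies (TOptionsForm is hypothetical; the global form variable the IDE generates for it is simply deleted, and the form is removed from the auto-create list):

      procedure ShowOptionsDialog;
      var
        Dlg: TOptionsForm;
      begin
        Dlg := TOptionsForm.Create(nil); // created only when needed, not at startup
        try
          Dlg.ShowModal;                 // no other unit can reach into this instance through a global
        finally
          Dlg.Free;
        end;
      end;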
  4. Martin Sedgewick

    Rio quality disappoint

    What do you think about some hints at compile time, like "You have a lot of components on this form, which may impact performance"? That seems like something that could be hinted at, and you could always switch it off. IDE Fix is a must!
  5. Stefan Glienke

    Allocation-Free Collections

    As I mentioned before, the jump optimization highly depends on the CPU - and of course it only affects methods that are being called and not inlined. When inlined you have a jump no matter what. Only when being called can it benefit from two returns, one in each branch, to avoid the jump over.

    Edit: Clarification - when inlined, putting the raise first is better because it only needs one conditional jump instruction (jump over the raise call or not), which is taken in the common case. When putting it below, there is still the conditional jump, which is not taken in the common case, but there is a jump over this code every time the common path is used.

    What I would do regardless is to put the exception-raising code into a non-inlined subroutine to reduce the generated binary code, because then the code being jumped over is just a call (5 bytes on 32 bit).

    Another optimization is to make use of {$POINTERMATH ON} for your pointer type, because then you can write: Target := @FData; Target[FCount] := AItem; And another neat trick to check whether an Integer index is in range using only one compare: if Cardinal(AIndex) >= Cardinal(FCount) then

    By the way - putting all cases into the same binary often influences results, and even the order in which you execute them changes the output - I had a similar case where just changing the order made one or the other faster or slower. Measuring performance is not as easy as it sounds :)
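    A minimal sketch of how those three suggestions could fit together (the container, field and routine names here are hypothetical, not taken from any posted code; assumes default compiler settings):

      {$POINTERMATH ON}

      uses
        System.SysUtils;

      type
        PIntItem = ^Integer; // declared under $POINTERMATH, so it supports indexing

      procedure RaiseIndexError; // deliberately not inlined: the code jumped over is just a 5-byte call on 32 bit
      begin
        raise EArgumentOutOfRangeException.Create('Index out of range');
      end;

      type
        TIntSlots = record
        private
          FData: array[0..255] of Integer;
          FCount: Integer;
        public
          procedure Add(const AItem: Integer); inline;
          function Get(AIndex: Integer): Integer; inline;
        end;

      procedure TIntSlots.Add(const AItem: Integer);
      var
        Target: PIntItem;
      begin
        Target := @FData;        // pointer math: write through the typed pointer
        Target[FCount] := AItem;
        Inc(FCount);
      end;

      function TIntSlots.Get(AIndex: Integer): Integer;
      begin
        // one unsigned compare covers both AIndex < 0 and AIndex >= FCount
        if Cardinal(AIndex) >= Cardinal(FCount) then
          RaiseIndexError;
        Result := FData[AIndex];
      end;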
  6. Микола Петрівський

    Rio quality disappoint

    Lots of problems can be caused by custom components and experts if they are installed in the IDE. They all run in the same address space, so a small bug in a third-party component can break almost anything. So if you have problems, try disabling third-party packages in the IDE. Another problem is when people try to edit > 10 MB pas-files or forms with a huge number of components. In such situations the IDE is almost screaming: "Do not violate coding best practices, split it into multiple pieces".
  7. Erik@Grijjy

    Allocation-Free Collections

    You are mostly right about this. If you have a small list size, you are mostly measuring creation/destruction time. But that is exactly one of the reasons you may want to use a stack-based list instead. Although my main motivation wasn't speed (it was more about memory fragmentation, which is hard to test, as you mentioned), I added some speed tests to the repo anyway. Like you said, any speed test in this area is biased and does not represent a real-life situation, so you should take the results with a grain of salt.

    I tested 4 stack list implementations against Delphi's TList<Integer>:

    TStackList1: the original stack list from example 2 (with a fixed configurable size, but doesn't allow managed types).
    TStackList2: as TStackList1, but takes Stefan's "put exception at the bottom" suggestion into account.
    TStackList3: as TStackList2, but uses inlining.
    TStackList4: as TStackList1, but uses inlining.

    I tested these lists with a buffer of 256 bytes on Win32 using Delphi Rio. This shows how much faster each list is compared to Delphi's TList<Integer>:

    TStackList1: 2.78 times faster
    TStackList2: 2.77 times faster
    TStackList3: 2.81 times faster
    TStackList4: 3.03 times faster

    Note that measurements vary a bit from session to session. Sometimes "putting the exception at the bottom" has a positive impact, sometimes it doesn't. TStackList4 is always the fastest on Win32 though, but the differences aren't that big. On Win64, the results are about the same, but Delphi's TList<Integer> seems to perform a little bit better than on Win32.

    Also, as you said, performance depends a lot on the list size. For a list size of 32 bytes, the stack lists are about 10x faster, but for a list size of 2048 bytes, they are only about 1.7x faster. This reinforces my suggestion that stack lists are most suitable for smaller temporary lists. For larger lists, just stick to heap-based collections instead, and you can tweak the capacity, as you mentioned, to avoid re-allocations.

    You can't really test heap usage and fragmentation with these tests. Because we are creating and destroying lists in a loop, a smart memory manager just reuses the same heap memory over and over again without fragmentation. In real life, however, there will be lots of other things going on between the recreation of lists, and fragmentation will be more likely.
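    As a usage-level illustration of what the benchmark compares (the TIntStackList below is hypothetical and not one of the types from the repository): the heap list pays for Create/Free on every iteration, while a stack-based list record simply lives in the routine's stack frame:

      uses
        System.Generics.Collections;

      type
        TIntStackList = record               // fixed buffer, no managed types, no heap allocation
          Data: array[0..63] of Integer;
          Count: Integer;
          procedure Init; inline;
          procedure Add(AValue: Integer); inline;
        end;

      procedure TIntStackList.Init;
      begin
        Count := 0;
      end;

      procedure TIntStackList.Add(AValue: Integer);
      begin
        Data[Count] := AValue;               // bounds check omitted to keep the sketch short
        Inc(Count);
      end;

      procedure HeapVersion;
      var
        List: TList<Integer>;
        I: Integer;
      begin
        List := TList<Integer>.Create;       // heap allocation on every call
        try
          for I := 1 to 10 do
            List.Add(I);
        finally
          List.Free;                         // and a heap release
        end;
      end;

      procedure StackVersion;
      var
        List: TIntStackList;                 // just part of the stack frame, nothing to create or free
        I: Integer;
      begin
        List.Init;
        for I := 1 to 10 do
          List.Add(I);
      end;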
  8. Stefan Glienke

    Allocation-Free Collections

    Yes, and the performance of these things sometimes differs heavily between different CPUs (for example, I had a case where I got a severe slowdown in some code on my Ivy Bridge i5, whereas it was not there on a Skylake i7). Anyhow, not having a jump is still better than a predicted one. But that is micro optimization (which I did invest some time into, as the code I was testing is core framework code, which I usually consider collection types to be part of).
  9. Stefan Glienke

    appending to a dynamic array

    What's also interesting about Length and High is that if you are writing code that compiles directly into a binary, they are inlined by the compiler, whereas if you use them in units that are pre-compiled as part of a package (if you are a 3rd party component vendor, for example), they are not and always cause a call to System._DynArrayHigh and System._DynArrayLength. Funnily enough, this is not the case for the source shipped with Delphi, because their DCUs are not generated from their containing packages.
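    For context, a typical append that hits both intrinsics; whether they stay inlined or turn into the System._DynArrayLength / System._DynArrayHigh calls depends, as described above, on whether the unit is compiled into your binary or comes pre-compiled from a package (the helper routine itself is just a hypothetical example):

      procedure AppendValue(var AValues: TArray<Integer>; AValue: Integer);
      begin
        SetLength(AValues, Length(AValues) + 1); // Length: inlined, or a call to System._DynArrayLength
        AValues[High(AValues)] := AValue;        // High: inlined, or a call to System._DynArrayHigh
      end;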
  10. Well, not trunk, but in fact Mercedes Benz, I think, was the first manufacturer to put it somewhere else, and often people were looking for it in the usual place and could not find it :)