Stefan Glienke


Everything posted by Stefan Glienke

  1. Stefan Glienke

    GExperts Grep can use the MAP file

    Fair enough - though I would have used a different approach that does not require compiling the project: recursively collect all used units by analyzing their uses clauses.
  2. Stefan Glienke

    GExperts Grep can use the MAP file

    Please explain - what does the map file have to do with finding units that are included in the project via the search path? It would have to parse the uses clauses and then look for those files in the search path directories - figuring out whether a unit is actually being used (because the compiler might find a dcu instead) is left as an improvement.
  3. Stefan Glienke

    Check for override

    I am not going to search through JIRA right now, but JSON serialization and the years of aftershocks from the Generics.Collections refactoring come to mind. Or negative performance impact from changes to System.pas routines because of some feature I might not even use (I remember an occasion where some dynamic array handling routine suddenly got severely slower). I am not using FireMonkey myself, but I know there were numerous breaking changes - for the better, probably, but there it obviously was not such a problem to let the users bite the bullet and adapt. Then some useful IDE features went missing during the theming rework (like additional context menu items on the editor tabs, such as "open in explorer" or showing the save state of those units). You can probably find more examples by searching for "regression" in JIRA. This is just a general overview of the feeling I have with every new version - crossing fingers that nothing important broke. And it should not be that way.
  4. Stefan Glienke

    Check for override

    I did exactly that. And that is why I wrote "a clear guide at hand how to solve it" - breaking changes without a migration guide are bad. I have seen many occasions where Borgearcadera justified not fixing bugs with "backwards compatibility" - keeping broken code working and pushing people to keep writing code in a broken way (design-wise or literally) - or added ridiculous workarounds to fix something without requiring anyone to fix their own code - and I am not talking about subtle and hard-to-track things. But then, on the other hand, they break things every other version just because...
  5. Stefan Glienke

    Check for override

    "Backwards compatibility" is the ultimate excuse to pile up garbage in your backyard ... It is used or ignored whenever convenient - moving forward also includes getting a compile error in your face, but with a clear guide at hand for how to solve it. If you ever inherited from TDataSet and used one of its methods that take TBookmark or TRecordBuffer arguments while writing code for different Delphi versions since 2010 or so, you know what I mean. But some developers seem to prefer saving an hour when moving their code to a new version and then wasting hours or days later hunting down a bug. 😉
  6. Stefan Glienke

    Check for override

    If the method you avoid calling would do heavy work, maybe... but even the fastest way to check whether a method is overridden will be slower than just making a virtual call to a method that simply returns False. And even if you perform that check only once and store the result in a Boolean field, it's just not worth it imo.
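    A minimal sketch of that trade-off, using a hypothetical TBase class (the names are illustrative, not from the thread):

    ```pascal
    type
      TBase = class
      public
        // Descendants override this only when they actually have work to do.
        function WantsNotification: Boolean; virtual;
        procedure Process;
      end;

    function TBase.WantsNotification: Boolean;
    begin
      Result := False;
    end;

    procedure TBase.Process;
    begin
      // A plain virtual call to a method that just returns False is
      // typically cheaper than first inspecting the VMT to determine
      // whether the method was overridden.
      if WantsNotification then
      begin
        // heavy work, only reached by descendants that opted in
      end;
    end;
    ```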
  7. Stefan Glienke

    Check for override

    IsVirtualMethodOverride from Spring.VirtualClass.pas:

    procedure TAncestor.Do;
    begin
      Writeln(IsVirtualMethodOverride(TAncestor, ClassType, @TAncestor.MethodX));
    end;
  8. Stefan Glienke

    Rapid generics

    All this drivel that happens every time David and Rudy collide is rather annoying to me and went completely off topic long ago. But that's what the ignore-content feature is for. I would appreciate it if you as a mod would cut that out and move it to its own thread.
  9. Stefan Glienke

    Rapid generics

    But we have the slow RTL routines that can easily be improved - we don't have the new ctors and dtors, and actually I would not hold my breath for them to be implemented the way you imagine: they will be driven by additional typeinfo (look into the 10.3 System.pas, where you can find all the relevant code for that feature, because they just disabled it inside the compiler but did not revert the necessary RTL code). And still, if you have an array of those records, it would have to loop through the array and call the dtor for every single item regardless. An optimized version of FinalizeArray/Record can just shift pointers over the array and do the cleanup - even when using the record's managed field table - that is just a simple record structure, nothing fancy. Putting everything into nested loops and calls, regardless of whether the fields are even filled with something, is what makes the current version slow - that, as I mentioned before, is what makes Rapid.Generics faster on its Clear.
  10. Stefan Glienke

    Rapid generics

    Write a better _FinalizeArray routine for tkRecord then please as the current implementation is pretty terrible as it keeps calling _FinalizeRecord in a loop which again calls _FinalizeArray with ElemCount 1. That contributes to the slowness of Generics.Collections if you have records in a list. I did some patching today and added an ElemCount parameter to _FinalizeRecord to only call that once per _FinalizeArray - but did that in pure pascal.
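    As a hedged illustration of the pointer-sweep idea (not the actual RTL code, and a concrete record type instead of the generic typeinfo-driven routine), a finalizer can sweep each managed field across all elements in one tight loop instead of re-entering a generic finalize routine per element:

    ```pascal
    type
      TItem = record
        Name: string;   // managed field: needs finalization
        Id: Integer;    // unmanaged field: needs none
      end;

    // Sweep the managed field across all elements in one loop, avoiding
    // the per-element _FinalizeRecord -> _FinalizeArray round trip
    // described above.
    procedure FinalizeItemArray(var Items: array of TItem);
    var
      i: Integer;
    begin
      for i := 0 to High(Items) do
        Items[i].Name := '';  // releases the string reference
    end;
    ```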
  11. Stefan Glienke

    Rapid generics

    Storing records in collections is a delicate topic, as the code is usually a bit more complex than just storing integers or pointers - even more so if the record has managed types such as string. In this case, the simple fact of having a local variable of T inside your generic code and doing one more assignment than necessary (for example, passing some olditem to a notification) might cause a severe slow-down. If you put such code into a single method, it does the stack cleanup and finalization for that variable in all cases, even if there is no notification to be called. The code in Rapid.Generics uses some shortcuts and produces even more convoluted code than System.Generics.Collections has since its refactoring in XE7. For example, it does not use TArray<T> as the backing storage for its list but pure pointer math. It also does not zero the memory for this array, which buys some speed by avoiding the round trip through all the code in System.DynArraySetLength - especially for managed types. That buys a bit of performance when adding items - especially if you don't set the capacity beforehand. I know that the RTL collections had a ton of bugs caused by that refactoring, as certain type kinds suddenly were not handled properly - I don't see any unit tests for Rapid.Generics though, so I would not claim that they work for all kinds of types you might store in those lists. As for the specific case of Clear taking longer in the RTL collections: that is another optimization in Rapid.Generics, where it simply cleans up the memory and is done, whereas the RTL runs through some extra code which is not necessary if there is no OnChange attached to the list. Edit: I looked into the Rapid.Generics code for records, and it maintains its own mechanism to clean up any managed fields, with a small performance improvement if the field is actually empty.
This causes a major speedup if you are benchmarking with empty records, but I guess with real data it won't gain much. I tested with a small record with two string fields and an integer: when they were empty, the Clear call was very fast compared to the RTL list, but not so much anymore when the fields contained some strings that had to be cleaned up.
  12. Stefan Glienke

    Rapid generics

    Well, this still does not tell us anything about how the list is being used, what its element type is, or whether it has a change notification attached (because all of that influences the performance). I am asking because I have spent quite some time looking into the Rapid.Generics code and the 10.3 System.Generics.Collections improvements, and did some of my own for Spring4D. What makes testing with isolated (micro)benchmarks kind of difficult is that often enough hardware effects kick in and show you big differences that exist only in that benchmark but are completely irrelevant, or even contradictory, in real code.
  13. Stefan Glienke

    Rapid generics

    Show the benchmark code please - I have seen so many flawed benchmarks (including some of my own) in the past months that I don't believe posted results anymore.
  14. Stefan Glienke

    10.3.1 has been released

    The bad thing about the IDE theming is that it does not even use the default VCL theming but a different one hacked into the IDE code, because they did not find a better way to avoid the form designer being themed. That, and doing obviously bad custom drawing, such as the search bar in the title bar, which keeps jumping around when you move the window. I am also not a fan of the dark theme in RAD Studio - in Visual Studio and Visual Studio Code I use it all the time, simply because it's the default and it just works nicely: the default colors in the code editor fit well and the entire IDE stays very responsive. And even though there are also places there where some UI controls are not themed dark (like the project properties), it still works well together, without cut-off controls and useless scroll bars because some control is two pixels too wide.
  15. Stefan Glienke

    Rapid generics

    Google for "C++ template code bloat" and you will see that they suffer from the very same problem, including ridiculously long compile times and memory consumption. The suggested approach is very similar to what has been done in System.Generics.Collections. Delphi, however, adds some more problems to the mix, like having RTTI turned on by default, which is extremely problematic if you have extensive generic class hierarchies. For generic fluent APIs that return generic types with many different type parameters (because of the way the API is used), this can turn a few-hundred-line unit into a multi-megabyte dcu monster (the size itself is not the problem, but the compiler churning on it for a long time is). If you multiply that with other factors, compiling a 330K LOC application takes a minute or more while consuming close to 2 GB of RAM and producing 250 MB of dcus and a 70 MB exe. These are real numbers from our code, and an ongoing refactoring of both sides - the library code that contains generics (Spring4D) and the calling side - reduces this significantly.
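    The RTTI default mentioned above can be restricted per unit with the standard $RTTI compiler directive; a minimal sketch (the unit and type names are illustrative):

    ```pascal
    unit GenericHeavy;

    interface

    // Emit no extended RTTI for methods, properties and fields of the
    // types declared below - this can noticeably shrink the dcus
    // produced by generic-heavy code.
    {$RTTI EXPLICIT METHODS([]) PROPERTIES([]) FIELDS([])}

    type
      TMyList<T> = class
      private
        FItems: array of T;
      end;

    implementation

    end.
    ```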
  16. Stefan Glienke

    We use DUnitX and it discovers all our silly mistakes before release

    That is what I looked into some years ago, when I started refactoring out some code and cruft that was unnecessary for my purposes. You have to know that a coverage profiler is basically a debugger that puts a breakpoint on every single executable line (or at least those you specified you are interested in). Finding out which lines those are is one interesting part - the other is presenting the data nicely inside the IDE when executed under TestInsight (at least for me).
  17. Stefan Glienke

    AV on finalizing TThreadPool [PPL]

    And not your job but Embarcadero's :)
  18. Stefan Glienke

    AV on finalizing TThreadPool [PPL]

    Looks to me very much like a timing problem on shutdown. As soon as I debug or introduce some slight delay before the end, the AV disappears.
  19. Stefan Glienke

    IDE Launchpad

    Is it worth the time?
  20. Stefan Glienke

    operator overloading Equal vs Multiply

    This clearly seems to be a glitch in the compiler's operator overload resolution, where it does find a proper match for Equal/NotEqual (I did not test all possible operator combinations, only those two).
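    For context, a minimal record declaring the operators in question (illustrative code, not the original poster's):

    ```pascal
    type
      TVec = record
        X, Y: Double;
        class operator Equal(const A, B: TVec): Boolean;
        class operator NotEqual(const A, B: TVec): Boolean;
        class operator Multiply(const A, B: TVec): Double;
      end;

    class operator TVec.Equal(const A, B: TVec): Boolean;
    begin
      Result := (A.X = B.X) and (A.Y = B.Y);
    end;

    class operator TVec.NotEqual(const A, B: TVec): Boolean;
    begin
      Result := not (A = B);  // reuses the Equal operator
    end;

    class operator TVec.Multiply(const A, B: TVec): Double;
    begin
      Result := A.X * B.X + A.Y * B.Y;  // dot product
    end;
    ```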
  21. Stefan Glienke

    Rapid generics

    System.Generics.Collections has not caused that much code bloat since the refactorings in XE7 - it still causes more than it should, but that is a limitation of the compiler. I did some tests with Rapid.Generics, and while it is optimized for some scenarios, it was not a stellar improvement over System.Generics.Collections in 10.3. And while I was benchmarking those and the Spring4D collections, I saw that isolated benchmarks are often very much affected by certain CPU specifics - differing across CPUs depending on the (undocumented) behavior of their branch predictors - and of course in a microbenchmark the chances are high that all the code fits into at least the L2 cache.
  22. Stefan Glienke

    (Mis-)Behaviour of TStringHelper

    Sorry, I meant startIndex + Length(searchText) - 1, because the method simply works as follows: it starts searching at startIndex but does not consider any character in the string after that. That causes it not to find any searched text that extends beyond that index. So, as Christian already stated, it basically works as if you cut the string after startIndex. What the documentation is simply missing is the fact that none of those overloads will find an occurrence that extends past startIndex, that's all.
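    A small console sketch of the behavior described above (TStringHelper indices are zero-based; the commented results follow the explanation in this post):

    ```pascal
    program LastIndexOfDemo;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils;

    var
      s: string;
    begin
      s := 'abcabc';                    // 'bc' starts at indices 1 and 4
      Writeln(s.LastIndexOf('bc'));     // 4: searches the full string
      // With startIndex = 4 the search behaves as if the string were cut
      // after index 4 ('abcab'), so the occurrence starting at 4 - which
      // extends to index 5 - is not found:
      Writeln(s.LastIndexOf('bc', 4));  // 1
    end.
    ```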
  23. Stefan Glienke

    (Mis-)Behaviour of TStringHelper

    The documentation is not precise enough - but fwiw, the implementation in Delphi is the same as in .NET or Java. The startIndex actually also limits the length of the searched string, so any search text longer than 1 will only be found at startIndex - (Length(searchText) - 1) or to the left of that. And when Christian said "first occurrence", he of course meant "first occurrence from the right".
  24. Stefan Glienke

    Delphi pitfalls: Enumerated types and for loops

    When used in a set, the ordinal value of the enum is basically the index of the bit used within the set, and since a set can hold at most 256 elements, any ordinal value above 255 prevents an enum from being used in a set. Personally, I think you should avoid giving enums explicit ordinal values and only use them for interop with another system that requires them.
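    A short sketch of that limit (type names are illustrative):

    ```pascal
    type
      TSmall = (sA, sB, sC);            // ordinal values 0, 1, 2
      TSmallSet = set of TSmall;        // fine: bit indices 0..2

      TSparse = (spLow = 0, spHigh = 300);
      // TSparseSet = set of TSparse;   // does not compile: sets may have
                                        // at most 256 elements (bits 0..255)
    ```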
  25. Stefan Glienke

    We use DUnitX and it discovers all our silly mistakes before release

    If only we had some easy-to-use way to measure code coverage that was nicely integrated into the IDE ...