Stefan Glienke

Members
  • Content Count: 1388
  • Joined
  • Last visited
  • Days Won: 134

Stefan Glienke last won the day on July 25

Stefan Glienke had the most liked content!

Community Reputation

1939 Excellent

Technical Information

  • Delphi-Version
    Delphi 10.1 Berlin


  1. Stefan Glienke

    Devin AI - Is it already happening?

    The last thing I heard is that Devin AI was fake (that info is from April 2024, when the entire developer community was all over the demo they showcased). Do you know more?
  2. Stefan Glienke

    how to filter on source files using VTune?

    Changing anything in the map file will only remove the names; the functions being called will still show up in the sampling result because VTune is a sampling profiler. Given that your performance problem obviously was not in these functions, they should not make up a significant part of the result percentage-wise - and if they do, they are part of the issue.
  3. Can't be asked - I am using the sfw (safe for work) version 😇
  4. FWIW, the assembly implementation of that function is pointless given that you can do it in a far more readable way - and performance cannot be the reason either, because the following call to mem_write causes a temporary heap allocation to convert your hex_code variable into an AnsiString. Quickly slapped together (anyone who wants to further optimize this - be my guest, i cba right now):

     type
       hex_code = Array [1 .. 4] of AnsiChar;

     function Int2Hex(c: Word): hex_code;
     const
       HexChars: array[0..15] of AnsiChar = '0123456789ABCDEF';
     var
       i: NativeUInt;
     begin
       i := c;
       Result[4] := HexChars[i and $0F];
       i := i shr 4;
       Result[3] := HexChars[i and $0F];
       i := i shr 4;
       Result[2] := HexChars[i and $0F];
       i := i shr 4;
       Result[1] := HexChars[i and $0F];
     end;

     Then, instead of calling mem_write with an AnsiString as the first argument, use one that writes 4 bytes at once.
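     A minimal usage sketch for the above - the mem_write overload shown here is hypothetical (the real routine is not part of this excerpt); the point is only that the four characters are written directly, with no intermediate AnsiString:

     // hypothetical overload taking an untyped buffer instead of an AnsiString
     procedure mem_write(const Buffer; Count: Integer); overload;
     begin
       // write Count raw bytes from Buffer to the target stream/memory here
     end;

     // uses hex_code and Int2Hex from the post above
     procedure WriteHexWord(c: Word);
     var
       h: hex_code;
     begin
       h := Int2Hex(c);
       mem_write(h, SizeOf(h));  // 4 bytes, no heap allocation
     end;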
  5. Your assembly code of Int2Hex for 64-bit is wrong - c is passed in RCX.
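     A minimal sketch (a hypothetical routine, not the code from the thread) of the Win64 convention being referred to - the first integer-class parameter arrives in RCX and the result is returned in RAX:

     {$IFDEF CPUX64}
     function DoubleIt(c: Word): Word;
     asm
       // Win64 calling convention: the first parameter is passed in (R)CX,
       // not in (E)AX or on the stack as 32-bit habits might suggest
       movzx eax, cx      // fetch the parameter from CX
       add   eax, eax     // do something with it
       // the Word result is returned in AX
     end;
     {$ENDIF}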
  6. Nitpick at 10:30 - strings, like all managed types, are always initialized, so they are empty even when not explicitly initialized.
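     A minimal sketch of that rule (hypothetical routine): locals of managed types (strings, dynamic arrays, interfaces, ...) are initialized by the compiler, unmanaged locals are not:

     procedure InitDemo;
     var
       s: string;          // managed: always initialized to ''
       a: TArray<Byte>;    // managed: always initialized to nil
       i: Integer;         // unmanaged: undefined until assigned
     begin
       Assert(s = '');          // holds without any explicit initialization
       Assert(Length(a) = 0);   // same for the dynamic array
       i := 0;                  // an unmanaged local must be assigned before use
     end;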
  7. Stefan Glienke

    Addictive Software gone ?

    Please see the first entry that Google provides when searching for "Addictive Software". When looking for spell-checking alternatives, there is also this thread:
  8. Stefan Glienke

    Parallel.ForEach is really slow

    I can absolutely repro this - all my 20 logical cores (i5-13600k) go to 100% for 10 seconds. Running it through SamplingProfiler right now to check.

    Edit: Okay, either this has regressed at some point after the demo was originally built, or it was overlooked that there is actually no real workload inside the delegate and thus it just measures the huge overhead from the RTL and the interlocked operations. It spends a huge amount of time wrapping and unwrapping the integer from TOmniValue and sharing the counter across all threads, causing a complete bus lock every time due to the usage of DSiInterlockedExchangeAdd64 (*).

    (*) I wrote "bus lock" and this is not entirely true, so before anyone chimes in quoting the Intel manual about the fact that the lock prefix does not automatically cause a bus lock - you are correct. Here we have the case that we are sharing the one variable across all the cores we have, so it has to travel back and forth between the CPU caches and RAM.

    This code as-is would be a nice worst-case example for Primoz' book about what can potentially go wrong when doing parallel programming. However, keep in mind that we don't have any real workload, which would most likely change the picture, as the workload usually takes the majority of the processing time and not the parallel foreach code.

    P.S. On top of the aforementioned things, it looks like the OTL code (particularly TOmniValue) is suffering from some overhead caused by inlining. For example: because TOmniValue.AsInt64 as well as TOmniValue.TryCastToInt64 are inlined, the temporary Variant variable needed for the otvVariant case is spilled into the location where the inlining happens. But in our case we never have to deal with a Variant at all, hence all the extra code the compiler produces is just overhead. And because the getter for AsInt64 is used twice, the compiler repeats the inlined code twice and produces two Variant variables which it finalizes using System.FinalizeArray. Also, a lot of record initialization and finalization is happening, which I assume (did not look closer) is being caused by TOmniValue - potentially also because of some temporary compiler-generated variables.

    Here is the drilldown of the interesting pieces when running in SamplingProfiler:
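    A minimal sketch of the shared-counter effect described above - it uses the RTL's TParallel.For and TInterlocked.Add rather than OTL's Parallel.ForEach and DSiInterlockedExchangeAdd64, but the pattern (no real workload, a single counter touched by every core) is the same:

    program SharedCounterSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, System.Diagnostics, System.SyncObjs, System.Threading;

    var
      Total: Int64;
      Watch: TStopwatch;

    begin
      Total := 0;
      Watch := TStopwatch.StartNew;
      TParallel.For(1, 10000000,
        procedure(i: Integer)
        begin
          // the loop body is nothing but an interlocked add on one shared
          // variable, so the cache line holding Total bounces between all
          // cores on every single iteration
          TInterlocked.Add(Total, Int64(i));
        end);
      Writeln(Format('%d ms, sum = %d', [Watch.ElapsedMilliseconds, Total]));
    end.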
  9. Stefan Glienke

    rease ... at ReturnAddress

    No, you can even see the stackframe settings in System.pas because it explicitly specifies them at the very beginning. Furthermore, even without explicitly enabling them, the RTL is being compiled with $W-. It is the compiler implementation that enables a stackframe for any function that uses ReturnAddress - you can double-check that yourself by compiling the following code and looking at the disassembly for Foo:

    {$STACKFRAMES OFF}

    procedure Bar(p: Pointer);
    begin
    end;

    procedure Foo;
    begin
      Bar(ReturnAddress);
    end;

    begin
      Foo;
    end.

    I know this because I was using ReturnAddress in my code and it behaved wrongly in XE, where it was implemented explicitly as I linked in my post above - without a stackframe it returned a wrong address. This is why I explicitly enable stackframes for the code that uses this function in XE - see https://bitbucket.org/sglienke/spring4d/src/2dbce92195d699d51fc99dd226c4698748ec8ef9/Source/Base/Spring.pas#lines-3474
  10. Stefan Glienke

    rease ... at ReturnAddress

    XE2 - see https://bitbucket.org/sglienke/spring4d/src/2dbce92195d699d51fc99dd226c4698748ec8ef9/Source/Base/Spring.pas#lines-3140

    Anyhow, the ReturnAddress function would not help here because it would point at the location in System._AbstractError, which is the function that calls AbstractErrorProc. This improved handler is a highly brittle hack that might work depending on your compiler settings: it does not force specific compiler settings for the u_dzAbstractHandler.pas unit (which also changes where the return address two calls up is to be found), and it does not even work for x64. I also don't share Thomas' assessment that this change came in Tokyo; I rather suspect that his stackframe settings were different between his tries on different Delphi versions. The code that is responsible for the abstract error did not change between Delphi XE (the oldest version I can check right now) and 12. The best way would probably be to use Caller(2) from JclDebug, which does proper stack walking to determine the return address.
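    A minimal sketch of the Caller(2) suggestion (assuming the JCL is available and JclDebug is on the unit search path; the routine names are made up for illustration):

    program CallerSketch;

    {$APPTYPE CONSOLE}

    uses
      System.SysUtils, JclDebug;

    procedure Innermost;
    begin
      // Caller(0) resolves inside Innermost, Caller(1) to its caller (Middle),
      // Caller(2) to two frames up - the call site in the main block
      Writeln(Format('two frames up: %p', [Caller(2)]));
    end;

    procedure Middle;
    begin
      Innermost;
    end;

    begin
      Middle;
    end.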
  11. Stefan Glienke

    Refactoring in Delphi

    At least not if they promote that developer to PM
  12. Stefan Glienke

    Delphi on Surface Pro with Qualcomm CPU?

    You might have misunderstood my sentence because obviously I was referring to any Delphi compiler that might target ARM - have you looked into the JIRA reports I linked?
  13. Stefan Glienke

    What are the performance profilers for Delphi 12?

    I'll take that opportunity and talk about it for an entire session at the Delphi Day and the Delphi Summit (the schedule still lists me talking about spring4d but for once I pass on that topic) 😉
  14. Stefan Glienke

    Delphi on Surface Pro with Qualcomm CPU?

    Given the current issues regarding optimization that all LLVM-based Delphi compilers (that is, all but the two Windows ones) have, I am tempted to say that an x86 or x64 binary running through the emulation layer might be faster than what a compiler that directly targets ARM would produce today. There are multiple reports about this, and it boils down to "need to migrate to a newer LLVM version", which we have been told for years now - since the C++ Builder side was recently migrated to a recent LLVM version, I hope that the Delphi side now gets addressed.
    https://quality.embarcadero.com/browse/RSP-9922
    https://quality.embarcadero.com/browse/RSP-17724
    https://quality.embarcadero.com/browse/RSP-25754
    https://quality.embarcadero.com/browse/RSP-28006
  15. There are multiple considerations. I don't know what compiler version, target, and options he was using to conclude that it will use a lea rather than a shift - at least with -O3 it will use a shift for multiplications by 4 and 8, although lea would also be applicable; for multiplication by 2 most likely an add is emitted because that is simply the smaller instruction. Another consideration is whether the value is needed further on - for example, x * 7 is implemented as x * 8 - x, and here it cannot use a shift for the * 8 because it needs the original value of x to subtract, therefore it uses lea to store the result in another register and subtracts the original register from it. Regarding the LEA instruction - I just remembered that I also reported that it should utilize this instruction when doing pointer math - see https://quality.embarcadero.com/browse/RSP-34820
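      A minimal sketch of the x * 7 rewrite described above (hypothetical routine; the exact instructions depend on compiler and settings):

      // x * 7 expressed as x * 8 - x; the * 8 cannot be a plain in-place shift
      // because the original x is still needed for the subtraction, so a compiler
      // will typically emit something like
      //   lea rax, [rcx*8]   ; build x * 8 in a second register
      //   sub rax, rcx       ; subtract the untouched original x
      function MulBy7(x: NativeInt): NativeInt; inline;
      begin
        Result := (x shl 3) - x;
      end;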