mitch.terpak

Delphi 12.0 TParallel.For performance. Threading.pas issues


We were pleased to hear that version 12.0 addressed the deadlock issue. However, we've encountered three major issues since upgrading:

 

 

1. Unfortunately, we've observed a 40% performance decrease with the updated Threading.pas in version 12.0, which is unacceptable for our needs. The root of the problem appears to be the thread pool growing beyond the maximum worker count we set on our custom TThreadPool. Every extra worker thread forces us to create new copies of our per-thread data inside TParallel.For (we manage the necessary copies dynamically using a Dictionary with TThreadID as key). The issue in Threading.pas seems to stem from this property being explicitly set to True in TThreadPool.Create:
 

constructor TThreadPool.Create;
begin
  inherited Create;
  // Initialization code
  FUnlimitedWorkerThreadsWhenBlocked := TRUE;
  // More initialization code
end;

despite already having a default value or having been set to False:

    property UnlimitedWorkerThreadsWhenBlocked: Boolean read FUnlimitedWorkerThreadsWhenBlocked
      write FUnlimitedWorkerThreadsWhenBlocked default True;

This leads to the TThreadPool.TThreadPoolMonitor.GrowThreadPoolIfStarved procedure unnecessarily increasing the number of worker threads. When I added debugging, I noticed it enters the else if FThreadPool.UnlimitedWorkerThreadsWhenBlocked block multiple times per TParallel.For loop.

 

2. In previous versions of Delphi, we encountered access violations when closing the debugger if we used TParallel.For without modifications (with the default thread pool). Switching to custom thread pools mitigated this issue, but with version 12.0, the problem seems to persist regardless. The access violation in 12.0 (using a custom thread pool) often occurs here:

procedure TThreadPool.TThreadPoolMonitor.Execute;
begin
  // Procedure code
  Signaled := FThreadPool.FMonitorThreadWakeEvent.WaitFor(TThreadPool.MonitorThreadDelay) = TWaitResult.wrSignaled;
  // More procedure code
end;

I've also seen it happen on different lines within the same procedure. We can only reproduce this in our VCL application. Our DUnit test project does not have the same issue.

 

3. With the debugger attached, any operation involving TParallel.For runs excruciatingly slowly: roughly 8-10 times slower, versus about twice as slow in version 11.3.


For full disclosure and additional information: we're using https://github.com/pleriche/FastMM5, but none of the reported issues change with or without it. We only use it because the built-in FastMM4 doesn't handle our workload well and makes the base runtime a lot slower.
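For reference, FastMM5 is wired in the usual way for a replacement memory manager: it has to be the very first unit in the project file so it can install itself before any allocations happen (a sketch; project, unit and form names are placeholders):

program OurVclApp;

uses
  FastMM5,   // must be listed first so it installs itself as the memory manager
  Vcl.Forms,
  MainFormUnit in 'MainFormUnit.pas' {MainForm};

{$R *.res}

begin
  Application.Initialize;
  Application.CreateForm(TMainForm, MainForm);
  Application.Run;
end.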

 

We're reaching out for insights or solutions to these issues, as the current performance and stability impacts are significant for our projects.


I haven't analyzed the problem at all, and I have no solution for you, but a few things struck me when I read the description of your problems.

 

21 hours ago, mitch.terpak said:

The issue in threading.pas seems to stem from this property being explicitly set to true in TThreadPool.Create:

[...]

despite already having a default value or having been set to False:


property UnlimitedWorkerThreadsWhenBlocked: Boolean read FUnlimitedWorkerThreadsWhenBlocked
      write FUnlimitedWorkerThreadsWhenBlocked default True;

 

  1. The default property directive only has meaning for published properties and is only used by the VCL streaming mechanism to determine whether a property needs to be written to the DFM. It has no practical purpose here. I can only think they included it so it's easier to see what the default value is when looking at the source. (A short sketch follows this list.)
  2. It doesn't already have a default value. The default value is the one being set in the constructor, hence: True.
  3. It isn't set to False. If you are setting to False, then it must be after the constructor has executed.
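A minimal sketch (hypothetical type, not from System.Threading) of what the directive does and does not do:

unit DefaultDirectiveSketch;

interface

uses
  System.Classes;

type
  TExample = class(TPersistent)
  private
    FFlag: Boolean;
  published
    // "default True" only tells the streaming system not to write Flag to the
    // DFM when its value is True. It does NOT initialize FFlag: a freshly
    // constructed TExample still has Flag = False until a constructor sets it.
    property Flag: Boolean read FFlag write FFlag default True;
  end;

implementation

end.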

 

21 hours ago, mitch.terpak said:

This leads to the TThreadPool.TThreadPoolMonitor.GrowThreadPoolIfStarved procedure unnecessarily increasing the number of worker threads.

Why do you believe that the increase in worker threads is unnecessary?

 

I don't know, but I would guess that growing the pool beyond the limit is done to avoid the deadlock that could occur in some scenarios if the limit were a hard limit (e.g. all threads blocked waiting for something that can only be produced by another thread, which can't be created because the limit has been reached).
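To make that concrete, here is a minimal sketch (not code from this thread, and the pool sizing is only illustrative) of the kind of scenario being described: nested parallel loops sharing one capped pool, where every worker can end up blocked waiting for inner work that has no free worker to run on.

program StarvationSketch;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Threading;

var
  Pool: TThreadPool;
  WorkerLimit: Integer;
begin
  Pool := TThreadPool.Create;
  try
    // Cap the pool at roughly the logical core count, as in the posts above.
    WorkerLimit := TThreadPool.Default.MaxWorkerThreads div 2;
    Pool.SetMinWorkerThreads(WorkerLimit);
    Pool.SetMaxWorkerThreads(WorkerLimit);

    // With a hard limit (UnlimitedWorkerThreadsWhenBlocked = False, or on older
    // versions), this shape can starve: every outer iteration holds a worker
    // while it waits for an inner loop that needs workers from the same full pool.
    TParallel.For(1, WorkerLimit * 4,
      procedure(Outer: Integer)
      begin
        TParallel.For(1, 100,
          procedure(Inner: Integer)
          begin
            Sleep(1); // stand-in for real work
          end, Pool);
      end, Pool);

    Writeln('done');
  finally
    Pool.Free;
  end;
end.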

 

21 hours ago, mitch.terpak said:

The access violation in 12.0 (using a custom thread pool) often occurs here:

Exactly where? If it's an access violation then there must be an invalid pointer. I can see 3 different candidates in that line of code: Self, FThreadPool and FThreadPool.FMonitorThreadWakeEvent.

You should be able to determine which one it is with the debugger.

1 hour ago, Anders Melander said:

I haven't analyzed the problem at all, and I have no solution for you, but a few things struck me when I read the description of your problems.

 

1 hour ago, Anders Melander said:
  • The default property directive only has meaning for published properties and is only used by the VCL streaming mechanism to determine if a property needs to be written to the DFM or not. It has no practical purpose here. I can only think they included it so it's easier to see what the default value is when looking at the source.
  • It doesn't already have a default value. The default value is the one being set in the constructor, hence: True.
  • It isn't set to False. If you are setting to False, then it must be after the constructor has executed.

Thank you for your insights regarding property defaults and constructor behavior. While helpful, the core issue remains: the system library expands the thread pool beyond the specified limit for no clear reason. The deadlock-prevention mechanism is over-sensitive, firing even though no deadlock can occur in our TParallel.For usage, which suggests a misidentification problem. For us this is a blocker for upgrading to Delphi 12.

 

1 hour ago, Anders Melander said:

Why do you believe that the increase in worker threads is unnecessary?

 

I don't know, but I would guess that growing the pool beyond the limit is done to avoid the deadlock that could occur in some scenarios if the limit were a hard limit (e.g. all threads blocked waiting for something that can only be produced by another thread, which can't be created because the limit has been reached).

Regarding the access violation: despite all the referenced components appearing valid, the exact cause remains unclear. As this involves a system library, extensive debugging is beyond our scope due to professional constraints, and editing system libraries is something we'd like to avoid.

 

1 hour ago, Anders Melander said:

Exactly where? If it's an access violation then there must be an invalid pointer. I can see 3 different candidates in that line of code: Self, FThreadPool and FThreadPool.FMonitorThreadWakeEvent.

You should be able to determine which one it is with the debugger.

The goal of my post is to highlight these blockers to upgrading to Delphi 12, in the hope of a resolution. Detailed debugging is beyond our capacity; what I'd like are actionable solutions from the library's developers, or for this to become a priority for the next patch. I appreciate any further assistance or insights on these critical issues.


Okay then. I doubt you will have any luck with getting this resolved unless you can provide additional details - which will require an effort on your part.

 

Also, Embarcadero doesn't run this forum and probably doesn't follow what goes on here. If you want them to take notice of the problem you will have to report it to them (which isn't possible at the moment).

13 minutes ago, Anders Melander said:

you will have to report it to them (which isn't possible at the moment).

Yeah...

This seemed like a good idea to at least raise awareness.

 

14 minutes ago, Anders Melander said:

Okay then. I doubt you will have any luck with getting this resolved unless you can provide additional details - which will require an effort on your part.

The details are clear. The deadlock detection is way too sensitive and, quite frankly, just not implemented properly. The deadlock fallback is being triggered within milliseconds of a TParallel.For starting; that's a very clear issue. If specific details are missing, I can of course do my best to deliver them.

24 minutes ago, mitch.terpak said:

Thank you for your insights regarding property defaults and constructor behavior. While helpful, the core issue remains with the system library's thread pool expansion beyond the specified limit for no clear reason.

The reason for expanding the number of threads is to avoid deadlocks with nested TParallel.For loops.

 

If you want the old behavior, you should set UnlimitedWorkerThreadsWhenBlocked to False on each ThreadPool you are using, including the default one.
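For example (a sketch; MyPool stands for whatever custom pool you create, and this assumes the property is writable on the default pool the same way it is on a custom one):

  // Early during startup, before any parallel work is queued:
  TThreadPool.Default.UnlimitedWorkerThreadsWhenBlocked := False;
  // ...and likewise on every custom pool:
  MyPool.UnlimitedWorkerThreadsWhenBlocked := False;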

 

For any other issues you mention, there is nothing we can do if you don't provide details about your code and the implementation of your parallel loop mechanisms. While it is always possible that there are other bugs in System.Threading, it is also quite possible that your code has bugs too. Just because you get an access violation in Delphi RTL code does not mean that the cause is not in your code.

 

For debugging multithreaded applications, especially ones that spawn many threads, it is more suitable to use a logger and inspect code behavior through logs instead of using the debugger. Debuggers are usually too intrusive for multithreaded debugging that involves more than a few background threads, as they severely impact timing.
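As a minimal, Windows-flavoured sketch of that approach (names are illustrative; on Windows the output can be watched in the IDE's Event Log or in DebugView):

uses
  Winapi.Windows, System.SysUtils, System.Classes;

procedure ThreadTrace(const Msg: string);
begin
  // Tag each entry with the emitting thread so interleaved output stays readable.
  OutputDebugString(PChar(Format('[tid %u] %s', [TThread.CurrentThread.ThreadID, Msg])));
end;

Inside a TParallel.For body you would then call ThreadTrace instead of hitting a breakpoint.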

26 minutes ago, Dalija Prasnikar said:

The reason for expanding the number of threads is to avoid deadlocks with nested TParallel.For loops.

 

If you want the old behavior, you should set UnlimitedWorkerThreadsWhenBlocked to False on each ThreadPool you are using, including the default one.

There are no nested TParallel.For loops, though.
 

26 minutes ago, Dalija Prasnikar said:

For any other issues you mention, there is nothing we can do if you don't provide details about your code and the implementation of your parallel loop mechanisms. While it is always possible that there are other bugs in System.Threading, it is also quite possible that your code has bugs too. Just because you get an access violation in Delphi RTL code does not mean that the cause is not in your code.

Agreed, definitely. My point is that these things started occurring going from 11.3 to 12.0, and my personal assessment is that the deadlock check is triggered too easily.

 

Here's the slimmed-down version of what we're doing. Note that a large performance penalty occurs when the max worker threads get pushed to 40, for example: we then automatically make 22 more copies of everything we need, because those new ThreadIDs aren't keys in the dictionary containing the reusable LoadflowObject instances.

function LoadflowMainParallel: boolean;
var
   // Core threading and control variables
   CoreCount: integer;
begin
   Result := FALSE;

   // Determine optimal number of threads based on system capabilities and specific workload
   CoreCount := TThreadPool.Default.MaxWorkerThreads div 2; // MaxWorkerThreads is always threads x 2, this is an easy way to get threads on Windows/Linux
   
   // Adjust CoreCount for efficiency based on ID4 calculation results
   // Example: Use fewer cores for smaller workloads to avoid overhead

   // Setup thread pool with optimized number of worker threads
   TThreadUtils.Instance.GetLoadflowThreadpool.SetMaxWorkerThreads(CoreCount); // For example 18 out of 20 available
   TThreadUtils.Instance.GetLoadflowThreadpool.SetMinWorkerThreads(CoreCount); // For example 18 out of 20 available

   // Parallel execution of loadflow calculations
   TParallel.For(iLF_begin, iLF_eind,
      procedure(iLF: integer)
      var
         // Threading-specific loadflow objects
         LoadflowObject: TLoadflowObject;
      begin
         // Check and/or initialize LoadflowObject for the current thread
         // Add them to a Dictionary with ThreadID as key
         // Perform network-related calculations and adjustments
         
         // Core loadflow calculation per iteration
         // There's no more threading inside the calculation. The only noteworthy thing that happens is calling a C++ DLL to speed up specific functions, to avoid the slowdown Delphi experiences when compiled for Linux (but that's another issue)
         // Conditional adjustments and post-calculation cleanup
      end, TThreadUtils.Instance.GetLoadflowThreadpool);

   // Cleanup and finalization of resources used in parallel calculations
   // Set result to indicate success or failure of the operation
   Result := TRUE;
end;

For clarification: Instance is just a singleton.

GetLoadflowThreadpool returns a custom TThreadPool, creating it with a double-checked locking pattern if it doesn't exist yet:

 

function TThreadUtils.GetLoadflowThreadPool: TThreadPool;
begin
  if not Assigned(LoadflowThreadPool) then
  begin
    aThreadLock.Enter;
    try
      if not Assigned(LoadflowThreadPool) then
      begin
        LoadflowThreadPool := TThreadPool.Create;
        LoadflowThreadPool.SetMinWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
        LoadflowThreadPool.SetMaxWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
      end;
    finally
      aThreadLock.Leave;
    end;
  end;
  Result := LoadflowThreadPool;
end;

But perhaps this is what triggers the nested behavior:

aTask := TTask.Run( // Make this async so we can start the calculation while using ShowModal
      procedure
      begin
         LoadflowMainParallel();
         // Close progress form (so we don't hang on ShowModal)
      end, TThreadUtils.Instance.GetSmallThreadpool);

   TCalculationProgressForm.GetInstance.ShowModal;

   aTask.Wait();

   TThreadUtils.Instance.RepeatedTask.Cancel; // Async repeated task that updates the progress bar every 250ms until cancelled

 


I cannot comment on whether the deadlock trigger is too sensitive, as I haven't explored that part of the code and its behavior much, nor on whether expanding the pool is the most appropriate solution for the deadlock.

 

However, you are using a separate pool for your parallel for loop, so there is no danger of it deadlocking. You just need to set UnlimitedWorkerThreadsWhenBlocked to False and that pool will be restricted to the number of threads you have specified.

 

function TThreadUtils.GetLoadflowThreadPool: TThreadPool;
begin
  if not Assigned(LoadflowThreadPool) then
  begin
    aThreadLock.Enter;
    try
      if not Assigned(LoadflowThreadPool) then
      begin
        LoadflowThreadPool := TThreadPool.Create;
        LoadflowThreadPool.SetMinWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
        LoadflowThreadPool.SetMaxWorkerThreads(TThreadPool.Default.MaxWorkerThreads div 2);
        LoadflowThreadPool.UnlimitedWorkerThreadsWhenBlocked := False;
      end;
    finally
      aThreadLock.Leave;
    end;
  end;
  Result := LoadflowThreadPool;
end;

Of course, this might not be enough if you have other pools running intensive tasks that can create an excessive number of threads, which can slow everything down.

 

38 minutes ago, Dalija Prasnikar said:

You just need to set UnlimitedWorkerThreadsWhenBlocked to False and that pool will be restricted to the number of threads you have specified.

I know, but that doesn't solve the issues with debugger performance and the access violations, so our current solution is to not upgrade to 12.0. But yes, you're right: in the initialization of the pools I should just set it for 12.0, and I'd have to be mindful that no nested TParallel.For loops occur. (Sidenote: nesting TParallel.For is rarely that useful anyway; work that can't be parallelized at a higher level is rare, and honestly I can't think of many realistic use cases where nesting would be the most performant solution.)

 

Also, part of my remark is simply that we identified this issue, and it probably means the system thinks it's "blocked" when it isn't, which might also impact other Delphi users. While there's a clear potential workaround for us, I do think the way this deadlock has been "fixed" may not be tuned properly.

41 minutes ago, Dalija Prasnikar said:

Of course, this might not be enough if you have other pools that run intensive tasks that can create excessive number of threads which can slow down everything.

It's just this one pool, plus an async thread that updates the progress bar, but that's insignificant. The main thread is sitting on a ShowModal (the progress form). We're mainly using TParallel.For for its work-stealing capabilities, and on PCs with more than 16 threads we intentionally leave two threads unused, because that helps the system feel more responsive while the calculation runs.
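Roughly like this, as a sketch (the exact threshold is paraphrased from the description above rather than copied from our code):

   CoreCount := TThreadPool.Default.MaxWorkerThreads div 2; // logical core count
   if CoreCount > 16 then
      Dec(CoreCount, 2); // leave a couple of threads free so the machine stays responsive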


Is it correct that each of your threads adds its own ThreadID to a main-thread Dictionary? Could there be any race issues there, or is access to it protected by some type of lock?


The performance issue may be resolved by replacing:

      // around line # 4383
      CurMonitorStatus := FThreadPool.FMonitorThreadStatus;
      if Signaled then
        Continue;

with:

      CurMonitorStatus := FThreadPool.FMonitorThreadStatus;
      if Signaled then
      begin
        FThreadPool.FMonitorThreadWakeEvent.ResetEvent;
        Continue;
      end;

 

12 hours ago, pmcgee said:

Is it correct that each of your threads adds its own ThreadID to a main-thread Dictionary?  Could there be any race issues there, or is access to it via some type of lock?

Great question. The snippet is indeed missing quite a bit of the actual implementation. Basically, we presize the Dictionary to ensure that under no circumstance is it ever prompted to resize. We also use a custom hash to make collisions between ThreadIDs extremely unlikely. We used to take a CriticalSection before adding the LoadflowObject to the Dictionary, but after thorough testing (an infinite loop left running for days on an unused laptop, continuously calling TParallel.For with this Dictionary pattern) I have very high confidence that adding the item is not prone to race conditions in practice. Technically I could see a hash being produced from a ThreadID that is so similar it would be inserted at the same index, but the Dictionary has collision handling for that, so it would really have to be a simultaneous insert. Removing the CriticalSection gained us about 3-4% on a standard calculation, so together with the testing we deemed it worth it. Furthermore, in practice we see the threads being spun up with a slight delay, which makes such a collision even more unlikely. We add the pointer to the Dictionary right away, so the natural stagger of a TParallel.For also contributes to the consistency of this implementation.

And obviously the advantage is a dynamic, high-performance implementation that scales well across however many threads the system has. For example, in the past we chunked the calculations based on the thread count, but on devices with 20+ threads the first few threads would already be "done" with their part of the problem before the last ones had even started, adding a few hundred milliseconds to the calculation.
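For illustration, here is a hedged sketch of that dictionary setup (a reconstruction, not our exact code; the hash shown is only a placeholder for the custom hash mentioned above):

uses
  System.Classes, System.Generics.Defaults, System.Generics.Collections;

type
  TLoadflowObject = class(TObject); // stub standing in for the real per-thread state class

var
  LoadflowObjectDict: TDictionary<TThreadID, TLoadflowObject>;

procedure CreateLoadflowDict;
begin
  LoadflowObjectDict := TDictionary<TThreadID, TLoadflowObject>.Create(
    1024, // presized so it never has to grow/rehash while the parallel loop is running
    TEqualityComparer<TThreadID>.Construct(
      function(const Left, Right: TThreadID): Boolean
      begin
        Result := Left = Right;
      end,
      function(const Value: TThreadID): Integer
      begin
        Result := Integer(Value xor (Value shr 16)); // placeholder mixing step
      end));
end;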

 

Let me know if you have more questions about this.

 

10 hours ago, Dmitry Arefiev said:

The performance issue may be resolved by replacing:


      // around line # 4383
      CurMonitorStatus := FThreadPool.FMonitorThreadStatus;
      if Signaled then
        Continue;

with:


      CurMonitorStatus := FThreadPool.FMonitorThreadStatus;
      if Signaled then
      begin
        FThreadPool.FMonitorThreadWakeEvent.ResetEvent;
        Continue;
      end;

 

I tested this and it indeed works well for the smaller problems I have. It still seems to trigger:

        else if FThreadPool.UnlimitedWorkerThreadsWhenBlocked then
        begin
          Writeln('entered');
          for i := 1 to Min(FThreadPool.FQueuedRequestCount, FThreadPool.FMaxLimitWorkerThreadCount div 2 + 1) do
            FThreadPool.CreateWorkerThread;
        end;

There seems to be no reason for that, and it does end up being a problem for me on the larger problem sizes (since it will still add new worker threads with different ThreadIDs, for which copies will have to be made). But I can confirm that the change cuts the issue down to only a 15% performance loss in our benchmark, as opposed to 40% with UnlimitedWorkerThreadsWhenBlocked set to True.

 

I also got the ACCESS_VIOLATION again while attempting to test the performance with the debugger. This time Self was inaccessible and it occurred on the line below. I also had it happen for the first time ever in our DUnitX (console) project:
 

      FThreadPool.FCurrentCPUUsage := TThread.GetCPUUsage(CPUInfo);



Name               Value
Self               Inaccessible value
I                  10
CPUInfo            (594602031250, 37442656250, 616291093750, 0)
CpuUsageArray      (5, 4, 5, 6, 5, 4, 4, 5, 5, 5)
CurUsageSlot       3
ExitCountdown      37
AvgCPU             48
CurMonitorStatus   [Created]
Signaled           False
 

11 hours ago, Dmitry Arefiev said:

The performance issue may be resolved by replacing: 

If so, then it would probably be better if they implemented a proper rate-limiting mechanism.

 

Looking at the Threading unit, it's rare to see professional code with so few comments. Is there some sort of rule within Embarcadero against commenting the code?


FYI, there is a reported issue on QP regarding the mentioned bug. It's a shame it isn't mentioned anywhere; I accidentally came across it while scrolling through the issues.

2 minutes ago, havrlisan said:

FYI, there is a reported issue on QP regarding the mentioned bug. It's a shame it isn't mentioned anywhere; I accidentally came across it while scrolling through the issues.

Note that the person replying there, Dmitry Arefiev, is the same person who posted the solution here.

24 minutes ago, Anders Melander said:

Looking at the Threading unit, it's rare to see professional code with so few comments. Is there some sort of rule within Embarcadero against commenting the code?

Considering the 1337 attitude of some developers ("well written code is self-explanatory"), they might just have the best Delphi coders in the world... However, seeing as the RTL and VCL are very often a source of inspiration and learning, I would also appreciate more well-written comments. Perhaps not line for line, but at least stating the intent of the following code block, or the reasoning why one approach was chosen over another.

50 minutes ago, mitch.terpak said:

Let me know if you have more questions about this.

No questions.

 

Just a comment: this is not thread-safe code. No amount of testing can prove that code is thread safe; you can only be lucky enough to catch an issue and prove that it is not safe. Just because it seems fine in your testing does not mean it will never blow up.

5 minutes ago, Sherlock said:

("well written code is self explanatory")

The only self-explanatory code is trivial code. In other situations there is no such thing as self-explanatory code, especially if it also needs to be performant and thread-safe.

 

I have written tons of "self-explanatory" code for which I now have no idea what I was smoking at the time. Sometimes, after a lot of digging, I can remember why it was written like that and why it should stay that way, but mostly not. So I am not sure whether it was just crappy code from the times I didn't know better, or whether there are valid reasons behind the awkward design choices.

1 minute ago, Dalija Prasnikar said:

No questions.

 

Just a comment: this is not thread-safe code. No amount of testing can prove that code is thread safe; you can only be lucky enough to catch an issue and prove that it is not safe. Just because it seems fine in your testing does not mean it will never blow up.

Not to justify it: you're right, it's technically not thread-safe. But the odds of it going wrong are very low, and when it does go wrong it either causes an exception or roughly a 300 MB memory leak. It's a conscious trade-off for a tiny bit more performance. A CriticalSection around adding the objects to the Dictionary would solve it.

17 minutes ago, mitch.terpak said:

But the odds for it going wrong are very low

[image: WTF.jpg]

1 hour ago, Kas Ob. said:

[image: WTF.jpg]

 

1 hour ago, David Heffernan said:

hahahaha

 

1 hour ago, Dalija Prasnikar said:

Just a comment: this is not thread-safe code. No amount of testing can prove that code is thread safe; you can only be lucky enough to catch an issue and prove that it is not safe. Just because it seems fine in your testing does not mean it will never blow up.

Alright, I tested it in a console app and it has a failure rate of about 1 in 35,000 with 20 cores; that's too much. I wrapped adding new ThreadIDs/ThreadObjects in a TCriticalSection. Thank you for making me reconsider my stubbornness.

3 minutes ago, mitch.terpak said:

I tested it in a console app and it has a failure rate of about 1 in 35,000 with 20 cores; that's too much.

Assuming those cores were doing close to nothing except your stress test, right?

 

If you open Firefox while running such a stress test (looking for weak spots and missed situations to break), then open a few tabs with YouTube playing videos, you'll see that probability go up a hundredfold, to roughly 1/350. The core count won't help you here, because other applications running on the machine can easily make the OS thread scheduler switch in a different and biased way. The same applies if your application runs on a server where, say, two Hyper-V guests are booted on the same device. It isn't worth the gamble.

The point is: one gamble like this and, in the best case, your application just needs a restart, while in the worst case it costs money, like lost hours of work or simply corrupted data.


Actual use case:

            if not LoadflowObjectDict.TryGetValue(TThread.CurrentThread.ThreadID, LoadflowObject) then
            begin
               LoadflowObject := TLoadflowObject.Create();
               // Assignment of new object
               // Copy of a large array
               // SetLength in the range of 10,000-50,000
               // Move of a large array
               // Copy of a large array
               // Deep copy of a 100-300 MB object (object size in memory)
               // Deep copy of a 50-100 MB nested object (object size in memory)

               CriticalSection.Acquire;
               LoadflowObjectDict.TryAdd(TThread.CurrentThread.ThreadID, LoadflowObject);
               CriticalSection.Release;
            end;
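One small hardening note (not in the snippet above): a try..finally around the Release guarantees the lock is freed even if something between Acquire and Release raises.

               CriticalSection.Acquire;
               try
                  LoadflowObjectDict.TryAdd(TThread.CurrentThread.ThreadID, LoadflowObject);
               finally
                  CriticalSection.Release;
               end;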


My test looked like this; note that it has nothing in front of the TryAdd that would introduce a lot of variance in the timings, so the test is an absolute worst case. Adding the CriticalSection is quite insignificant, just noticeable on small problems.

program ParallelLoopConsoleApp;

{$APPTYPE CONSOLE}

uses
  System.SysUtils, System.Classes, System.Generics.Defaults, System.Threading, System.Generics.Collections;

type
  // Assumed minimal definition; the real TThreadData in our code holds more state
  TThreadData = record
    CalculationResult: Integer;
  end;

var
  LoopCount: Integer = 0;
  ThreadDataDict: TDictionary<Cardinal, TThreadData>;

procedure PerformCalculation;
var
   CoreCount: Integer;
   LoadflowThreadpool: TThreadPool;
begin
   CoreCount := TThreadPool.Default.MaxWorkerThreads div 2;
   ThreadDataDict := TDictionary<Cardinal, TThreadData>.Create(1000);

   LoadflowThreadpool := TThreadPool.Create;
   LoadflowThreadpool.SetMaxWorkerThreads(CoreCount);
   LoadflowThreadpool.SetMinWorkerThreads(CoreCount);

   while True do
   begin
      Inc(LoopCount);
      Writeln('Loop Count: ',LoopCount);

      TParallel.For(1,100,
         procedure(Index: Integer)
         var
            ThreadData: TThreadData;
         begin
            // Store or update thread-specific data
            // This is not what actually happens in production; other work runs before this,
            // which makes it very unlikely for the threads to hit it simultaneously. But for argument's sake:
            if not ThreadDataDict.TryGetValue(TThread.CurrentThread.ThreadID, ThreadData) then
               ThreadDataDict.TryAdd(TThread.CurrentThread.ThreadID, ThreadData); // Adding a critical section here would make it safe

            // Perform some calculation here
            ThreadData.CalculationResult := Index * Index div 17 + 231 * 2 - 4;
         end, LoadflowThreadpool);

      // Cleanup and prepare for the next iteration
      ThreadDataDict.Clear;
   end;
end;

begin
  try
    PerformCalculation;
  except
    on E: Exception do
      Writeln('An error occurred: ', E.Message);
  end;

  Writeln('Press Enter to exit...');
  Readln;
end.

The reason I'm adjusting it now is that I think there's quite a significant collision chance when generating the bucket index, which can't easily be avoided, and I'd rather not have this cause exceptions in a Docker container using a DLL.

