
A.M. Hoornweg


Posts posted by A.M. Hoornweg


  1. On 11/29/2023 at 4:55 PM, Marsil said:

     

    
    while not StopSearch do
      begin
        ...
        var FileRec : TSearchRec;
        GetData (FileRec);
        UseData;
        ...
      end;

     

    TSearchRec is a record that contains a managed type (the file name is a string).

     

    When managed types go out of scope, compiler-generated code makes sure that any heap memory they occupy is released properly. That costs time.
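
    A minimal sketch of one way to avoid that per-iteration cleanup (an illustration, not OP's or Marsil's actual code; GetData, UseData and StopSearch are the placeholders from the snippet above): declare the record once, outside the loop, so its managed string field is finalized only when the routine exits.

    uses
      System.SysUtils;   // TSearchRec is declared here

    procedure Collect;
    var
      FileRec: TSearchRec;   // declared once, finalized once (on procedure exit)
    begin
      while not StopSearch do
      begin
        GetData(FileRec);
        UseData;
      end;
    end;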


  2. You misinterpret my idea.  OP merely used power() and rounding to demonstrate that 32- and 64-bit code do not have identical FP accuracy.
     

    Unfortunately, people think in decimal digits and want those represented and rounded accurately. I simply recommend using a fixed-point numeric type for storage if that is the purpose and if the number of decimals is <= 4.

     

    As soon as an FP calculation is involved, there is the risk that the result (a double, single or extended) is no longer an exact representation. But still, replacing "double" with "currency" in OP's example produces the expected result in both 32- and 64-bit mode.

     

     

     

     


  3. I wouldn't. 

     

    But I certainly do have a use for fixed-point numbers that must be compared exactly, for example as part of the composite primary key of a database table. So I cherish the fact that this numeric type can be tested for exact equality.


  4. If the number of decimal places is not greater than 4, OP might consider using the Currency data type. Currencies are exact fixed-point numbers with four decimals after the decimal point, so a value like 1.015 can be stored with perfect precision. Adding, subtracting and comparing currencies always gives an exact result. Internally, they are 64-bit signed integers with an implicit divisor of 10,000.

     

    [edit]

    functions like "power" return a double or extended. Prior to comparing the results of such functions, store them in currencies.

     


  5. OP needs 300 billion bits or so, he said.

     

    I assume that collecting that amount of data takes a long time and that the data will be evaluated afterwards to obtain some sort of useful result. So the program must run for a long time and may not be terminated prematurely, or else the contents of the TBits are lost and everything has to start over again.

     

    That sounds like a very time-consuming development/debugging cycle, which can be avoided by splitting the program into an acquisition process and an evaluation process that share a common block of virtual memory.

     

    The acquisition part can run "forever" in the background; it can be straightforward because it need not evaluate the results, it just supplies live data (bits) in memory that become more complete over time. The evaluation part can be a work in progress that can be comfortably debugged, refined and re-compiled while it observes that "live" data.
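
    A rough sketch of that split, assuming Windows; the mapping name, size and function names are made up, not something from the thread. The acquisition process creates a named file mapping, the evaluation process opens the same name, and both see the same bytes.

    unit SharedBits;   // hypothetical unit name, sketch only

    interface

    function OpenSharedBits: PByte;     // called by the acquisition process
    function OpenExistingBits: PByte;   // called by the evaluation process

    implementation

    uses
      Winapi.Windows, System.SysUtils;

    const
      MAP_NAME = 'Local\AcqBits';   // hypothetical name, shared by both processes
      MAP_SIZE = 4 * 1024 * 1024;   // kept small here; OP's ~37.5 GB would also need
                                    // the high-order DWORD of the size

    function OpenSharedBits: PByte;
    var
      hMap: THandle;
    begin
      // Paging-file-backed, named mapping. The handle is deliberately not closed:
      // it must stay open for the lifetime of the process so the block stays alive.
      hMap := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
        0, MAP_SIZE, PChar(MAP_NAME));
      if hMap = 0 then
        RaiseLastOSError;
      Result := MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);
      if Result = nil then
        RaiseLastOSError;
    end;

    function OpenExistingBits: PByte;
    var
      hMap: THandle;
    begin
      hMap := OpenFileMapping(FILE_MAP_READ, False, PChar(MAP_NAME));
      if hMap = 0 then
        RaiseLastOSError;
      Result := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
      if Result = nil then
        RaiseLastOSError;
    end;

    end.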

     

     

     

     


  6. 23 minutes ago, David Heffernan said:

    It's obviously more complicated for the programmer to use MMF than plain memory. Unless there was a benefit to using MMF then there's no point in taking on that complexity.

    Sure. But OP has to rewrite TBits anyway because of its size limitations. I'd advise him to make the allocation/deallocation methods virtual to keep all options open.
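
    A rough sketch of what I mean (hypothetical code, not OP's): a TBits-style class whose buffer management is virtual, so a descendant can later switch from the heap to VirtualAlloc or MapViewOfFile without touching the bit-twiddling code.

    unit BigBits;   // hypothetical unit name, sketch only

    interface

    type
      TBigBits = class
      protected
        FBuffer: PByte;
        FByteCount: NativeUInt;
        function AllocateBuffer(SizeInBytes: NativeUInt): Pointer; virtual;
        procedure ReleaseBuffer(P: Pointer); virtual;
      public
        constructor Create(BitCount: UInt64);
        destructor Destroy; override;
        procedure SetBit(Index: UInt64; Value: Boolean);
        function GetBit(Index: UInt64): Boolean;
      end;

    implementation

    constructor TBigBits.Create(BitCount: UInt64);
    begin
      inherited Create;
      FByteCount := (BitCount + 7) div 8;
      FBuffer := AllocateBuffer(FByteCount);   // virtual call: a descendant decides how
    end;

    destructor TBigBits.Destroy;
    begin
      ReleaseBuffer(FBuffer);
      inherited;
    end;

    function TBigBits.AllocateBuffer(SizeInBytes: NativeUInt): Pointer;
    begin
      Result := AllocMem(SizeInBytes);   // default: zero-filled block from the Delphi heap
    end;

    procedure TBigBits.ReleaseBuffer(P: Pointer);
    begin
      FreeMem(P);
    end;

    procedure TBigBits.SetBit(Index: UInt64; Value: Boolean);
    var
      ByteIdx: NativeUInt;
      Mask: Byte;
    begin
      ByteIdx := Index div 8;
      Mask := Byte(1 shl (Index mod 8));
      if Value then
        FBuffer[ByteIdx] := FBuffer[ByteIdx] or Mask
      else
        FBuffer[ByteIdx] := FBuffer[ByteIdx] and not Mask;
    end;

    function TBigBits.GetBit(Index: UInt64): Boolean;
    begin
      Result := (FBuffer[NativeUInt(Index div 8)] and Byte(1 shl (Index mod 8))) <> 0;
    end;

    end.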


  7. 55 minutes ago, David Heffernan said:

    But that isn't what this topic is about. I can't see anything in this topic that suggests that MMF would add anything over plain memory.

     

    Or have I missed something? 

    David, OP can just use whatever allocation method pleases him. The end result is a pointer to a memory block, whichever method he uses; it doesn't make one version worse than the other.


  8. On 7/4/2023 at 11:50 PM, David Heffernan said:

    These huge blocks won't be allocated by FastMM. They will be passed through to VirtualAlloc. But we aren't talking about a chunk of memory greater than what the machine has physically anyway.

     

    I'm rather sceptical that MMF would ever be appropriate here. 

    In the simplest case an MMF is just a block of bytes, so why would that be inappropriate? Just because it's allocated by a different API?

     

    We happen to use them extensively for data acquisition; they have that nice little feature that allows us to share the same buffer between processes, one process collecting data and another evaluating it. There are tons of use cases for that.


  9. 14 minutes ago, David Heffernan said:

    Doesn't memory already handle all of this? I mean, the paging system does all of this already surely? 

    I have never tried to allocate such enormous amounts of memory using Delphi's heap manager (FastMM4), so I really don't know how it behaves if you try to allocate one huge chunk that is bigger than what the machine has physically. The documentation says: "For Win32 and Win64, the default FastMM Memory Manager is optimized for applications that allocate large numbers of small- to medium-sized blocks, as is typical for object-oriented applications and applications that process string data."

     

    MapViewOfFile() bypasses the Delphi heap completely and leaves it up to Windows to map a contiguous block of virtual memory. 

     


  10. 16 hours ago, David Heffernan said:

    Why would a memory mapped file be remotely helpful here?

    For size reasons. Memory-mapped files let you:

     

    - use contiguous arrays bigger than available RAM, by mapping file-backed data into the 64-bit virtual address space

    - use simple pointer arithmetic to access individual bytes; it is dead easy to implement SetBit(), GetBit(), etc.

    - let the operating system's caching algorithm handle the intricacies of swapping pages in and out (LRU/MRU etc.)

    - benefit from the speed and low latency of modern SSDs

    - have the data on disk, ready to be reused

     

     

    Speed-wise this is only an option if the operating system can minimize swapping, so access to the elements shouldn't be totally random. If the probability of accessing an element follows some kind of bell curve, then it might just work.

     

    [edit] typo.
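
    A minimal sketch of the file-backed variant (the path, file name and bit index are made up; a 64-bit build is assumed):

    program MmfBits;
    {$APPTYPE CONSOLE}
    uses
      Winapi.Windows, System.SysUtils;
    const
      BIT_COUNT: UInt64 = 300 * 1000 * 1000 * 1000;   // "300 billion bits", roughly 37.5 GB
    var
      SizeBytes: UInt64;
      hFile, hMap: THandle;
      Bits: PByte;
    begin
      SizeBytes := (BIT_COUNT + 7) div 8;
      // The file on disk backs the array, so it can be far larger than physical RAM
      // and the bits are still there the next time the program runs.
      hFile := CreateFile('d:\bits.dat', GENERIC_READ or GENERIC_WRITE, 0, nil,
        OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, 0);
      Win32Check(hFile <> INVALID_HANDLE_VALUE);
      hMap := CreateFileMapping(hFile, nil, PAGE_READWRITE,
        DWORD(SizeBytes shr 32), DWORD(SizeBytes and $FFFFFFFF), nil);
      Win32Check(hMap <> 0);
      Bits := MapViewOfFile(hMap, FILE_MAP_ALL_ACCESS, 0, 0, 0);   // map the whole view
      Win32Check(Bits <> nil);

      // SetBit/GetBit reduce to plain pointer arithmetic on the mapped view:
      Bits[123456789 div 8] := Bits[123456789 div 8] or (1 shl (123456789 mod 8));

      UnmapViewOfFile(Bits);
      CloseHandle(hMap);
      CloseHandle(hFile);
    end.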


  11. On 6/30/2023 at 7:17 PM, Remy Lebeau said:

    I question the design of the code shown.  I would have opted to implement it more like this instead:

    
    procedure AnalyzeException(const E: Exception);
    begin
      // ...Do something with the object ...
    end;
    
    procedure TestCase (ReRaiseException {or: SwallowException}: Boolean);
    var
      OneThousand, Zero: integer;
      d: double;
    begin
      Zero := 0;
      OneThousand := 1000;
      try
        d := OneThousand / Zero;
      except
        on E: Exception do
        begin
          AnalyzeException(E);
          if ReRaiseException {or: not SwallowException} then raise;
        end;
      end;
    end;

     

     

    Remy, this was just a tiny test case to reproduce the memory leak caused by the non-functionality of ReleaseExceptionObject. 

     

    I was just trying to get a better understanding of the inner workings and lifetime management of exceptions. One such exercise was to detect if an exception was active (hence not passing it as a parameter), then to analyze it (and log the information somewhere) and to conditionally re-raise it based on the information gathered.  Ideally I'd like to hook into the system before "except" even fires.

     

     

     

     


  12. Hello all,

     

    I have the impression that SysUtils.ReleaseExceptionObject is not working as advertised.

    I see that it's just a dummy procedure. Please find a test case below.

     

    Calling TestCase(True) causes a memory leak, whereas TestCase(False) works correctly. The memory leak is gone when I free the exception object manually.

     

    
    procedure AnalyzeException(ReRaise: Boolean);
    var e: Exception;
    begin
      e := Exception(AcquireExceptionObject);
       // ...Do something with the object ...
    
      if ReRaise then
        raise (e) at ExceptAddr
      else
        ReleaseExceptionObject; // This should work but is a dummy procedure...
        //FreeAndNil(e);  --> this works though
    end;
    
    
    
    procedure TestCase(SwallowException: Boolean);
    var OneThousand, Zero: integer; d: double;
    begin
      Zero := 0;
      OneThousand := 1000;
      try
        d := OneThousand / Zero;
      except
        AnalyzeException(SwallowException);
      end;
    end;

     


  13. On 6/15/2023 at 5:14 PM, David Heffernan said:

    Not on all delphi versions, IIRC. If you want a 32 bit integer then there are types for that. FixedInt and FixedUInt. 

     

    The whole point of Integer is that it maps to the platform's C int. 

    Frankly, I have never used FixedInt/FixedUInt and probably never will. The prefix "Fixed" does not convey any information; they might as well have called it TweakedInt.

     

    [edit]

     

    I see in unit System.pas that the integer types Int8, Int16, Int32, Int64, UInt8, UInt16, UInt32 and UInt64 all exist. If we want a fixed 32-bit integer, we can specify that concisely.
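
    For instance, a record layout that must be identical on every platform can spell the widths out (an illustration, not code from the thread):

    type
      TFileHeader = packed record
        Magic:   UInt32;   // always 4 bytes, unsigned
        Version: Int32;    // always 4 bytes, signed
        Count:   UInt64;   // always 8 bytes
      end;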

     

     

     


  14. Changing all incorrect pointer/integer casts is infinitely easier, because the compiler tells us at compile time that the typecast is invalid.

     

    [edit] Having said that, I don't usually use the "Tag" property of TComponent etc. anymore. It is much more convenient to use a TDictionary<TObject, TSomethingElse> to figure out whether an object is associated with something else.
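
    A minimal sketch of that TDictionary approach (TOrderInfo and the value 42 are made-up stand-ins):

    program TagAlternative;
    {$APPTYPE CONSOLE}
    uses
      System.SysUtils, System.Classes, System.Generics.Collections;
    type
      TOrderInfo = class   // hypothetical payload associated with a component
        OrderNo: Integer;
      end;
    var
      Map: TObjectDictionary<TObject, TOrderInfo>;
      Comp: TComponent;
      Info: TOrderInfo;
    begin
      Map := TObjectDictionary<TObject, TOrderInfo>.Create([doOwnsValues]);
      Comp := TComponent.Create(nil);
      try
        Info := TOrderInfo.Create;
        Info.OrderNo := 42;
        Map.Add(Comp, Info);                    // instead of abusing Comp.Tag
        if Map.TryGetValue(Comp, Info) then
          Writeln('Order: ', Info.OrderNo);
      finally
        Comp.Free;
        Map.Free;   // doOwnsValues: the TOrderInfo instances are freed with the dictionary
      end;
    end.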


  15. 2 minutes ago, Brandon Staggs said:

    I cannot fathom the paradigm shift that would be. Well, I can fathom it, and I think it would be Delphi 8 all over again for them to introduce such a massive breaking change in concept and code. It would be nice if they could isolate the design-time packages enough to protect the IDE, but removing them from the IDE would eliminate one of Delphi's few core advantages over other systems. Whatever they change it to would also require the participation of everyone making packages, which is not going to happen.

    One way to achieve isolation would be to load each package into a separate helper process. And yes, it would be a monumental breaking change.


  16. 1 minute ago, dummzeuch said:

    That's not restricted to the Delphi IDE and packages though. Basically any DLL loaded into a process can do the same to that process.

    (I'm not sure about dotNET in that respect though.)

    Of course. But I've seen many design-time packages of poor quality. And certain packages that I have installed in Delphi 11.3 cause the IDE to throw an exception when I close it.
