
Kas Ob.


Posts posted by Kas Ob.


  1. Quote

    Local subprograms with references to outer local variables (OPTI6)

     

    This section shows nested local procedures, with references to outer local variables. Those local variables require some special stack manipulation so that the variables of the outer routine can be seen by the inner routine. This results in a good bit of overhead.

    I disagree with this statement. The Delphi compiler falls back to the stack (indexed access) whenever it runs out of registers, with optimization enabled or disabled (when disabled it always does). For a very simple local procedure there can indeed be overhead, because a register is needed for the parent frame, so the compiler switches to the slower indexed stack access. But in most cases local procedures are used to reduce complexity and increase readability, which implies many local variables, so the parent is most likely already using indexed stack access for most of them. Introducing a local procedure/function in that situation can actually give the compiler the ability to use registers again, especially when loops are moved into local procedures, increasing speed even with the few stack-setup instructions.

    So I don't agree with that statement; it is relative to some degree and depends on the usage. Again, with complex procedures containing more than one loop, or even nested ones, extracting some of the code into a local procedure can be useful, mostly for readability and reduced complexity, and it may also improve performance. But who knows how the compiler will behave: in many cases simply rearranging the order of the declared local variables improves performance, because the compiler assigns registers to the first variables it encounters. So reordering the variables may yield better performance, but it will not always help a local procedure with loops, especially if it was stack-accessed to begin with and the compiler lost track or was simply overwhelmed by the variables or the code.

     

    Also, let's not forget that parameters passed to the parent procedure/function already occupy registers, so the chance of losing register allocation for local variables is already greatly amplified.


  2. 2 hours ago, limelect said:

    1. while changing component sources DPR changes to

    requires>>rrequires

    contains>>ocontains

    end>>d

    I have known this very same behaviour for many years. The chance of it happening is high when adding/removing a package or project from a project group using the Project Manager (drag and drop, then removing). There are other cases too which I can't pin down for sure, but renaming a shared unit between packages or projects while those projects are grouped can also cause this corruption. Confirmed on D2010 and XE8.

     

    And to confirm this has nothing to do with line endings or anything the user did while editing the dpr: the project is saved and not even opened in the IDE, the project compiles fine, then something happens and the compiler points at "rrequires" in the dpr. Undo on that dpr doesn't work either; the IDE's broken parser rewrote that specific dpr in full with a few corrupted keywords, even though it should not have edited it at all.


    I don't have the equipment to debug or deep dive into this subject, but from this

    1 hour ago, Dalija Prasnikar said:

    Some of the difference can also be explained with juggling memory between CPU and GPU on Windows as those other platforms use unified memory for CPU/GPU. However, my son Rene (who spends his days fiddling with GPU rendering stuff) said that from the way CPU and GPU behaves when running Skia benchmark test, that all this still might not be enough to explain observed difference, and that tiled based rendering on those other platforms could play a significant role. 

    I want to point something out, and I hope it is not a waste of time.

    What about the synchronizing objects here? How many are there? What are they? Are they critical sections or something else?

    See, returning data from the GPU is done by either blocking or waiting, but in both cases these are internally hardware interrupt triggers, so the drivers play a role, as does the waiting mechanism, if there is one.

    So I would suggest that Rene count these and, if possible (in case there are many), replace each and every one of them on the FireMonkey side (i.e. patch them one way or another) with a spinlock that spins like this:

    1) Try spinning in place for a 50k-100k iteration loop before reducing the spin speed with SwitchToThread or Sleep(0).

    2) Tweak the 50k-100k range up and down and watch the impact on FPS.

    3) Use Sleep(0) and SwitchToThread without spinning in place.

     

    Also, how many objects are really being sent to the GPU (data or whatever), compared to how many render calls there are on the FMX layer per object? A ratio would help here.

     

    There was a bug, or an inefficiency, in AlphaSkins where painting a control needed its parent, which caused the parent to render all of its controls onto itself. That triggered many renders with each paint, escalating or amplifying the CPU usage on the main thread and making it stutter.

     

    Here is my AlphaSkins demo; it might help as a more accurate or easier way to track this behavior with FMX:

    CustomDraw.rar

     

    The idea is simple: these boxes should show 10, as in 10 FPS objects to be rendered, no more and no less.

     

    In any case, sorry if this post is a waste of time.


    @Clément Try this refined non-blocking code with the socket leak fixed:

     

    unit uTCPPortCheckNonBlocking;
    
    interface
    
    uses
      Windows, Winsock2;
    
    function WSAInitialize(MajorVersion, MinorVerion: Integer): Boolean;
    
    function WSADeInitialize: Boolean;
    
    function CheckTCPPortNB(const IP: string; Port: Integer; out TimeMS: Integer): Boolean;
    
    var
      CHECKPOINT_TIMEOUT_MS: integer = 1000;
    
    implementation
    
    function CheckTCPPortNB(const IP: string; Port: Integer; out TimeMS: Integer): Boolean;
    var
      s: TSocket;
      Addr: TSockAddrIn;
      SAddr: TSockAddr absolute Addr;
      QPF, QPC1: Int64;
      NonBlockMode: DWORD;
      Res: Integer;
      FDW, FDE: fd_set;
      TimeVal: TTimeVal;
    
      function GetElapsedTime: Integer;
      var
        QPC2: Int64;
      begin
        QueryPerformanceCounter(QPC2);
        Result := (QPC2 - QPC1) div (QPF div 1000);
      end;
    
    begin
      Result := False;
      TimeMS := 0;
    
      s := socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      if s = INVALID_SOCKET then
        Exit;
    
      NonBlockMode := 1;
      ioctlsocket(s, Integer(FIONBIO), NonBlockMode);
    
      Addr.sin_family := AF_INET;
      Addr.sin_addr.S_addr := inet_addr(PAnsiChar(AnsiString(IP)));
      Addr.sin_port := htons(Port);
    
      QueryPerformanceFrequency(QPF);
      QueryPerformanceCounter(QPC1);
    
      FDW.fd_count := 0;   // assume "not ready" until select reports otherwise
      FDE.fd_count := 0;
    
      Res := connect(s, SAddr, SizeOf(SAddr));
      if Res = 0 then
        FDW.fd_count := 1  // connected immediately (rare for a non-blocking socket)
      else if (Res = SOCKET_ERROR) and (WSAGetLastError = WSAEWOULDBLOCK) then
      begin
        TimeVal.tv_sec := 0;          // 1 sec = 1000000 usec
        TimeVal.tv_usec := 1000;      // 1 ms  = 1000    usec
    
        repeat
          FDW.fd_count := 1;
          FDW.fd_array[0] := s;
          FDE.fd_count := 1;
          FDE.fd_array[0] := s;
          TimeMS := GetElapsedTime;
          Res := select(1, nil, @FDW, @FDE, @TimeVal);
        until (Res > 0) or (TimeMS >= CHECKPOINT_TIMEOUT_MS);
      end;
    
      Result := (FDW.fd_count = 1) and (FDE.fd_count = 0);
    
      TimeMS := GetElapsedTime;
    
      if s <> INVALID_SOCKET then
        closesocket(s);
    end;
    
    function WSAInitialize(MajorVersion, MinorVerion: Integer): Boolean;
    var
      WSA: TWsaData;
    begin
      Result := WSAStartup(MakeWord(MajorVersion, MinorVerion), WSA) = 0;
      if Result then
      begin
        Result := (Byte(WSA.wVersion shr 8) = MinorVerion) and (Byte(WSA.wVersion) = MajorVersion);
        if not Result then
        begin
          Result := False;
          WSADeInitialize;
        end;
      end;
    end;
    
    function WSADeInitialize: Boolean;
    begin
      Result := WSACleanup = 0;
    end;
    
    initialization
      WSAInitialize(2, 2);
    
    
    finalization
      //WSADeInitialize;
    
    end.

    the result

    (screenshot of the output omitted)

    and that is a clean WireShark view, don't you think?

    Also, the timing is very close to an ICMP ping. Adjust CHECKPOINT_TIMEOUT_MS; in the code above it is 1 second.

     

    Now to the real question, because I still don't understand what you are trying to achieve. I will lay out the logic circulating in my brain; please either correct me (show me what I am missing) or try to be convinced by a different approach.

     

    Trying to utilize TCP as a ping somehow: if the point is to keep the connection alive, then this will not help at all. You need to

    1) Use the same TCP connection you are trying to keep alive, meaning add a specific Ping/Pong protocol between your Client and Server (explained earlier).

    Or, if you don't need to keep a connection alive but need to check server availability on a specific port, then fine, you need to

    2) Do a simple connect like (1), or connect and then disconnect like the samples above. But in this case you don't need to send anything or wait to receive, so sending zero-length data is asking for undefined behavior or unknown consequences: the firewall interactions I mentioned, or, since PERM_SACK is always present and signals the peer (server) that it is OK to send ACKs selectively, the server might, after accepting and ACKing the connection, hold the ACK for that zero-length data packet, because it is within its rights to do so, keeping it for as much as 300 ms perhaps, while answering your packet with another zero-length packet. Your client then receives an answer but no ACK for its send, and this causes confusion.

    3) If you don't want to just detect a listening port (open-port, port-scan style) but want to check that the server's logic is answering requests, then a simple connect and disconnect is not suitable, and you need to

    4) Connect and send something of non-zero length to avoid (2), and then we are back to (1) or (2) again because of (3).

     

    This is the loop that is bugging me.

     

    Anyway, the code in this post works fine, it is non-blocking, and it will detect a running server on a specific TCP port. If you want to send and receive, the change is minimal:

    Repeat the select loop twice: the first to check readiness for write (the same loop above), then perform the send; then another loop, but for read instead of write (you need an FDR just like FDW, without FDW; you can reuse the same variable). After that, perform the recv, then close; do not call shutdown. To be thorough, add another read-check loop after the recv, the same as the first, and that is it.

     

    Hope that helps.


  5. 6 hours ago, Remy Lebeau said:

    You are leaking the socket handle if connect() fails.  You have to close the handle if socket() succeeds, regardless of whether connect() succeeds.

    Thank you for pointing this out.

    6 hours ago, Remy Lebeau said:

    You can't reduce (or even set) the timeout on a blocking connect().  You have to use a non-blocking connect() and then use select() or equivalent to implement the timeout.

    Yup, you are right there, as I can't remember trying this on a blocking connect. And it seems Windows still doesn't support TCP_USER_TIMEOUT, even though it was RFCed back in 2009: https://datatracker.ietf.org/doc/html/rfc5482

    Thank you again.

    6 hours ago, Remy Lebeau said:

    AFAIK, you can't disable SACK on a per-socket basis.  You have to use the Registry or command-line netsh utility to disable it globally:

    SACK can be disabled per socket, of that I am sure, and you can test it. But from what I can see, PERM_SACK is always globally available, whether SACK is disabled or enabled; PERM_SACK is what allows the peer to use and utilize SACK.


  6. @Clément Try this please:

    unit uTCPPortCheckBlocking;
    
    interface
    
    uses
      Windows, Winsock2;
    
    function WSAInitialize(MajorVersion, MinorVerion: Integer): Boolean;
    
    function WSADeInitialize: Boolean;
    
    function CheckTCPPortBlocking(const IP: string; Port: Integer; out TimeMS: Integer): Boolean;
    
    implementation
    
    function CheckTCPPortBlocking(const IP: string; Port: Integer; out TimeMS: Integer): Boolean;
    var
      s: TSocket;
      Addr: TSockAddrIn;
      SAddr: TSockAddr absolute Addr;
      QPC1, QPC2, QPF: Int64;
    begin
      Result := False;
      s := socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
      if s = INVALID_SOCKET then
        Exit;
    
    
      Addr.sin_family := AF_INET;
      Addr.sin_addr.S_addr := inet_addr(PAnsiChar(AnsiString(IP)));
      Addr.sin_port := htons(Port);
    
      QueryPerformanceFrequency(QPF);
      QueryPerformanceCounter(QPC1);
    
      Result := connect(s, SAddr, SizeOf(SAddr)) <> SOCKET_ERROR;
    
      QueryPerformanceCounter(QPC2);
      TimeMS := (QPC2 - QPC1) div (QPF div 1000);
    
      if Result then
        closesocket(s);
    
    end;
    
    function WSAInitialize(MajorVersion, MinorVerion: Integer): Boolean;
    var
      WSA: TWsaData;
    begin
      Result := WSAStartup(MakeWord(MajorVersion, MinorVerion), WSA) = 0;
      if Result then
      begin
        Result := (Byte(WSA.wVersion shr 8) = MinorVerion) and (Byte(WSA.wVersion) = MajorVersion);
        if not Result then
        begin
          Result := False;
          WSADeInitialize;
        end;
      end;
    end;
    
    function WSADeInitialize: Boolean;
    begin
      Result := WSACleanup = 0;
    end;
    
    initialization
      WSAInitialize(2, 2);
    
    
    finalization
      //WSADeInitialize;
    
    end.

    This is blocking mode, and you can reduce the timeout before connect to 1 second.

    And I am really angry now, as I can't disable SACK for connect! Not only was I sure it was disabled for connect, I had to enforce enabling it in my 5-year-old project when I needed to deeply analyze FileZilla's fast SSH behavior and replicate it in my code. Back then I found SACK played a huge role in achieving a stable 8 MB/s file download over a 100 Mbps connection between Ukraine and France with RTC and SecureBlackBox.

    But now, with my current Windows (10.0.19045.4291), it is enabled and can't be disabled as far as I can see; back then my Windows 10 was version 17-something.

     

    Anyway: the code above is simple, even stupid, but it performs fast without needing to send or perform a recv. It detects whether the remote port is listening, and I think that is what you are looking for.

    This is my result with a Google IP:

    var
      PortScanRes: Boolean;
      Time: Integer;
    begin
      PortScanRes := CheckTCPPortBlocking('142.251.36.46', 80, Time);
      Memo1.Lines.Add(BoolToStr(PortScanRes, True) + '  ' + IntToStr(Time));
    end;

    and the output:

    True  42
    True  41
    True  42
    True  42
    True  41
    True  42
    True  41
    True  42

     


  7. 9 hours ago, Clément said:

    I must be messing the connection shutdown / close

    No, you are not !

     

    I cannot compile your code, but I have to point out that SACK (Selective Acknowledgements) is enabled, and this option might solve your problem (screenshot of the setting omitted).

     

    Disable delayed ACK, as Remy suggested: https://stackoverflow.com/questions/55034112/c-disable-delayed-ack-on-windows

     

    But apart from that, I highly recommend using select before any read or receive, or even close. select is your best friend and guide for these ACKs; select will trigger sending pending ACKs.

     

    Now to the real cause: you have to know this very important fact about shutdown( , SD_SEND):

    It will send the FIN and wait for its ACK, but it will, with high probability, prevent sending any further traffic, even ACKs! This ACK behavior has changed over many Windows versions and differs from OS to OS.



    Also, your Wireshark capture is only from the client, which is half useful; you will get a better picture by capturing the traffic on both sides. So what exactly was that ACK acknowledging? Notice that the failed and retransmitted packet is the ACK with SEQ=1, so what failed to send is the FIN, not the ACK: the packet before the last one.


  8. I do understand, and it will work.

     

    Just handle an error after that send to exit the loop. And I want to add this fact:

     

    The TCP peer that closes the connection first is the one that ends up in TIME_WAIT. TIME_WAIT is absolutely harmless and in fact a good thing; just don't panic if, as in this case on the server side, you see them accumulate.


  9. Hi @Clément ,

     

    Well, I couldn't let it go, as that loop is stuck in my head and it needs fixing.

     

    Firstly, and as a rule of thumb in this era: always disable the Nagle algorithm with TCP_NODELAY, even if Windows says it is disabled by default: https://learn.microsoft.com/en-us/windows/win32/api/winsock/nf-winsock-setsockopt

     

    Secondly :

    That "while not done do" loop is confusing. What is the point of it? Is it acting as blocking or non-blocking?

    I mean, what will happen if Send returned an error? Then you are spiking the CPU for 200 ms with that Sleep(0) over an erroneous/broken socket. It is also effectively acting as blocking, so the logical move is to replace it with a more reliable approach.

     

    And here the suggestions :

    1) Don't send a zero-length TCP (or UDP) packet! Yes, it works for now, but trust me, there are many firewalls of different flavors and smells, and they will block it as DoS, and rightfully so.

    2) Is your zero length meant to save traffic? If so, it makes little sense: the packet header for IPv4 plus TCP is over 44 bytes and will most likely be 56 bytes, so send something. If you are checking your own server, send the current time (current tick) and let the server return it; then you have the round trip measured in ms. In my projects I have a similar ping, but I define the protocol as (2+1)*4 then (3+1)*4 bytes: the client sends its time and the last received server time; the server adds its current time and the last received client time and returns it. Hence both sides have a live delay measurement, and the server in particular can map its clients' latencies. (The +1 is for the ping/pong header and its options.)

    3) Why create the socket each and every time? Create it once and reuse it; I always use the same one as for the real data traffic. If you are pinging an HTTP server, perform a small request and let the server reply with a minimal response.

    4) Before sending, it is always better to check readiness for write, aka writability; so in your code above you can perform select() and then, and only then, perform the send.


  10. 2 hours ago, david_navigator said:

    @Kas Ob. could you explain in a little more detail about the memory manager please ? 

    Have a look at this HeapMM 

    https://github.com/maximmasiutin/FastCodeBenchmark/blob/master/MMv1_BV113/HeapMM.pas

     

    This is more than sufficient for your use case; only you must understand that the way you will use the DLL is similar to the way you intended to use the separate process.

    In other words: be careful! There is no memory sharing for managed types!

    So no passing of strings or arrays (among other managed types) between your exe and that dll!

     

    Last thing, there is this directive in that HeapMM

    {$DEFINE USE_PROCESS_HEAP}
    This should be disabled
    {.$DEFINE USE_PROCESS_HEAP}

    By disabling it, the dll will use its own heap and will free it, so if it leaks memory, the leak is erased on unloading the library.

     

    Also note this: if you are going to run multiple scripts concurrently, then the DLL approach will be challenging. So either let them run and unload the library (FreeLibrary) every now and then, or copy the DLL under different names and load the copies separately. Just food for thought.

     

    ps: understanding how a memory manager works is nice to know, and the heap one is the most minimalistic form of a memory manager.


  11. 2 hours ago, david_navigator said:

    If I made the scripting engine a stand alone EXE then whatever the user wrote, Windows would clean up when the exe closed.

    Yes, this is the only way. You can also make it a DLL; only make sure not to use a memory manager that is shared with the exe.

     

    Use the OS heap API as local memory manager.


  12. 17 hours ago, Brandon Staggs said:

    high-entropy ASLR will more quickly expose those errors

    No, it will not.

     

    Because:

    1) For 32-bit it will stay within 32 bits and will not hit the signed/unsigned range anyway, hence it will keep working.

    2) For 64-bit: well, same as above!

     

     

    Now back to this as a whole:

    17 hours ago, Brandon Staggs said:

    If your code assumes that a pointer is an Integer (likely due to legacy but still invalid assumptions from a decade or more in the past), high-entropy ASLR will more quickly expose those errors  when compiling for 64-bit. That's what I was referring to.

    You are in a different village here. What you are talking about, or referring to, is stuff that belongs to runtime pointer operations, calculated at run time:

    1) These bugs are exposed by FastMM with high memory allocation; there is a setting in FastMM4Options.inc called AlwaysAllocateTopDown which will catch, or at least help in catching, such bugs.

    2) No matter what your code does with pointers, the code itself stays in place, where the image was loaded (not talking about runtime-generated code like JIT). We are talking about a default process started by executing an exe, and we assume the compiler did its job right: the EIP will not leave the pages designated as executable when the image was loaded by the OS. Now, if we move these pages up in the address space, meaning we push the loading address by 100 MB or $F000000, will it fail?

    If there is a relocation table, it will not fail; if there is no relocation table, the OS reverts to the default, most likely $400000, and calls it a day.

     

    16 hours ago, DelphiUdIT said:

    Like exe, the DLL have a field in the IMAGE_OPTIONAL_HEADER64 - DllCharacteristics that enable or not the ASLR. In effect lot of DLLs are not ALSR compatible (especially the old ones). And they should works whenever ASLR is ON or OFF.

    ASLR merely loads the image at different places, as I said, and here I want to list what it does exactly, one last time:

    1) Load the executable binary/image at a random address; in the old days it was always the same and came from the PE.

    2) Randomize the stack allocation. Without it, as in the old days, the stack of the main thread, say, was always at the same address. This is easy to observe even now with a debugged process by comparing the stack position between two runs, and that is because of (3):

    3) Randomize memory allocation. This is the second most important change: a process calling VirtualAlloc at the beginning of its start without ASLR enabled will most likely get the same address every time; ASLR prevents that.

    Now, in both cases, whether a DLL supports ASLR or not, it must comply with the relocation table to be loadable at all, and that covers most of ASLR's critical points. The rest is whether the code expects a memory allocation or a stack position to be the same every time; I have never seen Delphi-generated code do that, hence all Delphi DLLs are ASLR-ready, and they will work fine.

    For an EXE it might be different, but only regarding the image loading address; it too is ready for the randomization of the stack and allocations.

     

    This is a nice article about ASLR failing to be ASLR: https://insights.sei.cmu.edu/blog/when-aslr-is-not-really-aslr-the-case-of-incorrect-assumptions-and-bad-defaults/

     


  13. 1 hour ago, Brandon Staggs said:

    In fact, ASLR seems like a great way to make badly written code break during testing rather than only rarely in production...

    Well, this needs some explanation.

     

    1) ASLR has nothing to do with code quality, well written or bad.

    2) ASLR is merely randomization of the system's memory allocation for a process, so no address can be predicted across executions.

    3) Badly written code can violate ASLR, but that is not simply bad code; it is code that operates outside the control of the compiler, i.e. overrides it with fixed addresses. Such code will fail; ASLR was in fact made for this case. If the executable's code abuses the addressing scheme, it will fail; beyond that, only malicious and blindly (or remotely) injected code will fail to run as intended, because no addresses are known beforehand.

    4) Windows executables support ASLR by simply having a correct and well-defined relocation table in the PE.

    5) By definition all DLLs should and must be ASLR-compatible, because from the beginning of the Windows OS and its API, a DLL's loading address could not be known beforehand and could not be fixed; yet for many years and many versions, Windows loaded specific DLLs at the same addresses.

    6) For an EXE to support ASLR it must have a relocation table (https://0xrick.github.io/win-internals/pe7/); without it, it will fail if ASLR is enforced by the OS. This is a historical problem: for a long time the default loading address for an EXE on Windows was $400000, so many linkers (and compilers) didn't follow the rules and generate a correct relocation table. Even worse, many tools that minimize EXE size strip that table, rendering the EXE not ASLR-compatible.

    7) To my knowledge, the relocation table in Delphi-generated binaries was correct and present, so there should be no problem unless there is a regression somewhere.

     

    Again, ASLR compatibility is the linker's responsibility, but the compiler could still generate code that confuses the linker and makes it fail to build a correct relocation table.

     

    ps: The relocation table, in short, is a table the system parses to patch (change addresses) after (and while) loading the PE image sections into memory, translating the relative addresses in that table to the addresses the OS picked: before handing execution to the new process in the case of an EXE, and, while blocking LoadLibrary, before calling DllMain in the case of a DLL.


  14. 2 hours ago, Stefan Glienke said:

    Also please let's get the terminology right ...

    
    TProc = reference to procedure;

    This is a method reference type - yes, even the official documentation is a mishmash.

    Yes, right on the point.

     

    This is very confusing, as there are anonymous methods and there are anonymous procedures/functions.

    These are different things: methods are aware of Self, which for them is a must-have, while the latter have no Self.


  15. 33 minutes ago, Clément said:

    If I remove the path or rename the unit the project doesn't compile.

    I think I can pinpoint it!

     

    I just remembered another situation, when a project wasn't using a specific 3rd-party library, namely AlphaSkins: the acPng dcu kept popping up in the compiled output while it wasn't referenced in that project. The project wasn't even using AlphaSkins, just the standard VCL TImage. This happened in my IDE on my PC, not at the client's (the owner of the source). I solved it by simply disabling or removing that package from the IDE when working on that project.

     


    I have witnessed such behavior; a few times it was not in a pas unit but in a dcu file referring to some ghost unit with no reference in the code.

    It was a unit whose reference had been removed during refactoring, so searching for the name of the needed unit returned no result, while the dcu still held the reference.

     

    Try renaming the folders with your DCUs; also, if it is dcu tracking, Process Monitor will show some extra information about this.


  17. 24 minutes ago, Rollo62 said:

    what made you think C++ fails to be the best language.

    As I said, because it has failed so many times to guarantee better and safer code practice, as we have read so many times.

    So is it the best? Yeah, it is...

    Is it suggested that it continue to be used in the most critical software?

    Here comes the answer: NO.

    So is there a better solution?

    Yes, and they are calling to skip it for something better and safer; not, again not, because it can't deliver, but because there is a problem with the human factor, and they have given up on fixing it.

     

    Don't take my opinion on the subject; think about this:

    Can all the big companies be wrong as they call to drop the language and switch to something else?

     

    And please remember that whatever you might suggest or even imagine has already been done and tried with C/C++, and it still failed again and again.

     

    Rust doesn't have that much that is unique as a language, except key features which limit the developer's freedom to design as he might imagine or wish, and involve the compiler even more. For Delphi (and this might trigger many here), smart pointers would be the wrong direction and will not solve anything; not just because Embarcadero might fail to deliver, or because it would be a waste of time and resources, but because C/C++ with all its libraries is still failing, so why repeat what has been tested and expect a different result?

    Please don't take this as a debate for now; it is what one can deduce from what is happening now.

     

    Delphi would be many folds safer by removing or limiting manual memory handling in code and giving it to the compiler, same as Rust. If there is no shuffling, then we need neither smart pointers nor stupid ones; it is that simple. Will this hinder our coding design? Maybe it will, but most likely it only needs more code, though I doubt it would be longer than it is now. In all cases it would ensure a better design and structure, a hardened and sound one.

    Imagine one directive at the top of a Delphi unit that makes all variables and fields initialized and managed by the compiler at compile time. The compiler would stop you until you fix them all, breaking zero backward (legacy) code while ensuring the future is brighter and better. The transition would be eased by compiler errors and warnings, and there would be no smart pointers with stupid code trying to claw back a few milliseconds of performance when a different approach could be faster at the root.


  18. 2 hours ago, Rollo62 said:

    I would disagree, at least from my point of view.
    I used try-catch in C++ on a regular base, even years ago, and find them even more flexible and more clear to use than try-finally-except.

    Fair, and well within your right, but let me ask you a simple question:

     

    When did you last see Delphi/Pascal code where a function had a one-letter name like f or g?

    Yesterday I watched a talk;

    I believe I pasted the link as Stefan put it in the German forum.

     

    The talk is great and very valuable, and if anyone is not familiar with the lecturer, know that this dude writes C++ specifications for breakfast, and here he comes writing the guidelines for safety. Yet when I watched it, I saw that he himself, in his own examples, violates the rule to always use expressive naming; yes, there were f and g. This comes from math, and the obsession of C++ developers with math to look cool; everyone looks to Bjarne and his talks and books, yet beginners will scratch those one-letter function names into their brains forever.

     

    Look at this, titled "New Rules", from 2020:

    https://devblogs.microsoft.com/cppblog/new-safety-rules-in-c-core-check/

     

    I think every C and C++ compiler out there warns about a missing default in a switch, yet that blog post deemed it a must to remind everyone of it. In Rust the equivalent code will not compile; in C and C++, 90% of the time no one will see the warning because there are 458648567 other warnings. Some of these warnings make sense, but most of them don't. C and C++ are cursed with arrogance and stubbornness, and their culture is beyond control (my opinion).

     

    Rollo62, all I am saying is that even with all of this power, C++ fails to be the best language for long-term security and safety. They (the most invested companies and teams, and I trust they have better insight and view on this subject) said C/C++ will not cut it and Rust it is. They decided to burn the tower and throw away the power tools for a better language that (again) doesn't bring anything new; on the contrary, it only removes the ability to use many C++ features, and evidently they see it as a success story.

    • Like 1

  19. One more thing: if it were easy to define and put a finger on memory safety issues in C++, then trust me, no one would be discussing memory safety at this scope and with this range of popularity.

    Each project and each company has its own strict guidelines for safety and integrity of code, yet it is clear that they still fall short and still fail.

     

    In movies they say the villain must be masterful and make no mistakes every time, while the good detective only needs to be lucky once.

    With security this rule is reversed: the good guy must not make a single mistake in thousands of lines of code, while the villain only needs to be lucky enough to find one mistake.

     

    With Delphi/Pascal it is easier to produce memory-safe code and executables.


  20. 43 minutes ago, Rollo62 said:

    I get your point, but it's maybe an issue of wrong usage of terms here.
    When talking about memory-safety: Delphi is NOT
    When talking about: system-vulnerability-safety (or the like): You could probably argue that it's better than C++, closer to behaviour as JS, but I doubt that too.

    It's not only that.

     

    See, we Delphi programmers are used to doing things in a specific way, partly out of the structure of the Pascal language and partly by tradition or best practice. It goes, as an example, automatically and without thinking: put try..finally, then call Free. In C++ there is nothing like this. I am not saying C++ is incapable of it; on the contrary, it can do more because it is more powerful, but C++ developers don't have the discipline of Pascal's, and this is exactly why all these big companies with their top-tier developers keep falling into the same traps again and again.

     

    When these big companies suggest switching to a different language like Rust, it is not out of the power of Rust or a lack of tools in C++; on the contrary, they reached the conclusion that too much power is simply harmful, and that more limited tools are a better fit and more secure.

     

    When you say Delphi is not memory safe: yes, strictly speaking it is not, just like any other unmanaged language, yet the guidelines for Pascal and Delphi to produce memory-safe code (against both bugs and vulnerabilities) are way simpler and clearer than C++'s. Rust is similar, except its guidelines are harder to break because they are incorporated into the language syntax itself.

    In Delphi, if you leave Overflow Checking (and Range Checking) on all the time, then you really have to make big mistakes to breach memory safety with something like an out-of-bounds index access or mishandled managed types. These managed types, especially dynamic arrays and strings, are managed by the compiler in a very nice way. In C++ there are hundreds of implementations of such arrays and strings, and they all depend on some library implementation; while they are completely safe among types coming from the same library, they are useless when it comes to interacting with a different library or the OS or whatever, and here come the problems of casting and unsafe conversion.

    • Like 1

  21. As Lajos said, if they are only caches and will not be needed after a restart or shared with a different process, then put them in the system temp folder and don't share writing. As for hashing, 44 MB is nothing to worry about, but you can build a table that hashes parts, say every 64 KB; that way you only verify a part when you actually read (need) it.
