
David Heffernan


Posts posted by David Heffernan


  1. 3 hours ago, msd said:

    Google Gemini (please don't even give it a try).

    ChatGPT 3.5: Correct Translation

    ChatGPT 4.0: Pro Translation

    This is pretty epic, let's be honest. Take all the drudge out, and let us work on the brain stuff.


  2. 1 minute ago, David Schwartz said:

    So what DO you do in a case where, say, you might use an object to collect an accumulation of data that is provided by multiple sources? 

     

    It's often done for contextual state management over time...

     

    That's kind of a vague specification. For instance, is the data pushed or pulled? I would imagine that makes a difference. 


  3. 50 minutes ago, David Schwartz said:

    I encounter errors in (1) when the code is rather long and I declare an object but forget to add the Free to the Finally clause.

    I don't really understand this. I always write the try/finally immediately after the construction, and always bang the Free in immediately. Then I fill out the body. It's just a habit that you form so that you don't make such mistakes. And honestly, this is never one that is hard to debug because you just have a leak. And presumably you use leak detection tools so that you'd always find them immediately.

    52 minutes ago, David Schwartz said:

    For (2), I'd love to see what your approach is. I've not found an approach that is robust and works well everywhere, other than something that takes a lot of extra code to check things everywhere.

    I don't really understand this scenario either. If you have a reference to something that may or may not exist, you would test Assigned() before using it. And when you were finished, you'd set the reference back to nil once you'd destroyed it.

     

    The scenario that is tricky is when you have multiple references to an object.
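The two lifetime patterns described above can be sketched in Delphi. This is a minimal illustration, not the poster's actual code; `TWidget` and `TOwner` are hypothetical names.

```pascal
// Pattern 1: local lifetime. Write the try/finally skeleton and the
// Free immediately after the Create, then fill in the body.
procedure UseLocal;
var
  Widget: TWidget;
begin
  Widget := TWidget.Create;
  try
    // ... body goes here, written only after the skeleton is complete ...
  finally
    Widget.Free;
  end;
end;

// Pattern 2: a longer-lived reference that may or may not exist.
// Test Assigned before use; on destruction, set the reference back
// to nil (FreeAndNil does both steps in one call).
procedure TOwner.Refresh;
begin
  if Assigned(FWidget) then
    FWidget.Update;
end;

procedure TOwner.Clear;
begin
  FreeAndNil(FWidget);  // destroys the object and sets FWidget to nil
end;
```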


  4. 36 minutes ago, David Schwartz said:

    I'd like this because chasing down orphaned blocks and references to objects that got deleted early seem to take up more of my time than any other types of debugging issues.

It's odd that you say that, because I never have to debug issues like this. There are two simple patterns for lifetime, and they are so ingrained that nobody on my team ever makes a mistake with them.


  5. 10 minutes ago, David Schwartz said:

    However, I seriously doubt they're using Delphi to do so.

    Yeah, you are wrong. Such people exist. I am one.

    10 minutes ago, David Schwartz said:

    Likewise, I'm confident that there are plenty of problems that can be written without pointers and bit-whacking, but if you want to drop down to that level, then the program will definitely run much faster. 

    Not necessarily. No reason why pointer arithmetic should be faster than, for example, plain indexing of arrays.
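As a sketch of that point, these two loops express the same traversal, and a reasonable optimising compiler can emit comparable code for both; the pointer version buys nothing by itself:

```pascal
function SumIndexed(const A: array of Integer): Integer;
var
  I: Integer;
begin
  Result := 0;
  for I := 0 to High(A) do
    Inc(Result, A[I]);  // plain array indexing
end;

function SumPointer(const A: array of Integer): Integer;
var
  P: PInteger;
  I: Integer;
begin
  Result := 0;
  if Length(A) = 0 then
    Exit;
  P := @A[0];
  for I := 0 to High(A) do
  begin
    Inc(Result, P^);  // pointer walk: no inherent speed advantage
    Inc(P);
  end;
end;
```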

    11 minutes ago, David Schwartz said:

    And if you go all the way down to Assembly language, they'll run even faster.

    Again, good compilers are often better than humans.

     

    13 minutes ago, David Schwartz said:

    Sure, pointers are heavily used in the RTL and common libs that I think SHOULD be highly optimized. But the people responsible for them are the library vendors, not end-users.

I don't think pointers are used in the RTL any more than in other libraries, and I don't think they are used there for optimisation or performance.

     

As far as this whole memory safety debate goes, you can't exclude the RTL from it. That code executes in the very programs that are subject to attack.


  6. On 4/4/2024 at 3:39 PM, msd said:

    therefore, I need someone with expertise in Delphi to convert those libraries to native Delphi Pascal files

I'm not quite sure what you are saying here. Are you looking to hire somebody?


  7. 6 minutes ago, JonRobertson said:

    Such as weird UI issues that do not occur in 32-bit but do occur in 64-bit, which I've encountered more than once

    Unlucky for you. I've so far never encountered such a problem.

     

    The type of thing that forces me to debug 64 bit is when it's my DLL hosted in a 64 bit process, and I don't have a 32 bit version of the host.


  8. 20 hours ago, A.M. Hoornweg said:

If you'd ask me to attempt a thing like that, I'd first look for a suitable bidirectional RPC framework. Something having capabilities like COM. COM was used by IDEs such as Visual Basic to host (in-process) VBX controls, but it works across processes as well. The "helper processes" would only need to contain the design-time part of the components; at runtime they do nothing. Such an approach would move the design-time components out of the address space of the IDE. Also, the "bitness" of the helper processes and the IDE would be independent of each other.

    How are you going to have the components paint themselves on the design surface?


  9. 14 hours ago, JonRobertson said:

    I still have problems using the "integrated" (sort of) debugger with Win64 projects.

    Win64 debugger is known to be terrible. Perhaps slightly less so with more recent versions. I always debug 32 bit if at all possible. But sometimes you have a scenario where that's not possible. Unlucky if you do.


  10. 3 hours ago, A.M. Hoornweg said:

    My preference would be to have the component packages in separate helper processes.

     

    I can imagine this working fine for non visual components but what about components that paint themselves on the design surface? Cross process window hierarchies are basically untenable. 


  11. 13 hours ago, dennis12 said:

    Delphi memory manager is very fast and for small block allocation it has some parallelism built in.

    Automatic stack allocation is faster than all heap allocators, and has no thread contention. And Delphi's current heap allocator is known not to be scalable.

     

    I've measured this in a real world setting. Avoiding heap allocations when converting numbers (integer and real) to text makes a huge performance difference when creating files, e.g. XML, JSON, YAML. Even when single threaded. When multi-threading the impact is even greater, if using the default heap allocator. I personally use isolated per thread heaps to reduce contention. This doesn't eliminate it because every allocation/deallocation that requires a call to VirtualAlloc and friends has contention.
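A minimal sketch of how a number-to-text conversion can avoid the heap entirely (this is an illustration of the technique, not the poster's actual code): `IntToStr` returns a heap-allocated string on every call, whereas the classic `Str` standard procedure can write into a `ShortString` that lives on the stack, so there is no allocator traffic, and hence no contention, per conversion.

```pascal
// Heap-allocating: IntToStr allocates a new string on every call.
procedure WriteValueHeap(var F: TextFile; Value: Integer);
begin
  Write(F, IntToStr(Value));
end;

// Stack-only: Str writes the digits into a ShortString on the stack,
// so no heap allocation occurs during the conversion.
procedure WriteValueStack(var F: TextFile; Value: Integer);
var
  Buf: ShortString;
begin
  Str(Value, Buf);
  Write(F, Buf);
end;
```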


  12. 5 minutes ago, darnocian said:

     

    Ok. I was just hoping for some sort of example, as the SameValue/isZero is a generalised approach. 

     

    So back to the original post, I guess we can just take it that there are some issues with the new version of the compiler, and will need to log some issues with QP once it is back again.

    Sometimes you do compare up to tolerance. Sometimes that's the right way to do it. But you need to know how to choose the tolerance. 

     

And it's definitely wrong to say that one must never compare exactly. Sometimes you can. Although the Delphi RTL works against you: for instance, you'd hope to be able to convert floats to text and back and get the same value. In Delphi, using the RTL functions, this isn't always the case. Embarcadero have known this for more than a decade and not done anything yet.
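To illustrate the two points above with a Delphi sketch: exact comparison is perfectly sound when two values arise from identical operations, while a text round-trip through the default RTL formatting may not preserve the value (a round-trip-safe formatter would need to emit enough significant digits, e.g. up to 17 for a Double).

```pascal
var
  A, B: Double;
  S: string;
begin
  A := 0.1 + 0.2;
  B := 0.1 + 0.2;
  Assert(A = B);  // exact comparison is fine: identical operations
                  // deterministically produce identical bit patterns

  S := FloatToStr(A);   // default formatting may drop significant digits
  B := StrToFloat(S);
  // B = A is NOT guaranteed here; that is the round-trip problem
  // described above.
end;
```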

     

    I just don't think they have the resources required to prioritise this given all their other commitments. 


  13. 21 minutes ago, darnocian said:

    I'm still interested to know more about any alternatives to using epsilon for comparison. I've only seen subtle variations on the same theme, and always keen to learn something new.

    It depends on what you are comparing, what algorithms are involved etc. 


  14. 3 hours ago, Attila Kovacs said:

I read something on SO from you but I can't find it; it was about the whole RTL float handling being rubbish, or something like that. Not sure about the details.

Well, that's true. Lots of thread safety issues with how it handles the floating point control state. But that's not the same as having a math lib.


  15. 7 hours ago, darnocian said:

The rationale for it is when it comes to rounding due to the way the float is represented with limited bits... there will be mismatches. You may be lucky and comparison may work, but the epsilon comparison approach is a failsafe method to cater for rounding issues in the process of various calculations.

    I know how floating point math works, it's been what I've done for a living for the past 30 years. 

     

    Using arbitrary epsilon values is not failsafe and relies on luck. 
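A sketch of why a fixed, arbitrary epsilon is not failsafe: it is simultaneously too tight at large magnitudes (where adjacent representable doubles are far apart) and too loose at tiny magnitudes (where it dwarfs the values themselves). The tolerance `1E-9` here is an arbitrary example, not a recommendation.

```pascal
const
  Eps = 1E-9;  // an arbitrary fixed tolerance
var
  A, B: Double;
begin
  // Large magnitudes: near 1E18 the gap between adjacent doubles is
  // about 128, so values only a couple of ULPs apart fail the test.
  A := 1E18;
  B := A + 256;
  Assert(Abs(A - B) > Eps);  // reported as "different"

  // Tiny magnitudes: Eps dwarfs the values, so a number and its
  // double pass the test.
  A := 1E-12;
  B := 2E-12;
  Assert(Abs(A - B) < Eps);  // reported as "equal"
end;
```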
