David Schwartz

Members
  • Content Count: 124
  • Joined
  • Last visited
  • Days Won: 1

David Schwartz last won the day on February 6
David Schwartz had the most liked content!

Community Reputation: 26 Excellent

  1. David Schwartz

    Any advice when to use FileExists?

    Historically speaking, the old DOS FAT file system was notoriously inefficient when it came to looking up filenames. The original FAT directory segments held something like 63 8.3-format filenames, and the last entry was a pointer to the next segment. Each time you asked for a file, the FAT file system would begin a linear search at the first directory segment and scan through each one to the very end of the list before it knew whether the file existed. It was always faster to read the list of files into a memory buffer first, sort it, and search that before probing the file system.

    But the real solution lay in splitting things up so you didn't have more than a few directory segments anywhere. That usually led to two directory levels: one that was simply A to Z, and a second containing the folders or files that began with that letter. Sometimes it helped a lot to go down an additional level.

    Also, FAT directory segments were filled from front to back. If a file was deleted, its slot was cleared, and a new filename added to the folder would take the first available slot. So putting thousands of files ('n') into a FAT-based folder meant a lookup took on average n/2 filename comparisons, whether the folder happened to be ordered or unordered. Long filenames added further delays if you used them. I'm not sure what FAT32 did to improve this, if anything. Windows introduced caching of directory segments that sped things up considerably.

    From what I understand, NTFS introduced something closer to what Unix does and resolved this inefficiency somewhat, although I never did anything that pushed the limits of an NTFS-based file system the way I used to push old FAT-based file systems. With faster CPUs and more extensive caching in Windows, the problem seemed to fade out. Part of the solution was to use FindFirst/FindNext, even if that meant building a list in memory to search against before asking the file system for anything.
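
    For what it's worth, here's a minimal sketch of that FindFirst/FindNext approach -- the helper name, folder path, and the TStringList-based cache are just illustrative, not code from any particular app:

        uses
          System.SysUtils, System.Classes;

        // Hypothetical helper: list a folder's filenames once, then test for a
        // name in memory instead of hitting the file system for every check.
        function BuildFileNameCache(const AFolder: string): TStringList;
        var
          SR: TSearchRec;
        begin
          Result := TStringList.Create;
          Result.Sorted := True;          // sorted list -> binary search in IndexOf
          Result.CaseSensitive := False;  // Windows filenames are case-insensitive
          if FindFirst(IncludeTrailingPathDelimiter(AFolder) + '*.*', faAnyFile, SR) = 0 then
          try
            repeat
              if (SR.Attr and faDirectory) = 0 then
                Result.Add(SR.Name);
            until FindNext(SR) <> 0;
          finally
            FindClose(SR);
          end;
        end;

        // Usage (names are made up):
        //   Cache := BuildFileNameCache('C:\Data');
        //   if Cache.IndexOf('report.txt') >= 0 then ...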
  2. David Schwartz

    Efficient list box items with Frames

    I would definitely not use a TListbox for that purpose. Check out TFlowPanel or TGridPanel -- I think one of them would be best suited for what you're looking for, maybe placed on a TScrollBox. https://stackoverflow.com/questions/3254026/how-can-i-scroll-the-content-of-a-tflowpanel

    I don't know off-hand if I'd use something derived from a TFrame or a TPanel for the inner blocks, but I'd probably try the TPanel first, just because TFrames can be a PITA to deal with at design time.

    There's also this: https://torry.net/pages.php?s=79 -- look for "DictaSoft Layout Pack v.1.0".
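
    Something along these lines, roughly -- just a sketch with made-up names and sizes, showing a TFlowPanel inside a TScrollBox holding one TPanel per item:

        uses
          System.SysUtils, Vcl.Forms, Vcl.Controls, Vcl.ExtCtrls;

        // Sketch only: build a scrollable flow of item panels at runtime.
        procedure AddItemPanels(AOwner: TForm; AScrollBox: TScrollBox; ACount: Integer);
        var
          Flow: TFlowPanel;
          Item: TPanel;
          i: Integer;
        begin
          Flow := TFlowPanel.Create(AOwner);
          Flow.Parent := AScrollBox;
          Flow.Align := alTop;
          Flow.AutoSize := True;   // let it grow so the scroll box can scroll it
          for i := 1 to ACount do
          begin
            Item := TPanel.Create(AOwner);
            Item.Parent := Flow;
            Item.Width := 150;
            Item.Height := 80;
            Item.Caption := 'Item ' + IntToStr(i);
          end;
        end;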
  3. David Schwartz

    Any advice when to use FileExists?

    The Windows file system segments get cached. The first time you call FileExists, it might take a little bit of time to load the segment from disk if it's not cached already, but after that it will run very fast unless you load so much stuff into memory in between that it gets flushed out of the cache. It's not something I'd even worry about. In fact, unless your files are really huge relative to free memory, they'll be cached as well. I don't know if the underlying Windows logic loads them in segment-sized buffers (4k or whatever) or tries to suck the entire file into memory at once when it detects there's sufficient free memory. Windows has gotten pretty darn good at managing its virtual memory. Way back in DOS days we worried about this stuff, but when you've got gigabytes of free memory and you're loading files smaller than 1MB or so, it's mostly irrelevant.

    I've got an app I built a few months ago that loads up all of the *.pas and *.dfm files it finds in an entire file tree ... several hundreds of them. Some are over a MB in size, but the average is around 80KB. It takes less than a second to load them all from my SSD into memory streams and attach them to the .Data property on a TTreeview. Then it takes 30 seconds or so to parse them all, looking for whatever I want using regular expressions. REs aren't the fastest things to use for parsing data; they're just a heck of a lot easier than building a dedicated parser. Needless to say, as far as I can tell, the vast majority of the run-time overhead is taken up by the memory manager allocating new objects, copying data identified by the RE parser into them, and adding them to other containers. And in most cases, the biggest delays are caused by the VCL UI components, usually because I failed to call DisableControls on visual components while loading them up. Even when I do that, UI progress updates STILL cause the lion's share of the processing delays! I mean, I can cut my parsing time in half if I don't tell the user what's going on, but that's poor UX design. Most people would rather be kept updated on long-running processes than have them run in the dark with no idea how long they should take.

    I cannot imagine how old and slow a computer you'd have to be running for calls to FileExists to take a noticeable amount of overhead. Maybe a 400MHz Celeron with 256MB of RAM and a 20GB HDD or so -- we're talking something from the late 90's. I know there are small SBCs like that still being used for process-control apps, but even Raspberry Pis have more computing power than that!

    I'd urge you to focus on the efficiency of the value-add you're providing, not waste time second-guessing the imagined inefficiencies of the OS. Get your program working, then look at where it's spending the most time. Most likely, about 98% of the actual execution time is going to be spent in your code or in the UI (e.g., VCL), not the OS interfaces -- unless your app is doing a HUGE amount of interaction with the file system. Even DBMS systems exhibit amazingly low file system overhead most of the time.
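
    The gist of that loading step, as a rough sketch (the unit, helper, and control names here are illustrative, not the actual code): memory streams hung off TTreeView nodes, with BeginUpdate/EndUpdate wrapped around the bulk load to keep the UI out of the way.

        uses
          System.SysUtils, System.Classes, System.IOUtils, Vcl.ComCtrls;

        // Rough sketch: load every .pas file under ARoot into a memory stream
        // and attach it to a tree node via .Data. (Free the streams when done.)
        procedure LoadSourcesIntoTree(ATree: TTreeView; const ARoot: string);
        var
          FileName: string;
          MS: TMemoryStream;
          Node: TTreeNode;
        begin
          ATree.Items.BeginUpdate;   // suppress repaints during the bulk load
          try
            for FileName in TDirectory.GetFiles(ARoot, '*.pas', TSearchOption.soAllDirectories) do
            begin
              MS := TMemoryStream.Create;
              MS.LoadFromFile(FileName);
              Node := ATree.Items.AddChild(nil, ExtractFileName(FileName));
              Node.Data := MS;
            end;
          finally
            ATree.Items.EndUpdate;
          end;
        end;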
  4. David Schwartz

    Any advice when to use FileExists?

    If you're worried about the parsing taking a relatively long time, then simply create a TMemoryStream, call LoadFromFile, and parse that instead.
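
    Something like this, as a minimal sketch (the helper name and the UTF-8 encoding are assumptions): pull the whole file into memory once, then parse the in-memory copy instead of touching the disk again.

        uses
          System.SysUtils, System.Classes;

        // Minimal sketch: load the file into a memory stream, then read it out
        // as a string for parsing. (Encoding assumed to be UTF-8.)
        function LoadFileText(const AFileName: string): string;
        var
          MS: TMemoryStream;
          Reader: TStreamReader;
        begin
          MS := TMemoryStream.Create;
          try
            MS.LoadFromFile(AFileName);
            MS.Position := 0;
            Reader := TStreamReader.Create(MS, TEncoding.UTF8);
            try
              Result := Reader.ReadToEnd;   // now parse Result however you like
            finally
              Reader.Free;
            end;
          finally
            MS.Free;
          end;
        end;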
  5. David Schwartz

    Wow, first time using repeat ... until

    That's true. I guess I spoke too hastily. That said, I rarely use repeat-until in Delphi, and I wish the for loop were more like C's for loop. That's it. 🙂
  6. David Schwartz

    Wow, first time using repeat ... until

    I don't really care what it's called. Every language has its quirks. Why does Pascal have three distinct forms of loops? Seems like overkill. And each one has its own limitations. What would you have called it? I'd nominate either "loop" or "repeat" -- but this is ONLY for the C/C++ language.
  7. David Schwartz

    Wow, first time using repeat ... until

    Sorry, I'm not sure what your point is. I like the way the for loop is structured in C/C++ because all three relevant pieces of state are right there: the initializer, the condition to continue or stop, and the iterator applied at the end of each pass. Perhaps it's inconsistent syntactically, but I find it much more consistent with how I think of loops working, since it completely controls the statement or block following it.

    I can't tell you how many times I've forgotten to put the iterator at the bottom of the loop block in Delphi. It's pretty damn obvious when it's missing from a C/C++ for statement, and the compiler can even flag it for you. But in Delphi you get no help from the compiler, and it's not even obvious something might be missing just by looking at the code. And if you happen to put it in the wrong place in the loop block, it can cause you no end of headaches. The C/C++ approach is much safer, and it's easier to see when something is missing or wrong. There's also the problem that the Delphi approach requires you to reference a loop variable within the block even when it isn't otherwise needed.
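
    To illustrate (a trivial made-up example): in Delphi the three pieces end up in three different places, and the increment at the bottom is the easy one to forget.

        // Initializer, test, and increment live in three different places;
        // leave out the Inc(i) and the loop never terminates.
        procedure PrintAll(const Items: array of string);
        var
          i: Integer;
        begin
          i := 0;                      // initializer
          while i < Length(Items) do   // condition
          begin
            Writeln(Items[i]);
            Inc(i);                    // "iterator" -- easy to forget or misplace
          end;
        end;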
  8. David Schwartz

    Wow, first time using repeat ... until

    Back to the original topic regarding repeat ... until: I use them very rarely myself. I really miss the for statement from C/C++ in Delphi, because it lets you specify the initial condition as well as the test and the increment all on the same line. The problem with most loops is that you need to do something first to "prime the pump," so to speak, before you can get the while or repeat loop running. A classic one is iterating through the results of a DB query:

        qry.Open;
        . . .
        qry.First;
        while not qry.EOF do
        begin
          . . .
          qry.Next;
        end;

    In C/C++ that could be written as:

        for (qry.Open(); !qry.Eof(); qry.Next()) {
          . . .
        }

    I think this is much nicer.
  9. David Schwartz

    Wow, first time using repeat ... until

    It's still GPF in my mind! I have no idea when they renamed it. IIRC, GPFs were frequently accompanied by BSODs, which still happen every now and then. (I'll let someone ask what they are... )
  10. That's certainly true. What I had in mind was DI performed through Constructor Injection as well as Getter/Setter (or Property) Injection, rather than the inheritance style of interface usage. Mark Seemann talks about the inevitable tendency for multiple parameters passed to a class via CI to turn into a Parameter Object that evolves into a standalone record/struct or class, so he suggests short-circuiting the inevitable by using a Facade to pass one or more parameters through a separate object for CI. This could also be managed with an Interface. Getter/Setter/Property injection might not fare so easily, unless you used this approach to pass in a Facade class with lots of values; usually, though, they're one-to-one.
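
      In Delphi terms, a bare-bones sketch of what I mean (all names here are made up for illustration): the dependency comes in through the constructor, and several related values ride in on a single facade/parameter object instead of a long parameter list.

        type
          // Hypothetical facade: bundles what would otherwise be several
          // constructor parameters into one object.
          TReportSettings = class
          public
            OutputDir: string;
            PageLimit: Integer;
          end;

          // Hypothetical dependency, expressed as an interface.
          IReportWriter = interface
            procedure WriteLine(const AText: string);
          end;

          TReportBuilder = class
          private
            FWriter: IReportWriter;      // constructor-injected dependency
            FSettings: TReportSettings;  // facade instead of many parameters
          public
            constructor Create(const AWriter: IReportWriter; ASettings: TReportSettings);
          end;

        constructor TReportBuilder.Create(const AWriter: IReportWriter; ASettings: TReportSettings);
        begin
          inherited Create;
          FWriter := AWriter;
          FSettings := ASettings;
        end;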
  11. David Schwartz

    Tool to convert form components to run-time and vice-versa

    I'm aware of GExperts, but I thought it was something else.
  12. It wasn't my choice, and I personally don't care. It's just what I've been given to work with. Upper Management makes all sorts of decisions regardless of my input, for reasons that are opaque to me, and I've learned it's safer to just accept their decisions.
  13. I seem to recall seeing someone post about a tool they wrote that converts components dropped on a form into equivalent code that creates the same components at runtime, and possibly vice versa. IIRC, the context was a discussion about the automated testability of forms, and/or avoiding problems with source control tools. I'd like to get hold of that tool for some testing.
  14. Your example highlights the fallacy of your assertion. In fact, inheritance locks you into whatever resources are required by the parent class -- in this case, Indy. FTP is a very generic thing. Why inherit from a specific concrete implementation that locks you into its view of the world for ever and ever? I think a far better approach would be to "inherit" from an abstract base class (a.k.a. an "interface") that would allow the use of virtually ANYTHING that supports generic FTP features, including the one you happen to prefer today. But at that point, it's about 50/50 whether that approach is better than using composition (HAS-A) to include an API that offers the same abstract interface, possibly as a drop-on-the-form component.

      Unless your goal is creating a bigger, better, faster, or more specialized version of the base class, inheriting from a non-abstract base is silly. Without multiple inheritance, you're locked into specializing or expanding a single concrete base class anyway. So if you're building a class that EMPLOYS FTP, for example, but IS NOT INTENDED to BE a "better" FTP service, then inheritance is clearly a very poor choice. And inheriting from an abstract class presented as an Interface is simply a way of mixing a single type's namespace into your component's namespace. It's effectively just "anonymous composition," because you refer to the methods and properties as if they're part of your class, without having to name the object containing them. Finally, you can do dependency injection just as easily without interfaces.
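
      To make the HAS-A version concrete, here's a rough sketch -- the interface and class names are made up, and this is not any actual Indy or library API:

        type
          // Illustrative abstraction only -- not an actual Indy or library interface.
          IFtpClient = interface
            procedure Connect(const AHost, AUser, APassword: string);
            procedure Upload(const ALocalFile, ARemotePath: string);
            procedure Disconnect;
          end;

          // HAS-A: this class *uses* FTP; it is not trying to *be* a better FTP
          // client, so it takes the abstraction through its constructor and any
          // implementation (Indy-based or otherwise) can be plugged in behind it.
          TNightlyUploader = class
          private
            FFtp: IFtpClient;
          public
            constructor Create(const AFtp: IFtpClient);
            procedure Run(const ALocalFile, ARemotePath: string);
          end;

        constructor TNightlyUploader.Create(const AFtp: IFtpClient);
        begin
          inherited Create;
          FFtp := AFtp;
        end;

        procedure TNightlyUploader.Run(const ALocalFile, ARemotePath: string);
        begin
          FFtp.Connect('ftp.example.com', 'user', 'secret');  // made-up credentials
          try
            FFtp.Upload(ALocalFile, ARemotePath);
          finally
            FFtp.Disconnect;
          end;
        end;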
  15. Mida is a tool that was written to simplify translating VCL apps to run under FireMonkey. It evolved to the point where the V5.x Studio edition started supporting arbitrary rewriting of components without requiring VCL->FMX. I'm looking at using it to translate a bunch of VCL apps that use Allround Automations' DOA Oracle components to use PgDAC components instead.

      It looks like it should do the trick, but it requires a file in INI format that isn't clearly documented. They have some examples for BDE, FD, and a couple of others, but I can't find any detailed explanation of how to go about creating the .mida file for something else, or even how to set one up. Are they built entirely by hand? Or does the Studio version let you build them somehow?

      They're a little slow responding to support requests; the videos on YouTube haven't been updated since V2 was released; and I can't find any help files or documentation anywhere. (I have a licensed version of V5.6 Studio.) So I thought I'd ask here and see if anybody has any experience they can share.