Posts posted by Tom Chamberlain


  1. We use a set of shared resource files, one for the EXEs and another for the DLLs (the file type changes), and leave the IDE out of it.  We have our own mass-build process for production (the only time we care about the version numbers and such) with an option to update the version number; it changes and rebuilds the resource files before starting any compiles, so we always have the correct versions for production.
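
    Roughly, each project then pulls in the shared resource instead of the IDE-generated version info.  A minimal sketch with assumed file names (version info is switched off in the project options so the two do not collide):

    program MyApp;   // illustrative project

    {$R *.res}                                   // icon/manifest; version info disabled in options
    {$R 'SharedVersion.res' 'SharedVersion.rc'}  // shared VERSIONINFO resource, regenerated by the
                                                 // build script; the EXE and DLL variants differ only
                                                 // in the FILETYPE value (VFT_APP vs VFT_DLL)

    begin
    end.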


  2. 12 hours ago, dormky said:

    Well, there's the client's own IT service (do not ask me why the IT service can't perform this simply backup, it's beyond me) that can access everything. Apparently we need to tell them what to install and how to use it.

    I'm honestly going to leave this request unanswered and hope people forget about it.

    You're joking, right?  Is your database so large that it is not in a virtual server (hopefully on redundant hardware) with at least some sort of enterprise backup solution at the VM level?  Is the client so small or so cheap they do not understand this is not a software vendor's job?  This sounds like a 'sales job' making you come up with a solution for a non-problem, or a completely incompetent IT service.  I am betting on the latter.

     

    Sarcasm:

     

    Tell them they need an SAS 18TB LTO tape drive attached to the server and something old-school like Retrospect or Backup Exec installed on the server for file-level backups, plus about 10-12 hours of downtime each night to do the backups.  Don't forget to mention that someone has to swap that tape every day and they will need at least 100+ tapes for dailies, weeklies and monthly offsite storage.  Maybe you should offer to pick up the backups every morning and keep them safe offsite for them, for an extra fee of course.

     

    Sorry, I slipped into a 1990s rant there; I'm just glad I do not have to deal with this type of thing anymore.  I am not making fun of anyone who still has to deal with tape backups; I know their value to large organizations and disaster recovery plans.

     

     


  3. We have also been using ReportBuilder Enterprise for 15+ years, since QuickReports was not capable of some things we wanted to do many years ago.  Hundreds of reports later, and that doesn't count the 'End User' reports the users have created themselves.  When COVID hit, switching to emailing reports and invoices saved so much time and paper; we have gone paperless and are not going back.  We have not had any performance issues (that were not self-inflicted), and when we have had technical issues, support is fast and responsive with patches.  I would also recommend ReportBuilder.


  4. 15 hours ago, Willicious said:

    It's the idea of instances-of-classes that keeps catching me out. If a class is defined somewhere, it should be callable as that class. Why do I have to redefine/redeclare it in a different unit? Why not this instead:

    Unit 1
        Class1

              PropertyA

     

    Unit 2

        uses

             Class1

     

    if Class1.PropertyA etc...
     

     

    The one thing no one has pointed out:

     

    Unit1

      TClass1

         PropertyA

     

    Unit2

      Uses

        Unit1 {the unit name, not the class inside the unit name}
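
    Put more concretely, a minimal sketch (illustrative names, two units shown together): the uses clause names the unit, and you still need an instance of the class before you can touch its property, unless the member is declared as a class (static) member.

    unit Unit1;

    interface

    type
      TClass1 = class
      private
        FPropertyA: Integer;
      public
        property PropertyA: Integer read FPropertyA write FPropertyA;
      end;

    implementation

    end.

    unit Unit2;

    interface

    implementation

    uses
      Unit1;            // the unit name, not the class declared inside it

    procedure Example;
    var
      Obj: TClass1;     // an instance of the class from Unit1
    begin
      Obj := TClass1.Create;
      try
        if Obj.PropertyA = 0 then
          Obj.PropertyA := 1;
      finally
        Obj.Free;
      end;
    end;

    end.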

     

    Another useful link is https://docwiki.embarcadero.com/RADStudio/Alexandria/en/Programs_and_Units_(Delphi)

     


  5. If you own D11.3 Enterprise or Architect and all your apps are Delphi, then you can create your own solution with DataSnap.  You can use the TDSClientCallbackChannelManager to register client applications with a DataSnap server.  Your client applications make function/procedure calls to the DataSnap server using regular types and classes to do what you want; DataSnap will do all the marshalling for you, both ways.  We use such processes to track concurrent usage with a 'user' object we pass that contains the user name, machine name, IP, database they are connected to, etc.  We have created our own 'message' structure that allows us to send notifications to individual clients or to all registered clients via the callback, or to issue 'commands' to the clients to close the application gracefully if the user forgets to log out at the end of the day.  The callback from the server sends a TJSONValue, so you can build a simple or a very complex message architecture.  (Interacting with the client GUI requires some TThread.Queue calls, but this will be true of any solution.)
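
    On the client side it boils down to roughly this (a sketch; the callback class and channel name are illustrative, the unit names are from memory, and the exact RegisterCallback overload can vary between Delphi versions):

    uses
      System.Classes, System.JSON, Vcl.Dialogs, Data.DBXCommon;

    type
      // Callback the DataSnap server invokes on this client
      TServerMessageCallback = class(TDBXCallback)
      public
        function Execute(const Arg: TJSONValue): TJSONValue; override;
      end;

    function TServerMessageCallback.Execute(const Arg: TJSONValue): TJSONValue;
    var
      Msg: string;
    begin
      Msg := Arg.ToJSON;            // our own message structure lives inside the JSON
      TThread.Queue(nil,            // callbacks arrive on a background thread,
        procedure                   // so marshal any GUI work to the main thread
        begin
          ShowMessage(Msg);         // stand-in for updating the real GUI
        end);
      Result := TJSONTrue.Create;   // acknowledge the callback
    end;

    // Registration, e.g. right after login (component dropped on a data module):
    //   dmDataSnap.DSClientCallbackChannelManager1.RegisterCallback(
    //     'Messages', TServerMessageCallback.Create);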

     

    If your apps use multiple languages, then something like ActiveMQ, ZeroMQ, RabbitMQ, etc. would be a better solution, but if your apps are all Delphi you may already have the tools/tech you need.  That said, if you have more than 5,000 active clients I would not use DataSnap but would look for something that scales better.  We are small and only have 20-30 active clients at most, so DataSnap is a simple and flexible solution for us.

     

     


  6. 4 hours ago, Lars Fosdal said:

    CoInit/CoUnInit -  Prior to starting DB components, and after the last disconnect

    I do this because I have zero control over what a DB driver does with regards to COM.

    Interesting.  We only call CoInit/CoUninit in new threads, before/after new connections are created/freed, never in the main (GUI) thread, which has DB queries and data-aware components (too many, but it is 20+ year old code) but only one shared DB connection for the GUI in a data module.  Every thread inside our services does get CoInit/CoUninit calls, though.  We have only ever used M$/SQL; we used the BDE for the first 5 years and switched to ADO in 2003 or 2004, which was sufficient for our needs until changes to the M$/SQL date field types forced us to make the switch to FireDAC and the latest ODBC SQL drivers.  We have had some learning pains with FireDAC, but nothing like these issues.
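
    For reference, our per-thread pattern looks roughly like this (a sketch with illustrative names, shown with FireDAC; the same shape applied to our ADO threads):

    uses
      Winapi.ActiveX, System.Classes, FireDAC.Comp.Client;

    type
      TWorker = class(TThread)
      protected
        procedure Execute; override;
      end;

    procedure TWorker.Execute;
    var
      Conn: TFDConnection;
    begin
      CoInitialize(nil);                            // COM init before any DB work in this thread
      try
        Conn := TFDConnection.Create(nil);
        try
          Conn.ConnectionDefName := 'MSSQL_Prod';   // illustrative connection definition
          Conn.Connected := True;
          // ... run the queries owned by this thread/connection ...
        finally
          Conn.Free;                                // free the connection first
        end;
      finally
        CoUninitialize;                             // then release COM, last thing in the thread
      end;
    end;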

     

     


  7. In our 24+ year old system we have a central TDataModule with the connection and shared data access for things used everywhere, like customer information retrieval.  Then each form has form-specific query/dataset/command and datasource components dropped on it (no tables) for the form-related data.  We don't have a sea of DB components on most forms, but if I could go back 25 years knowing what I know now, I would do it all through objects and/or some type of ORM and have no DB components anywhere.  Now, where is my TARDIS?
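
    In outline the split looks something like this (a sketch, illustrative names only):

    uses
      System.Classes, Vcl.Forms, Data.DB, FireDAC.Comp.Client;

    type
      // One data module shared by the whole application
      TdmMain = class(TDataModule)
        FDConnection1: TFDConnection;    // the single shared GUI connection
        qryCustomerLookup: TFDQuery;     // customer information retrieval used everywhere
      end;

      // Each form only carries its own form-specific data access
      TfrmOrders = class(TForm)
        qryOrders: TFDQuery;             // Connection -> dmMain.FDConnection1
        dsOrders: TDataSource;           // DataSet    -> qryOrders
      end;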


  8. Do not waste your time; find a new job and leave them in the past.

    I got reprimanded once for replacing 2000+ lines of code that looked for values by doing 'if Edit1.Value = x' against about 200 edit boxes named Edit1, Edit2, Edit3..Edit200 on a form (don't ask); it was just one huge cut-paste, change-the-component-name procedure.  I reduced it to about 10 lines using FindComponent in a loop (roughly the sketch below), producing the same results.  I walked out of that job in less than a month after I figured out they (staff and management) refused to learn anything new. :classic_cool:
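
    Reconstructed from memory (illustrative names; it lives in a form method so FindComponent sees the form's own components):

    procedure TMyForm.FindMatchingEdits(const SearchValue: string);
    var
      i: Integer;
      Edit: TEdit;
    begin
      for i := 1 to 200 do
      begin
        Edit := FindComponent('Edit' + IntToStr(i)) as TEdit;   // Edit1..Edit200
        if Assigned(Edit) and (Edit.Text = SearchValue) then
        begin
          // ... the same handling the 2000+ pasted lines used to do
        end;
      end;
    end;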

     

    Do not suffer fools.


  9. It works better if you can break it down and use a key for the child objects also:

     

    TChildList = TObjectDictionary<UniqueKey, TMyClass>;

    MyList: TObjectDictionary<String, TChildList>;

     

    UniqueKey is a priority/sequence number in our system for each unique child; we use an integer.  This makes searching and processing child objects simple.

    var
      localChild: TMyClass;

    if MyList.ContainsKey(SearchString) then
      for localChild in MyList[SearchString].Values do
      begin
        // ... work with localChild
      end;

    and

    if MyList.ContainsKey(SearchString) then
      if MyList[SearchString].ContainsKey(UniqueKey) then
      begin
        localChild := MyList[SearchString].Items[UniqueKey];
        // ... work with localChild
      end;
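
    The construction side is worth showing too; a minimal sketch, assuming you want the dictionaries to own and free their values (doOwnsValues comes from System.Generics.Collections):

    var
      ChildList: TChildList;
    begin
      // With doOwnsValues the outer dictionary frees each TChildList, and each
      // TChildList frees its TMyClass children, so one MyList.Free cleans up everything.
      MyList := TObjectDictionary<string, TChildList>.Create([doOwnsValues]);

      if not MyList.TryGetValue(SearchString, ChildList) then
      begin
        ChildList := TChildList.Create([doOwnsValues]);
        MyList.Add(SearchString, ChildList);
      end;
      ChildList.Add(UniqueKey, TMyClass.Create);
    end;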

     


  10. 1 hour ago, Remy Lebeau said:

    In which case, I wouldn't even bother with calling Application.Terminate(), just skip calling Application.Run() instead:

    
    begin
      Application.Initialize;
      Application.ShowMainForm := False;
      Application.CreateForm(TfrmMainForm, frmMainForm);
      if frmMainForm.DoWeShowTheForm then
      begin
        Application.ShowMainForm := True;
        Application.Run;
      end;
    end.

     

    I thought there was a problem with skipping Application.Run; that is why the call to Application.Terminate is in there.  It puts the terminate message on the queue and Application.Run processes it correctly, so you get a clean exit.  My 'DoWeShowTheForm' is actually a splash screen with a login prompt from a dynamically loaded DLL, so maybe that is why I did it with Application.Terminate; it was written many years ago.  I did try it without the terminate, skipping the run, and it does work and FastMM did not report any issues, but I still put the terminate back in :classic_smile:


  11. You could change the DPR file like so:

    begin
      Application.Initialize;
      Application.ShowMainForm := False;
      Application.CreateForm(TfrmMainForm, frmMainForm);
      if frmMainForm.DoWeShowTheForm then
       Application.ShowMainForm := True
      else
       Application.Terminate;
      Application.Run;
    end.

    Then in the form's OnCreate set a flag and check it in the DPR.  This never shows the form if you want to terminate the program, the form's OnDestroy still runs so you can do any cleanup, and there are no memory leaks.
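
    The form side of that flag could look like this (a sketch with assumed names; in our real code the decision comes from a splash/login DLL rather than a simple check):

    uses
      System.SysUtils, Vcl.Forms;

    type
      TfrmMainForm = class(TForm)
        procedure FormCreate(Sender: TObject);
      private
        FShowForm: Boolean;
      public
        property DoWeShowTheForm: Boolean read FShowForm;
      end;

    procedure TfrmMainForm.FormCreate(Sender: TObject);
    begin
      // Decide here whether the application should continue,
      // e.g. a license check, a login prompt, a command-line switch...
      FShowForm := not FindCmdLineSwitch('silent');   // illustrative check
    end;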


  12. 16 hours ago, Oberon82 said:

    And how do you manage with delphi licence?

    We have named-user licenses for Delphi and never run the same VM in two places at once.  A Delphi license is only installed on one VM, used by only the one named user.  We (there are only two of us) each keep a Windows 10 VM with Delphi 10.4 and another VM with Delphi 11.2.  In the event there is a problem with our dev machine, we restore the backup copy of the VM to another machine and fire it up.  It is still the same VM and license; we treat them just like physical machines.


  13. I have been using VirtualBox for almost 10 years, always with a clean install of Windows and a single version of Delphi per VM.  I use a shared folder with the host for all the source; only the host talks to the repos.  Delphi version jumps like 10 to 11 always get a new VM (enterprise license for Windows) with a new repo chain for testing builds on the new version, just in case.  I make monthly backups of the VM images and use the CloneVDI tool to compact the image/drive to save space on the host.  And I never use the VMs to access the internet outside of Windows updates and trusted third-party update tools that require access from the installed VM.  Make sure you do not let your anti-virus scan the folder where the VM images live.  This makes laptop replacement easy, and with a couple of hand tweaks to the .vbox file they can run (not at the same time) on my Linux machine if needed.  I run multiple VMs all day long: a dev VM and a SQL VM with services to simulate the user environment.


  14. 1 hour ago, FPiette said:

    The solution I depicted with TCP socket can as easily applied when using a server dedicated to be a "exchange" server. MQTT is a complex product and Delphi implementation is likely to depend on a number of other product.

    Whatever solution you select, I advise you to only use products for which you'll get FULL SOURCE code and you'd never use prebuilt libraries or DCU but instead recompile yourself everything. This is the only way to be sure that in the future you'll be able to have your own product survive to the death of the external product you used.

     

    I agree.  If you have the Enterprise version of Delphi you could use DataSnap to create your own "exchange" server; that way you control both sides.  We use the DataSnap callback feature to register and unregister clients for bidirectional communication via a TJSONValue, so we have a dynamic message but still have structure.  We validate concurrent-usage licensing, request "check-ins", send user messages and can even terminate the clients.  There is some DataSnap overhead, but with a few clicks and very little code you can add authentication, compression and encryption.  (Encryption does require the deployment of two OpenSSL DLLs.)

     

    There is nothing wrong with MQTT or other options, they are all good.

     

     


  15. Do not forget to check the indexes on the M$/SQL server tables; missing indexes, or WHERE clauses and JOINs on fields without indexes, are also performance killers.  If you have blob/varchar(max) fields, never do a SELECT * FROM.  If you have any tables with millions of records and lots of indexes, rebuilding the indexes could help a little, but that can take time and is best done in off-hours.  You may also want to look at the database Properties -> Options -> Legacy Cardinality Estimation value; if it is ON, try turning it OFF and see if that makes any difference (in test, of course).

     


  16. 2 hours ago, PeterBelow said:

    For database BLOB fields you generally cannot use a generic TStream, you need to ask the BLOB field to create a TBlobStream descendant specific for the database engine in question. In your case you first create the blob stream for the local database, store the field content in it, the create a blob stream for the server database, copy the content of the first stream to it, then load the contents into the server db field.

    This is how it is supposed to work (for FireDAC and ADO): you create a blob stream for the database engine (TFDBlobStream here), giving it the DB field with write access, then save into that blob stream.  (And yes, free the blob stream before the Post.)  I'm using rich text here with embedded fonts and graphics, so a PNG is not any different.

    var
      BlobStream: TFDBlobStream;
    ....
      dbQueryObj.Append;

      ....

      BlobStream := TFDBlobStream.Create(dbQueryObj.FieldByName('Notes') as TBlobField, bmWrite);
      try
        rveEditor.SaveRTFToStream(BlobStream, False);
      finally
        BlobStream.Free;
      end;
      dbQueryObj.Post;
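
    For a PNG the pattern is the same; a minimal sketch, assuming a TPngImage, a FireDAC dataset already in Insert/Edit mode, and an illustrative field and file name:

    uses
      Vcl.Imaging.pngimage, Data.DB, FireDAC.Comp.DataSet;

    var
      BlobStream: TFDBlobStream;
      Png: TPngImage;
    begin
      BlobStream := TFDBlobStream.Create(dbQueryObj.FieldByName('Picture') as TBlobField, bmWrite);
      try
        Png := TPngImage.Create;
        try
          Png.LoadFromFile('logo.png');    // illustrative source image
          Png.SaveToStream(BlobStream);    // write the PNG bytes into the blob field
        finally
          Png.Free;
        end;
      finally
        BlobStream.Free;                   // free the blob stream before the Post
      end;
      dbQueryObj.Post;
    end;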

     


  17. Another option if you have Delphi Enterprise would be DataSnap so you don't have to deal with TCP/UDP level stuff directly. :classic_cool:

     

    We have a DataSnap service running on a server that all the clients talk to (they know the server's IP).  We send this service the user ID, IP address and other information via an object (DataSnap does all the marshalling), and the service tracks the initial connection time and last-seen time.  We store this object in an object list in memory in the DataSnap service.  When the clients connect to this service, the DataSnap TDSTunnelSession gives us the unique client and session IDs; we then use the DataSnap DSClientCallbackChannelManager.RegisterCallback method to register a callback method on the client with the server.  Now the server knows the IP address and can make a callback to any or all of the connected clients, passing a TJSONValue, so you can pass anything you want, and again DataSnap does all the marshalling.  We have built our own messaging/command structure using this TJSONValue and call methods in the DataSnap service to send messages.  The server then loops through the list of clients and sends the message to the client(s) via the registered callback.  We can request that the client(s) 'Check-In' to see if they are still there, get a list of other/all connected clients, send messages that pop the Windows tray notification, and we can even force a termination of the client(s) if needed through an admin client.  We have even enabled authentication, compression and encryption of all this communication with just a few DataSnap properties.
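
    Sending to everyone boils down to something like this on the server (a sketch; DSServer1 is the TDSServer component, the channel name and message layout are illustrative, and the exact BroadcastMessage overload and its ownership of the JSON value should be checked against your Delphi version):

    uses
      System.JSON, Datasnap.DSServer;

    procedure SendToAllClients(const MsgType, MsgText: string);
    var
      Msg: TJSONObject;
    begin
      // Our own message/command structure travels inside a TJSONValue
      Msg := TJSONObject.Create;
      Msg.AddPair('type', MsgType);      // e.g. 'Notify', 'CheckIn', 'Terminate'
      Msg.AddPair('text', MsgText);

      // Push it to every client registered on the callback channel
      // (assumption: the channel takes ownership of Msg after the broadcast)
      DSServer1.BroadcastMessage('ClientChannel', Msg);
    end;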

     

    This does put all the work on the server, and the clients cannot work or send messages without the server, so this will not work if you want direct client-to-client communications.

     

    We built this years ago and it works well for us, but you could build the same type of thing with MQTT; that was our other option.  DataSnap just gave us a few more options and a bit more control we were looking for at the time, even if there is some DataSnap overhead and a learning curve.  It was also something we already owned, it meant we would not need to include another piece of open source in our software, and we knew it would continue to work and be updated with future versions of Delphi.  (We hope.)

     

     
