Posts posted by Mark Williams


  1. 12 minutes ago, Attila Kovacs said:

    Ok. How do you create FDManager (Create(nil)?, not that you free it twice) and where is the var declared?

     

    See post two up. I don't think the problem is anything to do with FDManager. The reason it was hanging at the point of freeing was the call to the thread log: I was freeing the thread log before the threaded call had been made. My CloseServer now executes fully and appears to free all the resources it's asked to free. There has to be something else I am missing, something I am creating somewhere and not freeing. No idea what, though! Checking now.


  2. Just now, Attila Kovacs said:

     

    How can you see that? Hourglass, or the time until the dll is released?

     

    I terminate WWW via Services. I can see my logfile is updated almost immediately with the message "Server Closed", and then the progress bar in Services takes well over a minute from that point to complete the shutdown.


  3. 28 minutes ago, Attila Kovacs said:

    ok

    -what if you remove the .free? Will it leak memory or does it get freed when IIS unloads the dll?

    -what if you create your FDManager with "TISAPIApplication(Application)" as owner? It's a TComponent descendant so it will free FDManager.

    Further process of elimination: it was nothing to do with the call to free FDManager. It was the AddToLog calls that were causing the problem. These are threaded using FThreadFileLog, which I free in the final part of CloseServer, before the threads started in CloseServer had terminated. I have changed these calls so that they are no longer threaded and, as a double precaution, added a check in ThreadFileLog's destructor to make sure no threads are running before it shuts down (a sketch of that check is below).

     

    My CloseServer procedure now frees all the resources as expected, but the application pool is still taking an age to close down. There must be some resource I am creating and not freeing. I will check all my code and if I can pinpoint what it is I will update.
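
    For completeness, a minimal sketch of the destructor check mentioned above. FActiveThreads is a hypothetical counter name, not from my actual code; assume it is adjusted with AtomicIncrement/AtomicDecrement as each log thread starts and finishes:

    destructor TThreadFileLog.Destroy;
    begin
      // Block until every pending log thread has finished before freeing
      // anything the threads might still touch
      while FActiveThreads > 0 do
        Sleep(10);
      inherited Destroy;
    end;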


  4. 10 hours ago, Attila Kovacs said:

    I'm not sure why you need a critical section for freeing something. Do you think other threads would still be using it while you're freeing it?

    Just asking.

    A good question! There is no need. Removed. But that's not the cause of the problem on shutdown. Any ideas why it would be hanging at this point?


  5. On 10/16/2019 at 3:15 PM, Yaron said:

    This is my original thread in which the nice people of this forum helped me identify some of these issues:

    @Yaron It's taken me a while to revisit this and as usual it has not disappointed: days thrown into the abyss trying to change the structure of my ISAPI DLL!

     

    I have implemented points 1 to 4 mentioned by Yaron. Point 5 is causing me a problem: the closing down of the application pool. I get a long delay and my closing section does not fire.

     

    My dpr now looks like this:

    library MWCWebServer;
    
    uses
      Winapi.ActiveX,
      System.Win.ComObj,
      System.SysUtils,
      Web.WebBroker,
      Web.Win.ISAPIApp,
      Web.Win.ISAPIThreadPool,
      WinAPI.Isapi2,
      WinAPI.Windows,
      System.SyncObjs,
      system.classes,
      WebModule in 'WebModule.pas' {WebModule1: TWebModule};
    
    {$R *.res}
    
    
    
    function GetExtensionVersion(var Ver: THSE_VERSION_INFO): BOOL; stdcall;
    begin
      Result := Web.Win.ISAPIApp.GetExtensionVersion(Ver);
      CriticalSection := TCriticalSection.Create;
      StartServer;
    end;
    
    {I have removed the call to TerminateVersion as it wasn't firing
    function TerminateVersion(var Ver: THSE_VERSION_INFO): BOOL; stdcall;
    begin
      Result := Web.Win.ISAPIApp.GetExtensionVersion(Ver);
      CloseServer;
      CriticalSection.Free;
    end;}
    
    {I have added this procedure as shown in the linked post provided by Yaron}
    procedure DoTerminate;
    begin
      CloseServer;
      CriticalSection.Free;
    end;
    
    exports
      GetExtensionVersion,
      HttpExtensionProc,
      TerminateExtension;
    
    
    begin
      CoInitFlags := COINIT_MULTITHREADED;
      Application.Initialize;
      Application.WebModuleClass := WebModuleClass;  
      Application.MaxConnections := 200; // MaxConnections=32 by default
      IsMultiThread := True;
      TISAPIApplication(Application).OnTerminate := DoTerminate;
      {Application.CacheConnections:=true;} //NB not necessary as cached by default
      Application.Run;
    end.

     

    My call to CloseServer now looks as follows:

    Procedure CloseServer;
    begin
      CriticalSection.Enter;
      try
        if assigned(MimeTable) then
          try MimeTable.Free; except end;
    
        FDManager.Close;
        try FDManager.Free; except end;
        try AddToLog('Connection manager deactivated', leMajorEvents); except end;
    
        try AddToLog('Server Closed', leMajorEvents); except end;
    
        if assigned(FThreadFileLog) then
          try FThreadFileLog.Free; except end;
    
      finally
        CriticalSection.Leave;
      end;
    end;

    CloseServer gets called on shutdown as expected. However, it hangs at "try FDManager.Free; except end;". It appears to execute the line of code above (i.e. to close FDManager), but it does not appear to execute the call to free FDManager, nor does it get caught by the try..except or the try..finally. It just hangs at the call to Free, and this is when the application pool stalls in its shutdown process.

     

    Does anyone have any ideas why?


  6. 1 hour ago, Uwe Raabe said:

    If the query is a SELECT statement or something that also returns a record set you should use Open instead of Execute. Therefore you should add a Close before changing any parameters.

    Sorry, it should have said Open, not Execute. I call the appropriate command in another routine which handles various errors and posts details to a database. I didn't want to confuse by including all that code, but managed to confuse by not doing so!

     

    1 hour ago, Uwe Raabe said:

    Therefore you should add a Close before changing any parameters.

    I was sure I had already tried Close and that it caused an AV. However, it works! Thanks.
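
    For the record, a minimal sketch of the pattern that fixed it, assuming the query is a prepared TFDQuery returning a record set (names as in my code below):

    with DAC.FDQueryDocCats do
      begin
        Close; // close the previous cursor before touching parameters
        ParamByName('CASEID').AsInteger := FProject.CurrCaseID;
        Open;  // a SELECT returns a record set, so Open rather than Execute
      end;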


    On initial setup I set up queries with SQL.Text and prepare them ready for use and reuse. Some of these queries use params.

     

    On first usage the queries work as expected. When reused, they are simply not fired at all.

     

    As an example:

    with DAC.FDQueryDocCats do
      begin
        ParamByName('CASEID').AsInteger := FProject.CurrCaseID;
        Execute;
      end;

    Works as expected on first use.

     

    When it is run a second time, I can see in the debugger that the correct CASEID value is being passed in and that the query appears to execute; however, it doesn't. The query's dataset doesn't update. I have enabled tracing, and on checking the trace it is clear that the query does not run at all.

     

    I have tried "EmptyDataSet" and "ClearDetails", but with the same effect.

     

    If however, I do this:

    with DAC.FDQueryDocCats do
      begin
        SQL.Text := SQL.Text;
        ParamByName('CASEID').AsInteger := FProject.CurrCaseID;
        Execute;
      end;

    all works well; however, I lose any advantage of having prepared my query in the first instance (although I have no idea just how great that advantage is).

     

    It would seem setting the SQL text clears data or resets flags that I am missing. I have Delphi Professional, not Enterprise, so I can't step through the FireDAC source to see what it is doing and copy it.


  8. 5 hours ago, Lars Fosdal said:

    Based on your updated sync requirements description, I'll stop envisioning a new generation of your system, and instead tell you about how I solved a similar need for synchronization.

    I'm not convinced I have such a major issue here, but only time will tell and I'll have to make a judgement call at that time.

     

    5 hours ago, Lars Fosdal said:

    Today, these data would have been retrieved through a REST interface.

    Give it a REST! Joking aside, I get the message. REST is the way to go. Although by the time I get round to looking at it, it will probably be out to pasture with the brontosauri.


  9. 17 minutes ago, Lars Fosdal said:

    If you have thousands of documents related to a case - how many of those would be opened in the same sitting?

    I think there may be some confusion. DOCS_TABLE does not contain the actual documents; rather, it contains only data relating to the documents (such as date, author, file type, when uploaded, etc.). I don't download all the documents in one hit, just the data relating to them. The documents are stored on the server as files and are downloaded only when they are needed.

     

    I could (and did until recently) just load the data from the database on startup. However, this obviously gets progressively slower as the number of records increases. It also struck me as pointless downloading the data time after time where it had not changed or was little changed. So I thought it would be better to store the data locally. For a large dataset (including heavy duty encryption of the local file) I get around a 20% time saving and a lot less traffic on the server. 

     

    The actual documents when downloaded are cached (if that's what the user specifies) in a temp folder.

     

    28 minutes ago, Lars Fosdal said:

     

    Personally, I would consider having it all server-side with a web UI.

    It is all server side save for some caching. As for a web UI, that is not a path I want to go down. I have used them extensively in the past and I don't think one is appropriate for the current usage, for various reasons. I am quite happy with a desktop app using internet components and/or FireDAC. It works well and I am a long way down the road.

     

    32 minutes ago, Lars Fosdal said:

    

    Then again - this is the kind of problem that Office 365 and SharePoint really excels at - including tried and tested security models from all angles.

    That's as may be. However, I am five years down the road and the software works as I want it to. I am thinking of changing the way I load and refresh data, not of throwing the baby out with the bathwater! :classic_smile:


  10. 42 minutes ago, Hans J. Ellingsgaard said:

    But you would get rid of all the trouble of syncing the data if you kept the data on the server.

    I understand that, but I am keen to speed up the user experience at the client end and reduce traffic at the server end, and doing it by way of a local file achieves that.

    43 minutes ago, Hans J. Ellingsgaard said:

    With a REST service you will probably be able to load the data much faster, and save a lot of bandwidth

    I will now demonstrate my ignorance of REST services: I don't understand how it would result in faster loading of the same data and less bandwidth.


  11. 6 hours ago, Lars Fosdal said:

    It is no walk in the park. Each database has numerous functions / best practices that go beyond "standard" SQL.

    Noted. I am just working in Postgres at the moment and trying to finish an app. It is offered in Postgres initially, with the intention of offering connectors for other DBs when requested. If at that point I experience any issues, I may then opt for a REST server.

    6 hours ago, Lars Fosdal said:

    Do you need to encrypt the traffic between the client and the server? 

    FireDAC SQL Server driver supports encryption. No mention of it in the Oracle FireDAC doc, or the PostgreSQL FireDAC doc.

    Yes, it's there for Postgres also. You use the PGAdvanced parameter of the connection, which gives you access to Postgres's database connection control functions.
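
    A minimal sketch of what I mean; PGAdvanced passes options straight through to libpq, and the value here is illustrative rather than from my actual config:

    // Request SSL on the FireDAC PostgreSQL connection
    FDConnection1.Params.Add('PGAdvanced=sslmode=require');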

    6 hours ago, Lars Fosdal said:

    Why must the documents be downloaded?

    They don't have to be. But with large amounts of data I find it is much faster to load the data from a local file and refresh it from the server in the background.


  12. 2 hours ago, Lars Fosdal said:

    I would hide the access behind a REST service

    I've not really looked into the possibility of using REST. If I'm not mistaken, I would need the Enterprise edition of Delphi rather than mere Professional to implement REST services. Not a deal breaker in itself, but I am not convinced that I need to go down that route (time is a major constraint for me at the moment). FireDAC seems to make it fairly simple to change horses between different databases. Isn't it then a question of ensuring your queries etc. are standard SQL and, if not, adapting them as necessary for each database you support?


    Modern scanners in auto color detection mode seem to be very efficient at detecting whether a document is a text document and scanning it as monochrome, even though the image contains some degree of color.

     

    It has to be more than a pixel-by-pixel analysis of the extent of the color in the image.

     

    Does anyone have any idea how this is done or, even better, can anyone point me to some example code?

     

    I have searched on Google, and the only example I can find is written in Python, which I have no idea how to adapt for Delphi (Python example).
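
    To show the naive baseline I mean (and why it can't be the whole story), here is a sketch of a simple pixel count in Delphi. The tolerance and ratio are made-up numbers, and a real detector would also weigh saturation and the size of the colored regions; ScanLine would be used instead of Pixels for speed:

    // uses Vcl.Graphics, Winapi.Windows
    // Returns True if enough pixels are noticeably chromatic
    function LooksColour(bmp: TBitmap; Tolerance: Byte = 24; MinRatio: Double = 0.01): Boolean;
    var
      x, y, Hits: Integer;
      c: COLORREF;
    begin
      Hits := 0;
      for y := 0 to bmp.Height - 1 do
        for x := 0 to bmp.Width - 1 do
          begin
            c := ColorToRGB(bmp.Canvas.Pixels[x, y]);
            // A pixel counts as chromatic if its RGB channels disagree noticeably
            if (Abs(GetRValue(c) - GetGValue(c)) > Tolerance) or
               (Abs(GetGValue(c) - GetBValue(c)) > Tolerance) or
               (Abs(GetRValue(c) - GetBValue(c)) > Tolerance) then
              Inc(Hits);
          end;
      Result := Hits / (bmp.Width * bmp.Height) > MinRatio;
    end;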


  14. 20 hours ago, Clément said:

    Is it the retrieval time from the SQL server? Make sure the SQL optimizer is using the expected index. Is it downloading data to the local store? Try compression. Or is it the local loading time?

    I'm not sure how to measure how much time it takes to retrieve from the server and how much to load locally. However, I know that loading from a static file is significantly faster than loading from the server, so I'm pretty sure the bottleneck lies with the retrieval time.

     

    I'll have a look at compression. I haven't used it so far and didn't know FireDAC supported it, although my principal goal is to minimize resources server side.

     

    20 hours ago, Clément said:

    Using IN (long list) is not a good option in ANY database. As others suggested, use a proper table to insert the IDs and join with that table. If some SQL server doesn't support session tables, you can simulate one by physically creating a table with two columns: one with some user identification (user name, user login, machine name or machine IP) and the other an ID. But do use a join to retrieve those rows.

    Noted. I will try to do it via temp tables, perhaps along the lines sketched below.
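
    A hedged sketch of how that might look with FireDAC against Postgres; the table, procedure and variable names are mine for illustration, not from the thread, and Array DML keeps the inserts to a single round trip:

    uses FireDAC.Comp.Client;

    procedure LoadDocsByIds(Conn: TFDConnection; const IDs: TArray<Integer>; DocsQuery: TFDQuery);
    var
      Ins: TFDQuery;
      i: Integer;
    begin
      // Session-scoped temp table, visible only to this connection
      Conn.ExecSQL('CREATE TEMP TABLE IF NOT EXISTS wanted_ids (id INTEGER)');
      Conn.ExecSQL('TRUNCATE wanted_ids');
      Ins := TFDQuery.Create(nil);
      try
        Ins.Connection := Conn;
        Ins.SQL.Text := 'INSERT INTO wanted_ids (id) VALUES (:id)';
        Ins.Params.ArraySize := Length(IDs); // FireDAC Array DML
        for i := 0 to High(IDs) do
          Ins.Params[0].AsIntegers[i] := IDs[i];
        Ins.Execute(Length(IDs));
      finally
        Ins.Free;
      end;
      // One join replaces a huge IN (...) list
      DocsQuery.Open('SELECT d.* FROM DOCS_TABLE d JOIN wanted_ids w ON w.id = d.id');
    end;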


  15. 26 minutes ago, Lars Fosdal said:

    In a good database, the cost of complicated where statements usually is limited once you have tweaked your indexes.

    Not sure you can get away from those where statements if the client will be receiving NEW rows once every now and then.

    It's a system for handling documents in litigation matters, so the number of documents can range from a few hundred to hundreds of thousands. While a case is in its early stages, the documents will (depending on the nature of the case) be loaded in large tranches. When it gets closer to its conclusion, far fewer new documents are added to the case. I guess I could count how much new data there is and then decide whether to use the WHERE statement or pass in an array of IDs.

     

    30 minutes ago, Lars Fosdal said:

    Not sure how good Postgres is with temp tables, but in MS SQL you could write your existing id array to a temp table and do a join query.

    Probably more efficient than trying to match thousands of IDs in an IN statement.

    Not sure, but I will look into it. However, I need a solution that will work for Postgres, Oracle and MS SQL at the very least, so it has to be compatible across the board, although I suppose I could write a different routine for each server if need be.

     

    As for the IN clause, if it is particularly large in relation to the total number of records, I could just load the lot via a thread and then dump the static data when I have the new data. If it's not too large relative to the total number of records, but still relatively large for an IN clause, I guess I could submit it in batches, as in the sketch below.
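
    A minimal sketch of the batching idea, assuming the query's SQL reads SELECT ... FROM DOCS_TABLE WHERE id IN (&idlist) and that merging each batch into the local dataset happens elsewhere; the batch size is a guess, kept well under typical IN-list limits:

    uses System.SysUtils, System.Math, FireDAC.Comp.Client;

    procedure LoadInBatches(Qry: TFDQuery; const IDs: TArray<Integer>);
    const
      BatchSize = 500;
    var
      First, i: Integer;
      List: string;
    begin
      First := 0;
      while First < Length(IDs) do
        begin
          // Build a comma-separated literal list for this batch
          List := '';
          for i := First to Min(First + BatchSize, Length(IDs)) - 1 do
            begin
              if List <> '' then
                List := List + ',';
              List := List + IntToStr(IDs[i]);
            end;
          Qry.Close;
          Qry.MacroByName('idlist').AsRaw := List; // macro substitution, not a parameter
          Qry.Open;
          // ... merge this batch's rows into the local dataset here ...
          Inc(First, BatchSize);
        end;
    end;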


  16. 25 minutes ago, Uwe Raabe said:

    Whatever you implement, you should always measure the performance against the brute force update.

     

    If the server were a recent InterBase one, you could make it simple and fast with Change Views, but these are not available with other databases. 

    Unfortunately, I need to be able to offer a range of databases.


    It can be a pretty massive table, which can take a long while to load. It obviously loads much more quickly from a local file (even though encrypted). I am keen to avoid passing unnecessary traffic to the server, hence the suggestion of a local file updated with only the data necessary. There could be just a handful of records to update; I really don't want to download the lot again just for that.

     

    I think this is a sensible way to approach it. I'm just not sure that the way I propose to handle it is the most sensible, but I am keen to avoid a full reload of the whole table.


  18. I'm trying to work out the best way to refresh data loaded from a local file with data from the server.

     

    I am using FireDAC with PostgreSQL. However, the solution needs to be a general one, as it will eventually need to work for SQL Server, Oracle, etc.

     

    I have a table (DOCS_TABLE) which can vary greatly in size. DOCS_TABLE has a timestamp field (LAST_UPDATED); as the name suggests, this records when the data in a record last changed.

     

    When users open the app and haven't queried DOCS_TABLE previously, it is loaded via the server using a fairly complicated WHERE statement (COMPLICATED_WHERE_STATEMENT) which involves a number of joins to establish which records from the table the user is permitted to access. When the user closes the app, the data from DOCS_TABLE is stored locally along with a timestamp recording the date and time the data was last refreshed (STORED_TIMESTAMP).

     

    The next time the app opens, it loads the data from the locally stored file. It then needs to ensure the user is working with up-to-date data.

     

    At the moment I am running a refresh query: SELECT [fields] FROM DOCS_TABLE WHERE LAST_UPDATED > [STORED_TIMESTAMP] AND [COMPLICATED_WHERE_STATEMENT].

     

    I use the resulting data from the refresh query to update the in memory dataset holding DOCS_TABLE.

     

    This works, although it doesn't deal with records that were available at the time of the last local save but have since been deleted or had access revoked.

     

    As such, within the app I run a check to make sure the user still has access to a record before trying to do anything with it, but it's not a terribly elegant solution. It would be better if such items were removed soon after loading the locally saved data.

     

    I have some thoughts on how to deal with this, which are below. However, I am concerned I may be overcomplicating things and that there may be much simpler solutions to this problem. 

     

    1. Load the data from the local file.
    2. Run a thread for the following:
    3. Run a query (ID_QUERY) to ascertain which rows are now available to the user:
      SELECT id FROM DOCS_TABLE WHERE [COMPLICATED_WHERE_STATEMENT]
       
    4. Check the locally saved data against the result of this query to see what rows are no longer available to the user and remove them.
    5. Build a list of ids from the locally saved data (EXISTING_ID_ARRAY).
    6. Check the locally saved data against the results from ID_QUERY to see whether there are any new records to be added and build a list of the ids (NEW_ID_ARRAY).
    7. Run the refresh query using the arrays: SELECT [fields] FROM DOCS_TABLE WHERE (id IN ([NEW_ID_ARRAY])) OR (id IN ([EXISTING_ID_ARRAY]) AND LAST_UPDATED > [STORED_TIMESTAMP]). There is a rough sketch of steps 3 to 7 below.

    Subject to my whole theory being cock-eyed, I am pretty sure NEW_ID_ARRAY is the way to go. The part that concerns me is EXISTING_ID_ARRAY. While it cuts out the COMPLICATED_WHERE_STATEMENT and lets the query focus on a clearly identified group of records, I would think the size of the array could become a problem. Is there a law of diminishing returns with an IN clause? For example, if there were 1M records in the table and 20 items in the array, I suppose using EXISTING_ID_ARRAY must be quicker than using COMPLICATED_WHERE_STATEMENT. But what if the array contained 800K ids? I would guess it then becomes significantly less efficient to use EXISTING_ID_ARRAY and more efficient to use COMPLICATED_WHERE_STATEMENT.

     

    I appreciate that, without providing full details of the structure of DOCS_TABLE and the various joined tables, the data being retrieved from it and the full nature of the COMPLICATED_WHERE_STATEMENT, I may be asking for a comparison between apples and pears. What I am really interested in is whether the logic set out above is sound or idiotic, and any suggestions on how best to achieve what I am trying to achieve.
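
    The sketch promised above, with everything hedged: qryIds and qryRefresh are assumed TFDQuery components, COMPLICATED_WHERE_STATEMENT stands in for the real joins, and building the comma-separated NewIdList/ExistingIdList from the diff against the local file is elided:

    // Step 3: which rows may the user see now?
    qryIds.Open('SELECT id FROM DOCS_TABLE WHERE ' + COMPLICATED_WHERE_STATEMENT);
    // Steps 4-6: diff qryIds against the local file to drop revoked rows
    // and build NewIdList / ExistingIdList (guard against empty lists)
    // Step 7: one refresh query driven by the two lists
    qryRefresh.SQL.Text :=
      'SELECT * FROM DOCS_TABLE ' +
      'WHERE (id IN (&new_ids)) ' +
      'OR (id IN (&existing_ids) AND LAST_UPDATED > :stamp)';
    qryRefresh.MacroByName('new_ids').AsRaw := NewIdList;
    qryRefresh.MacroByName('existing_ids').AsRaw := ExistingIdList;
    qryRefresh.ParamByName('stamp').AsDateTime := StoredTimestamp;
    qryRefresh.Open;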


  19. 29 minutes ago, Fr0sT.Brutal said:

    Probably you could look at UniGui as well

    I had a quick look. Very interesting. It looks like quite a bit of work to produce a large app, and I also think it requires replacing all visual components with their own. I couldn't see a fast conversion script for this; maybe it's as easy as copy-and-replace of the class names in the pas and dfm files.

     

    However, I rely significantly on VirtualTreeView for the display of DB data, and I don't think the treeview they provide would be suitable. I could convert to TListView, but I couldn't find that as an option on their components page. I don't use DBGrids for various reasons, but could always consider that.

     

    All the same, it is worth keeping an eye on, and it could well be useful for a "light" version of a desktop app.


    That all sounds very promising. Thanks for the comments. My app is fairly heavy duty, with a huge executable (which seems to be par for the course with Delphi nowadays). It needs to access several DLLs and an OCX. The OCX is mine and could have been (possibly should have been) written as a DLL. Are there any issues you know of with DLLs?

     

    8 hours ago, Ian Branch said:

    I have experienced issues with theming via TF, so I have disabled it for TF-based apps.

    I use theming and would like to retain it, but it's clients who are asking for the possibility of a cloud app, so they can make the choice to live without it. I can keep it in the desktop version.

     

    12 hours ago, Larry Hengen said:

    I don't think hosting a large app is a good long term solution due to the server resources required, but it's certainly worth some investigation.

    I was under the impression that the license fee charged by Cybelesoft included the hosting of the app. However, the hosting shouldn't be a problem; each client would probably set up their own internal and external servers to host it. That leads to another question: what sort of degradation in speed have you experienced between desktop and cloud?

     

    12 hours ago, Larry Hengen said:

    For a smaller application it is certainly a solution that beats redevelopment as a  web app.

    What is actually involved in redeveloping as a web app? Is it possible in Delphi? I haven't seen anything anywhere dealing with this.
     


  21. Has anyone had experience of the above (from https://www.cybelesoft.com/)?

     

    It offers to convert Delphi apps into cloud-based apps with just one line of code. It sounds too good to be true, or is this genuinely not difficult to do (I've never tried it myself)?

     

    If it is a difficult process, I'm surprised this didn't come up in any searches on this forum.

     

    I would appreciate feedback on the above if anyone has experience of it, and also, more generally, on what is involved in the process using Delphi.


  22. 42 minutes ago, pyscripter said:

    https://quality.embarcadero.com/browse/RSP-26633

    The report contains a solution.

    Thanks for the link. The solution there didn't seem to work for me. However, it gave me the idea of changing my function to repaint the background of the glyph. It now works and will have to do for now.

    Function GetTransparentImageList(Owner: TComponent; SourceImages: TImageList; BMPIndex: Integer): TImageList;
    var
      bmp: TBitmap;
    begin
      Result := nil; // caller gets nil if the source bitmap can't be fetched
      bmp := TBitmap.Create;
      try
        if SourceImages.GetBitmap(BMPIndex, bmp) then
          begin
            Result := TImageList.Create(Owner);
            // Repaint the glyph background in the current style's window
            // colour before masking, so dark themes render correctly
            bmp.Canvas.Brush.Color := StyleServices.GetSystemColor(clWindow);
            bmp.Canvas.FloodFill(0, 0, bmp.Canvas.Pixels[0, 0], fsSurface);
            Result.AddMasked(bmp, clFuchsia);
          end;
      finally
        bmp.Free;
      end;
    end;
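
    Hypothetical usage (the control and index names are illustrative), giving the buttoned edit a style-aware copy of the shared image list:

    ButtonedEdit1.Images := GetTransparentImageList(Self, ImageList1, 0);
    ButtonedEdit1.RightButton.ImageIndex := 0; // the one image added above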

     


  23. When using a dark theme with TButtonedEdit and an imagelist with DrawingStyle set to dsTransparent, the image always shows with a white background. That's obviously fine when the edit box is white, but not when it is dark.

     

    My images in the imagelist have a background colour of clFuchsia. For some reason I can't change the transparent color in the imagelist's editor; it is disabled and clDefault is selected.

     

    So I tried another approach, creating an imageList at runtime and adding masked images to it as follows:

    Function GetTransparentImageList(Owner: TComponent; SourceImages: TImageList; BMPIndex: Integer): TImageList;
    var
      bmp: TBitmap;
    begin
      bmp := TBitmap.Create;
      try
        if SourceImages.GetBitmap(BMPIndex, bmp) then
          begin
            Result := TImageList.Create(Owner);
            with Result do
              begin
                DrawingStyle := dsTransparent; // tried with and without, for all three color options below
                AddMasked(bmp, clNone); // also tried with clFuchsia and clDefault
              end;
          end;
      finally
        bmp.Free;
      end;
    end;

    Still the TButtonedEdit stubbornly shows the button image with a white background.

     

    Am I missing something?
