Everything posted by Mark Williams

  1. Thanks for the input. I'm not sure if you mean to cache data locally, as in save it to a file on the local machine and load from there on start-up. I don't think that's what you're saying, but if so it's something I would be keen to avoid; I would think the same goes for medical apps, which doubtless process highly sensitive data. I only download the data once, by way of a query on start-up, and it then remains static (i.e. it doesn't refresh), so it's effectively cached locally from that point. I then periodically run a separate query to check for new/modified records and, if it finds any, it updates the main query. That's fine over a fast internet or local connection with 30,000 or so records, and it's great to have everything stored locally, as you say. On a slow connection it can considerably slow down start-up, which is why I am thinking of having the ability to load incrementally only what's needed where there is a slow connection or a monster number of records. However, although downloading in chunks, I am not intending to jettison any data downloaded locally once the user scrolls away. It will be retained locally in case the user returns to the same location in the tree. Effectively, I'm trying to do something similar to what DBGrid does in OnDemand mode, save that if you press Ctrl+End it loads the entire table, which I am looking to avoid. I should probably delve into the DBGrids unit and see what it reveals.
  2. Mark Williams

    isFileInUse

    Thanks. Will do.
  3. Mark Williams

    isFileInUse

    Ok. Noted, but not really my issue. However, for some reason the function is now working as expected in 64 bit and I have no idea why. Thanks anyway.
  4. Thanks for posting all that. In terms of speed of loading and size of returned query etc I know there's a lot of sense in losing all the joins and displaying just the master data with some sort of panel display for the servant data which would update as you highlight each document. The problem with that is it necessitates stopping on each individual document to see what is happening with it, which could in itself become a major issue in terms of speed of scrolling through the tree ie having to arrow down instead of paging through it. I obviously need to give this a lot more thought, but thanks for food for that thought!
  5. Mark Williams

    isFileInUse

    I copied the code from, I believe, Mike Lischke's site and found it worked exactly as I wanted in 32 bit. I'd never thought about the reason for the FileExists check, but I suppose if the file doesn't exist then the answer to isFileInUse is "no". If it doesn't exist, why bother creating it? So, to my mind, it makes sense. However, it certainly isn't the cause of the problem.

    As for using THandle, please read my post to which you have responded, where I say "Noted and thanks, but still returns false...". I was unaware of the problem with FileHdl in 64 bit, but I have now changed it and it makes no difference whatsoever. In 64 bit, when I run the function, it creates the file even though it is in use in another process, and returns no error, at least not on the CreateFile call. The only error it returns is on the call to CloseHandle, but that only happens in the IDE so, as I stated in my original post, I don't particularly care. Anyway, it's not hard to imagine that the reason CloseHandle bugs out is that the file is already open in another process and the handle should never have been created. But ignoring the CloseHandle problem, which would not exist if the function worked as expected (and as it does work in 32 bit), why doesn't it work in 64 bit?

    It would be helpful if someone else tried it in a 64-bit app. All you need is a button, an OpenDialog, the above function (obviously with HFileRes changed to THandle) and the following code in your button's OnClick:

    if OpenDialog1.Execute then
      if isFileInUse(OpenDialog1.FileName) then
        ShowMessage('in use')
      else
        ShowMessage('not in use');
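    For anyone wanting to reproduce this, the function under discussion is essentially the commonly circulated version below (a sketch as I understand it from this thread, with the handle already declared as THandle; if the copy being tested actually creates the file, its creation-disposition flag may differ from OPEN_EXISTING shown here, which would be the first thing to check):

```pascal
uses
  Winapi.Windows, System.SysUtils;

// Sketch of the usual IsFileInUse, with the handle declared as THandle
// so the declaration is safe in a 64-bit build.
function IsFileInUse(const FileName: string): Boolean;
var
  HFileRes: THandle;
begin
  Result := False;
  if not FileExists(FileName) then
    Exit; // a file that does not exist cannot be in use
  // Ask for exclusive access (share mode 0). OPEN_EXISTING never creates
  // the file; the call fails with INVALID_HANDLE_VALUE if another
  // process already has the file open without sharing.
  HFileRes := CreateFile(PChar(FileName), GENERIC_READ or GENERIC_WRITE,
    0, nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  Result := (HFileRes = INVALID_HANDLE_VALUE);
  if not Result then
    CloseHandle(HFileRes);
end;
```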
  6. Mark Williams

    isFileInUse

    Noted and thanks, but still returns false when the file is actually in use in 64 bit. Any idea why?
  7. Yes there is, or rather it's a servant table that records all contact with a document (i.e. who, when, time spent and depth of their examination based on number of pages visited, etc.). You can already filter on what's been read and also on how well it's been read. You can also mark documents as thoroughly read.

    It already extracts the text from the documents in whatever format they come in (OCR, PDF text extraction, etc.). These are added to a database. I agree OCR has greatly improved, but there's still that worry that something has been missed. Also, some documents are photographs or are handwritten (yes, I know about ICR, but we could be talking doctor scribblings!). Even with a typed document, there may be that critical one-word handwritten comment someone has added, which just gets missed unless someone does the grunt work.

    You guessed right as to it being a legal app. There are all sorts of ways the documents get indexed, including automatic textual analysis and manual grunt input. But assigning manual descriptions, dates etc. to documents can mean analysing the whole document and, well, you may as well do it as you trawl through the documents and keep a couple of egg cups on the desk to catch the eye bleed!

    Which brings me back to my long tree of documents and how best to populate it on the fly. Are there no VirtualTree users out there? I thought we numbered in the thousands?
  8. Thanks for the feedback. I do provide a search function with AND/OR functionality to search the document index. Also, all the document data gets text-scraped/OCRd before going onto the database, and there are then search facilities for the document text. All of this does, a lot of the time, get you where you want to be. But not always, and this is an app managing documents where it is often critical that every document must have been looked at. So the app visually displays which documents have been seen, and you can prod further by seeing who has looked at a document, for how long, did they look at the entire document, etc.

    But unfortunately, sometimes the documents are really old and OCR badly, and sometimes the descriptions of the documents are utterly meaningless ("YZ123456789X-YV.pdf", for example). So I also want to give the user (which also includes me) the ability to scroll through and see what is there without having to do it in chunks.

    I've used it in a number of projects where there have been between 20K and 30K individual documents and between 500K and 1M actual pages. Loading the 30K or so records in one hit at start-up is not too bad, even on a remote internet server, as long as you have a good internet connection your end. But I need to allow for slow connections and also to scale up for potentially much larger document numbers.
  9. My initial thoughts were:

      • On start-up, fetch the data from the key id field into Query1 for all required rows. This should be very fast even for a large database.
      • When data is needed for the tree, fetch it into Query2. I think this query would also have to contain the key id field, so there is a duplication of data, but that would be relatively minor.
      • Set the number of virtual nodes in the tree to the number of rows in Query1.
      • When the tree needs to populate a node, it looks to Query1 for the document id. With that id, use FindKey on Query2 to see if it already contains the data. If not, request the data from the database and add it to Query2. Then pass the relevant RecNos from Query2 back to the tree.
      • Possibly store the RecNo from Query2 as node data, to cut out the small overhead of having to go back to Query1 each time a node requests data.

    I think this is pretty much in line with your thoughts. I am not too concerned with fast scrolling; I would just show a holding string ("Data populating..." or some such). I am more concerned with handling what happens when scrolling stops for a time. I previously implemented a thread which fired once the user stopped scrolling for a short period. It analysed scroll direction and populated above or below accordingly, grabbing a few tree pages more than was necessary. This worked OK, but not great: more requests were being fired at the server than was really necessary, and occasionally, when the scrolling stopped, nothing happened. These are just fine-tuning and debugging issues.

    Before I plunge in and try to implement something like the above (which will be quite a bit of work), I was just looking for a steer from VirtualTree users who have encountered this issue as to whether this is a sensible approach or whether there is a better one. In particular, I need to know the best events to handle in order to implement this. My thoughts are OnScroll, but I have a feeling there may be better-suited events among the many that VirtualTree exposes. Many thanks for your input. As always, highly appreciated.
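    By way of a sketch of the "fetch once scrolling pauses" idea (FetchTimer and FetchVisibleRange are hypothetical names; the timer interval is illustrative): restart a short disabled-by-default TTimer in the tree's OnScroll handler, and only issue the range query when the timer finally fires.

```pascal
// Sketch only: FetchTimer is a TTimer (say Interval = 200, Enabled =
// False at design time), Tree is the TVirtualStringTree, and
// FetchVisibleRange is a hypothetical routine that queries the database
// for the node range currently on screen.
procedure TForm1.TreeScroll(Sender: TBaseVirtualTree; DeltaX, DeltaY: Integer);
begin
  // Restart the timer on every scroll event, so it only fires once
  // scrolling has been idle for the full interval.
  FetchTimer.Enabled := False;
  FetchTimer.Enabled := True;
end;

procedure TForm1.FetchTimerTimer(Sender: TObject);
begin
  FetchTimer.Enabled := False;
  FetchVisibleRange(Tree.GetFirstVisible, Tree.GetLastVisible);
end;
```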
  10. Mark Williams

    FireDAC Array DML and autoinc fields

    When using Array DML to insert new records, is there a way of obtaining the data for any autoinc key fields for the inserted rows, so that the key field of each newly inserted record can be updated in the relevant TFDQuery? If using ApplyUpdates, you merely set the AutoInc field of UpdateOptions and away you go. I can't see any obviously simple way of doing this with Array DML.
  11. It certainly handles data from local memory, but I am pretty sure it was also designed for displaying data from databases.
  12. Thanks for the tip. I've had a quick look on their site. I'm not sure how useful it would be to download their demo product without the source code. There are so many users of VirtualTree and, whilst Peter's observations are never to be taken lightly, I can't believe that my issue, which might perhaps be better summarised as "How best to use TVirtualTreeView virtually with a database", has not been addressed previously. But I cannot find any demos or help online.

    I suspect that I may be looking to the wrong events and properties for handling this. If so, I can't work out which are the right ones. OnInitNode may be one, but it fires for each node, and I need one query to get data for all visible nodes. I don't think my original approach was a million miles off, but it didn't perform as well as I hoped. It may just be a case of taking a long hard look at my code and seeing how it can be improved. However, before I do that I would really like some feedback from other VirtualTreeView users who have already addressed this problem.
  13. I suppose so. I have used both approaches. If it were just three different forms I would be inclined to use panels. If more forms, a page control may make it visually simpler to manage at design time, with the common search box in a separate panel below the page control.
  14. Afraid to say "yes". It's used in a document database where the documents can be a random assortment and are often poorly titled. Trying to sift through the documents by categories often doesn't work well, and the only way to find what you want may be a laborious scroll through the table (even item by item), tedious as it may sound.

    Your reply also seems to query the point of the virtual paradigm. Why have the ability to host millions of nodes if there is never any point in hosting them? I could load 250/500/1000 items at a time and have a button "Get Next 1000", but I don't wish to do it that way. I would prefer (and believe it is also better from a usability standpoint) to have a tree that purports to provide access to all the relevant documents simply by scrolling, and allows the user to get to the point they want by scrolling through the tree quickly. Having to do so in chunks is, in my view, clunky and obstructive. VirtualTreeView, as I have always understood it, is the remedy to such clunkiness: if only I could figure out how!
  15. The forms in the screenshot seem to be pretty simple, with a common search box and buttons. Why not use a single form (AutoSize=True) with a series of aligned panels: the controls common to all form types on a client-aligned panel and the optional controls on top-aligned panels. You can then make the optional panels visible/invisible as required in the OnShow event. And after posting this suggestion I saw that you have lots of these forms. A host of panels would become rather cumbersome!
  16. Mark Williams

    FireDAC Postgres ByteA field

    The column definition:

    ALTER TABLE public.bundles ADD COLUMN pages bytea;

    The query statement:

    INSERT INTO bundles (pages) VALUES (:P)
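    For context, a minimal sketch of how that parameter can be bound as a blob before executing (the query component and file name are illustrative; TFDParam.LoadFromFile with ftBlob is one way to supply the raw bytes):

```pascal
// Sketch: binding a Postgres bytea column via a blob parameter.
// 'pages.dat' is an illustrative file holding the raw bytes to store.
FDQuery1.SQL.Text := 'INSERT INTO bundles (pages) VALUES (:P)';
FDQuery1.Params.ParamByName('P').LoadFromFile('pages.dat', ftBlob);
FDQuery1.ExecSQL;
```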
  17. Mark Williams

    TFDBatchMove delete records

    I am using a TFDQuery component to load and edit records from a table using CachedUpdates. Records can be deleted from the table as well as edited and appended. The BatchMove component in dmDelete mode deletes all records in the query and not just those that have been flagged for deletion. Is there any way of using the BatchMove component so that it only deletes records where the updateStatus is usDeleted?
  18. Mark Williams

    TFDBatchMove delete records

    I was configuring the BatchMove as follows:

    FBatchMove := TFDBatchMove.Create(nil);
    FReader := TFDBatchMoveDataSetReader.Create(FBatchMove);
    FWriter := TFDBatchMoveSQLWriter.Create(FBatchMove);
    try
      FReader.DataSet := FDQueryPG;
      FWriter.Connection := FDConnectionPG;
      FWriter.TableName := 'dummy';
      FBatchMove.Mode := dmDelete;
      FBatchMove.Execute;

    I assumed (obviously incorrectly) that the BatchMove component would only delete the records with an update status of usDeleted. I assume there is a way of configuring it for specific records; however, I have had so much trouble with BatchMove that I have completely moved away from it now and am using Array DML instead, which is so much less hassle.
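    For completeness, the Array DML replacement looks roughly like this (a sketch: FDQueryDel and DeletedIDs are hypothetical names, DeletedIDs being an array of the key values collected from rows whose UpdateStatus is usDeleted):

```pascal
var
  i: Integer;
begin
  // Delete only the rows flagged for deletion, in a single round trip.
  FDQueryDel.SQL.Text := 'DELETE FROM dummy WHERE id = :ID';
  FDQueryDel.Params.ArraySize := Length(DeletedIDs);
  for i := 0 to High(DeletedIDs) do
    FDQueryDel.Params[0].AsIntegers[i] := DeletedIDs[i];
  // Execute once per array entry, starting at offset 0.
  FDQueryDel.Execute(FDQueryDel.Params.ArraySize, 0);
end;
```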
  19. I am trying to work with integer arrays in a Postgres integer[] field. I have read somewhere that with Postgres you need to use nested datasets to read the data rather than TArrayField, and that seems to be correct. I can successfully write back to the dataset using TArrayField in an insert query. I can read from the integer[] field using TDataSet and nested fields. However, I cannot work out how to write to the TDataSetField. I have tried the following:

    Edit;
    TT := TDataSetField(Fields[2]);
    for i := 0 to high(arr) do
    begin
      TT.NestedDataSet.Append;
      TT.NestedDataSet.Fields.Fields[0].Value := arr[i];
    end;

    The error I get on trying to post data edited as above is
    I tried to use "AsInteger" instead of Value, but got the same error.
  20. Mark Williams

    TFDBatchMove delete records

    It's too slow for large updates.
  21. Mark Williams

    Mark post as answered

    Is there any way of indicating that a post has been answered in this forum?
  22. Mark Williams

    FireDAC paramamania

    I am having difficulties with a parameterized query designed to be used repeatedly. It works first time but never a second. The query is fairly complicated so I prepare it on app initialization for repeated use, but it works just once and returns an empty dataset on every other occasion (even with the same parameter). If I reset the query text (FDQuery.SQL.Text := FDQuery.SQL.Text) it works as expected. That rather defeats the purpose of preparing the query in advance. Is it the case that parameters cannot be used in repeated queries?
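    For context, a sketch of the reuse pattern in question (table, field and variable names are illustrative): the query is prepared once and then, on each use, closed before the parameter is changed and the query reopened.

```pascal
// Prepared once at app initialization (names illustrative)...
FDQuery.SQL.Text := 'SELECT * FROM documents WHERE folder_id = :ID';
FDQuery.Prepare;

// ...then on each use: close before changing the parameter and
// reopening. The query stays prepared across Close/Open.
FDQuery.Close;
FDQuery.ParamByName('ID').AsInteger := NewID;
FDQuery.Open;
```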
  23. Mark Williams

    FireDAC paramamania

    So simple! Thanks.
  24. Mark Williams

    FireDAC clearDetails or EmptyDataSet

    I have been using ClearDetails to remove all records from TFDQuery components. However, I have experienced a problem today where a call to ClearDetails for a reusable query that is designed to return a single record is not removing that record. The description for this function states: Pretty much what I was expecting it to do, but (in this one case at least) it is not. I have switched to EmptyDataSet which does the job, but should I be updating all places where I use ClearDetails to update to EmptyDataSet?
  25. Mark Williams

    FireDac PostgreSQL and TBatchMove

    That's good to know. Thanks. Have you any experience of BatchMove and reverse increments?