Mark Williams


Posts posted by Mark Williams


  1. 33 minutes ago, Dany Marmur said:

    DevExpress has something they call "Server Mode", which is a "GridMode" using "refined" queries, i.e. extended "DataProviders".

    It is rather convoluted IMHO but it works for specific implementations (it's not a one-for-all). You could check their demos for inspiration.

     

    Thanks for the tip. I've had a quick look on their site. I'm not sure how useful it would be to download their demo product without the source code. There are so many users of VirtualTree and, whilst Peter's observations are never to be taken lightly, I can't believe that my issue, which perhaps might be better summarised as "How best to use TVirtualTreeView virtually with a database", has not been addressed previously. But I cannot find any demos or help online. I suspect that I may be looking at the wrong events and properties for handling this. If so, I can't work out which are the right ones. OnInitNode may be one, but it fires for each node, and I need a single query to get data for all visible nodes.

     

    I don't think my original approach was a million miles off, but it didn't perform as well as I hoped. It may just be a case of taking a long hard look at my code and seeing how it can be improved. However, before I do that I would really like some feedback from other VirtualTreeView users who have already addressed this problem.


  2. 13 hours ago, PeterBelow said:

    has the poster ever actually tried to work with a list that large?

     

    Afraid to say "yes". It's used in a document database where the documents can be a random assortment and often poorly titled. Trying to sift through the documents by categories often doesn't work well and the only way to find what you want may be by way of a laborious scroll through the table (even item by item), tedious as it may sound.

     

    Your reply also seems to query the point of the virtual paradigm. Why have the ability to host millions of nodes if there is never any point in hosting them?

     

    I could load 250/500/1000 items at a time and have a "Get Next 1000" button, but I don't wish to do it that way. I would prefer (and believe it is also better from a usability standpoint) to have a tree that purports to provide access to all the relevant documents simply by scrolling, and allows the user to get to the point they want quickly. Having to do so in chunks is in my view clunky and obstructive. VirtualTreeView, as I have always understood it, is the remedy to such clunkiness: if only I could figure out how!


  3. The forms in the screenshot seem to be pretty simple, with a common search box and buttons. Why don't you use a single form (AutoSize=True) with a series of aligned panels? Put the controls common to all form types on a client-aligned panel and the optional controls on top-aligned panels. You can then make the optional panels visible/invisible as required in the OnShow event.

     

    And after posting this suggestion I saw that you have lots of these forms. A host of panels would become rather cumbersome!


  4. I am using TVirtualTree for data display where there can be large numbers of records in a database table (upwards of 30,000).

    Given the virtual paradigm, it obviously makes sense to query for the data only as needed. But I am struggling to find the best way of dealing with this.

    It is not ideal to query for the data simply on the firing of the onScroll event. It would just mean that scrolling would become gruesomely slow.

    I did try to implement something based on the following design:

    • Use the OnScroll event to fire a timer with an appropriate interval to indicate the user has stopped scrolling for a spell (e.g. 500 ms).
    • Record the TopNode at the start of the timer.
    • After the timer interval, check if the TopNode has changed.
    • If TopNode hasn't changed, query whether data already obtained for the nodes within the tree's visible area.
    • If data needed, obtain it via a thread so as not to inhibit further scrolling of the tree.
    • Populate the tree with the new data assuming user hasn't scrolled on since the thread executed.

    I managed to get this working after a fashion, but it didn't strike me as an elegant solution (nor in fact was it terribly reliable, probably due to my design).

    I am about to revisit this again. It occurs to me that as the virtual paradigm is key to VirtualTree this problem must have been considered and resolved as part of the tree's design so as to greatly simplify the process. If that is so, I can't find out what the solution is meant to be and would be grateful for any pointers.

    If there is no easy solution as part of the tree design I would be grateful for a steer as to other ways of reliably and (hopefully) simply resolving this issue.
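    For what it's worth, the debounce design described in the bullet points might be sketched as below. This is only an illustrative outline, not working code: FetchTimer, FLastTopNode and LoadVisibleRange are hypothetical names, and the threaded fetch and "already loaded" check are elided.

```delphi
// Hypothetical sketch of the debounce steps above.
// FetchTimer: TTimer with Interval = 500 and Enabled = False.
// FLastTopNode: PVirtualNode field caching the top node.

procedure TForm1.TreeScroll(Sender: TBaseVirtualTree; DeltaX, DeltaY: Integer);
begin
  // Restart the timer on every scroll; it only fires once
  // scrolling has paused for the full interval.
  FetchTimer.Enabled := False;
  FLastTopNode := Tree.TopNode;
  FetchTimer.Enabled := True;
end;

procedure TForm1.FetchTimerTimer(Sender: TObject);
begin
  FetchTimer.Enabled := False;
  // Fetch only if the view has settled on the same top node.
  if Tree.TopNode = FLastTopNode then
    LoadVisibleRange(Tree.TopNode);  // query the DB for the visible rows (own code)
end;
```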


  5. I am using Array DML to post data to a Postgres table. One of the fields is of type bytea.

     

    I have tried setting the datatype for the relevant query parameter to ftBlob. This fails with the error:

    Quote

    [FireDAC][Phys][PG]-352. Object value for [PAGES] parameter of [ftBlob] type is not supported

    I have tried setting the datatype for the query parameter to the same TFieldType as the field in the FireDAC dataset. I get the same error, so it would seem the field is a blob field.

     

    Code below:

    Query2.Params[3].DataType := ftBlob;
    Query2.Params[3].AsStream := CreateBlobStream(Fields[ages], bmRead);
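    If it helps, my understanding is that with FireDAC Array DML each row of a parameter's value is addressed through the indexed accessors (AsIntegers, AsStreams, etc.) after setting Params.ArraySize, rather than through the scalar AsStream. A hedged sketch only; table, column and variable names are hypothetical:

```delphi
// Hypothetical Array DML sketch; blob rows are set via the
// indexed AsStreams property rather than the scalar AsStream.
Query2.SQL.Text := 'INSERT INTO docs (id, pages) VALUES (:id, :pages)';
Query2.Params.ArraySize := RowCount;
Query2.Params[1].DataType := ftBlob;
for i := 0 to RowCount - 1 do
begin
  Query2.Params[0].AsIntegers[i] := Ids[i];
  Query2.Params[1].AsStreams[i] := Streams[i];  // one stream per row
end;
Query2.Execute(RowCount);  // execute the whole batch in one round trip
```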

     


  6. I was configuring the BatchMove as follows:

     

       FBatchMove := TFDBatchMove.Create(nil);
       FReader := TFDBatchMoveDataSetReader.Create(FBatchMove);
       FWriter := TFDBatchMoveSQLWriter.Create(FBatchMove);
       try
         FReader.DataSet := FDQueryPG;
         FWriter.Connection := FDConnectionPG;
         FWriter.TableName := 'dummy';
         FBatchMove.Mode := dmDelete;
         FBatchMove.Execute;
       finally
         FWriter.Free;
         FReader.Free;
         FBatchMove.Free;
       end;

    I assumed (obviously incorrectly) that the BatchMove component would only delete the records with an update status of usDeleted. I assume there is a way of configuring it for specific scenarios; however, I have had so much trouble with BatchMove that I have completely moved away from it now and am using Array DML instead, which is so much less hassle.


  7. I am trying to work with integer arrays in a Postgres integer[] field.

     

    I have read somewhere that with Postgres you need to use NestedDataSets to read the data rather than TArrayField and that seems to be correct.

     

    I can successfully write back to the dataset using TArrayField in an Insert Query.

     

    I can read from the integer[] field using TDataSet and nested fields. 

     

    However, I cannot work out how to write to the TDataSetField. I have tried the following:

     

    Edit;
    TT := TDataSetField(Fields[2]);
    for i := 0 to High(arr) do
    begin
      TT.NestedDataSet.Append;
      TT.NestedDataSet.Fields[0].Value := arr[i];
    end;

    The error I get on trying to post data edited as above is

    Quote

    Cannot read data from or write data to the invariant column [page_array]. Hint: use properties and methods, like a NestedTable.

    I tried to use "AsInteger" instead of Value, but got the same error.


  8. I am using a TFDQuery component to load and edit records from a table using CachedUpdates.

     

    Records can be deleted from the table as well as edited and appended.

     

    The BatchMove component in dmDelete mode deletes all records in the query and not just those that have been flagged for deletion.

     

    Is there any way of using the BatchMove component so that it only deletes records where the UpdateStatus is usDeleted?
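    (For reference, the cached-updates alternative I understand to exist is TFDQuery.ApplyUpdates, which posts only the cached changes, including usDeleted rows, back to the database, rather than replaying every source row the way dmDelete appears to. A hedged sketch, assuming the query resolves to a single updatable table:)

```delphi
// Hedged sketch of the cached-updates route.
FDQuery1.CachedUpdates := True;
// ... user edits, appends and deletes records locally ...
if FDQuery1.ApplyUpdates = 0 then  // returns the number of errors
  FDQuery1.CommitUpdates           // clear the change log on success
else
  ShowMessage('Errors occurred applying updates');
```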


  9. I am having difficulties with a parameterized query designed to be used repeatedly. It works first time but never a second.

     

    The query is fairly complicated so I prepare it on app initialization for repeated use, but it works just once and returns an empty dataset on every other occasion (even with the same parameter).

     

    If I reset the query text (FDQuery.SQL.Text := FDQuery.SQL.Text) it works as expected.

     

    That rather defeats the purpose of preparing the query in advance. Is it the case that parameters cannot be used in repeated queries?
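    The usage pattern I am attempting, for reference (the query text and parameter name below are illustrative only; the real query is far more complicated):

```delphi
// Prepared once at application start-up:
FDQuery1.SQL.Text := 'SELECT * FROM docs WHERE owner_id = :id';
FDQuery1.Prepare;

// On each subsequent use:
FDQuery1.Close;                              // close the previous cursor
FDQuery1.ParamByName('id').AsInteger := Id;  // set the new parameter value
FDQuery1.Open;                               // first call works; later calls return no rows
```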


  10. I have been using ClearDetails to remove all records from TFDQuery components.

     

    However, I have experienced a problem today where a call to ClearDetails for a reusable query that is designed to return a single record is not removing that record. The description for this function states:

     

    Quote

    Clears all the content in the dataset.

    ClearDetails iterates along rows and columns in order to clear the content.

    Pretty much what I was expecting it to do, but (in this one case at least) it is not. I have switched to EmptyDataSet, which does the job, but should I update all the places where I use ClearDetails to use EmptyDataSet instead?


  11. Has anyone been able to get the BatchMove component to work correctly with PostgreSQL in AppendUpdate mode where there is an auto inc key field?

     

    My BatchMove component creation code is below.

     

    In addition, either the FDConnectionPG ExtendedMetaData param is set to True or the FDQueryPG UpdateOptions.AutoIncFields is set. Either way produces the same result, although I understand there is an efficiency hit with the use of ExtendedMetaData.

     

    The problem is that whilst new rows get added to the table and my auto inc key field value gets set, it is running backwards, i.e. -1, -2, -3, etc. Am I missing something or is this a bug?

     var
        FBatchMove: TFDBatchMove;
        FReader: TFDBatchMoveDataSetReader;
        FWriter: TFDBatchMoveSQLWriter;
        F: TField;
     begin
        FBatchMove := TFDBatchMove.Create(nil);
        FReader := TFDBatchMoveDataSetReader.Create(FBatchMove);
        FWriter := TFDBatchMoveSQLWriter.Create(FBatchMove);
        try
          FReader.DataSet := FDQueryPG;
          FWriter.Connection := FDConnectionPG;
          FWriter.TableName := 'dummy';
          FBatchMove.CommitCount := 1000;
          FBatchMove.Mode := dmAppendUpdate;
          FBatchMove.Options := FBatchMove.Options + [poIdentityInsert];
          FBatchMove.Execute;
        finally
          FWriter.Free;
          FReader.Free;
          FBatchMove.Free;
        end;
     end;


     

     

     
