Jasonjac2

Is TFDBatchMove suitable for one-way syncing of data between 2 SQLite databases?


I have to periodically push-sync (make the destination the same as the source) between 2 databases (eventually Firebird to SQLite, but for now SQLite to SQLite). I have synced datasets a number of times over my programming career and have written some solutions specific to the use case and some more generic ones.

 

As the datasets are relatively small I decided to try TFDBatchMove, in the hope that while it might not be super efficient, it would just be a case of hooking up some components and pressing the Go button :-). A day later I still haven't got it working.

I intended to build up from a very trivial example. I also had a look at the samples for D12.1 which, in classic sample fashion, weren't super clear to read.

 

So Q1) Is it suitable to use the BatchMove components in this way?

Current test: 2 identical tables in SQLite. Run it once and it fills the table (that actually works).

Modify the destination table (e.g. blank a few field values)

Rerun it and initially I get an exception: "Not Implemented".

By trial and error I added an event handler for OnFindDestRecord, which I thought I wouldn't need since I am using TFDTable, the SQLite table has a primary key, and the field structures are identical.

Now that I have hand-coded the locate from source to destination, it gives me:

Project FDBatchMoveScratch.exe raised exception class EFDDBEngineException with message '[FireDAC][Phys][SQLite]-312. Exact update affected [0] rows, while [1] was requested'.

Project FDBatchMoveScratch.exe raised exception class EFDException with message '[FireDAC][DApt]-400. Update command updated [0] instead of [1] record. Possible reasons: update table does not have PK or row identifier, record has been changed/deleted by another user'.

I added event handlers for all of the BatchMove events to try and work out what is going on: it is trying to update the 1st row, even though the row is identical.

 

So I am clearly missing something (or a lot of things).

 

For info, I have 2 FDConnections and an FDBatchMove; in fact, to avoid ambiguity, I've pasted the objects below.

 

Q2) Can anyone point me to a decent walkthrough of setting up one-way sync and resync?

 

I could have hand-coded a compare, update, insert, delete routine in this time :-(.

 

As always, thank you for your time with this. 

 

object FDBatchMove1: TFDBatchMove
  Reader = FDBatchMoveDataSetReader1
  Writer = FDBatchMoveDataSetWriter1
  Mode = dmAppendUpdate
  Options = [poIdentityInsert, poSkipUnmatchedDestFields, poUseTransactions]
  Mappings = <>
  LogFileAction = laAppend
  LogFileName = 'Data.log'
  OnProgress = FDBatchMove1Progress
  OnFindDestRecord = FDBatchMove1FindDestRecord
  OnWriteValue = FDBatchMove1WriteValue
  OnWriteRecord = FDBatchMove1WriteRecord
  Left = 384
  Top = 216
end
object FDBatchMoveDataSetReader1: TFDBatchMoveDataSetReader
  DataSet = tblsrcWebUsers
  Left = 520
  Top = 120
end
object FDBatchMoveDataSetWriter1: TFDBatchMoveDataSetWriter
  Direct = True
  DataSet = tbldestWebUsers
  Left = 512
  Top = 296
end
object tbldestWebUsers: TFDTable
  IndexFieldNames = 'ID'
  Connection = FDConnectionSQLiteDest
  ResourceOptions.AssignedValues = [rvEscapeExpand]
  TableName = 'WEB_USERS'
  Left = 376
  Top = 288
  object tbldestWebUsersID: TIntegerField
    FieldName = 'ID'
    Origin = 'ID'
    ProviderFlags = [pfInUpdate, pfInWhere, pfInKey]
    Required = True
  end
  object tbldestWebUsersUSERNAME: TStringField
    FieldName = 'USERNAME'
    Origin = 'USERNAME'
    Required = True
    Size = 50
  end
  object tbldestWebUsersROLE: TStringField
    FieldName = 'ROLE'
    Origin = 'ROLE'
    Size = 50
  end
  object tbldestWebUsersIS_ACTIVE: TIntegerField
    FieldName = 'IS_ACTIVE'
    Origin = 'IS_ACTIVE'
  end
  object tbldestWebUsersCOMPANY_NAME: TStringField
    FieldName = 'COMPANY_NAME'
    Origin = 'COMPANY_NAME'
    Size = 50
  end
end
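
For completeness, driving it is then just a case of opening the datasets and calling Execute. A minimal sketch of my "Go button" handler (TForm1 / btnGo are placeholder names; everything else is the wiring pasted above):

procedure TForm1.btnGoClick(Sender: TObject);
begin
  tblsrcWebUsers.Open;
  tbldestWebUsers.Open;

  FDBatchMove1.Mode := dmAppendUpdate;  // already set in the DFM: insert missing rows, update existing ones
  FDBatchMove1.Execute;                 // I believe this returns the number of records processed

  tbldestWebUsers.Refresh;              // re-read the destination to see the result
end;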


 

OK, things move on. I needed to set FDBatchMoveDataSetWriter1.Direct to False and it worked.

Looking at the help on TFDBatchMoveDataSetWriter.Direct I can't see why it wouldn't work with SQLite.

 

Now that I have it working for this use case, I have stripped out some of the random changes I made while trying to get it working (removed the FDBatchMove1FindDestRecord handler, removed the persistent fields, etc.).

 

I took another look at the sample and was definitely none the wiser about merging data.

So I inch further on, still asking for any decent blogs / walkthroughs / gems of knowledge.

Things I know that are coming my way if I continue down this path:

- It doesn't help me decide if there are any records that need removing. Normally I use a "ladder" approach, ordering the source and destination in such a way that if all rows are in both, you can just .Next on each dataset, but if there is a row in dest whose calculated PK is < the current calculated source PK, it needs to be deleted (see the sketch after this list).

-- has anyone dealt with this in the TFDBatchMove world?

- Currently this is all SQLite to SQLite with identical field structures, just to build the example; moving on, it will need to cater for pulling a joined SQL query from the source and pushing it into a denormalised table in SQLite.
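
For reference, the delete pass of that "ladder" looks roughly like this when written against plain TDataSet (so nothing BatchMove-specific). It assumes both datasets are open and ordered by the same integer key; DeleteOrphans and the 'ID' field name are placeholders:

// Rough sketch (uses Data.DB): walk both datasets in key order and delete any
// destination row whose key no longer exists in the source.
procedure DeleteOrphans(ASrc, ADest: TDataSet);
var
  DestID: Integer;
begin
  ASrc.First;
  ADest.First;
  while not ADest.Eof do
  begin
    DestID := ADest.FieldByName('ID').AsInteger;
    // advance the source until its key catches up with the destination key
    while (not ASrc.Eof) and (ASrc.FieldByName('ID').AsInteger < DestID) do
      ASrc.Next;
    if ASrc.Eof or (ASrc.FieldByName('ID').AsInteger > DestID) then
      ADest.Delete   // no match in the source -> remove; Delete moves to the next row
    else
      ADest.Next;    // keys match, keep this row
  end;
end;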

TIA 

Regards,

Jason

...We make more progress forward and some steps back...

Using Calculated Fields in your Source Dataset - YOU CAN'T

It appears you can't use calculated fields in the source table. The reason I wanted to: there is a field that is a constant, and I didn't want to have to include it in the query.

 

TFDBatchMove.Mappings[?].SourceExpression

I thought you would be able to pull fields from the source dataset, but you can't do that unless they are actually cited in the mappings. So I have Line1, Line2...Postcode and need to convert that into Address (those items concatenated with ~s). I thought I could do this with Line1 etc. from the source query, but as it doesn't appear in the mappings, you get a field not found (Line1||"~"||Line2).
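
The workaround I'm leaning towards is to do the concatenation (and the constant from the previous post) in the source SELECT itself, so the reader's dataset already exposes a plain ADDRESS field and no SourceExpression is needed. A rough sketch; qrySrc and the LINE1 / LINE2 / POSTCODE / SOURCE_SYSTEM names are just illustrative, not my real schema:

// Compute ADDRESS and the constant in the source query, so the BatchMove only
// ever sees ordinary fields.
qrySrc.SQL.Text :=
  'SELECT ID, USERNAME, ' +
  '       LINE1 || ''~'' || LINE2 || ''~'' || POSTCODE AS ADDRESS, ' +
  '       ''LEGACY'' AS SOURCE_SYSTEM ' +   // the constant value I did not want to hand-code per row
  'FROM WEB_USERS';
FDBatchMoveDataSetReader1.DataSet := qrySrc;
// With the field names now matching the destination, ADDRESS maps across like any
// other column (as far as I can tell, an empty Mappings collection maps by name).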

 

 

Is it not possible in SQLite to create an INSERT, UPDATE, DELETE trigger on the source table to sync it to the destination table?

 

  On 3/4/2025 at 9:26 AM, Die Holländer said:

Is it not possible in SQLite to create an INSERT, UPDATE, DELETE trigger on the source table to sync it to the destination table?

 

Thanks for replying; I was starting to think I was the only BatchMover :-).

I am not familiar enough with SQLite to know if you can use triggers between 2 separate databases.

Regardless, in my actual use case the SQLite <> SQLite setup was a stepping stone to the actual source, which would be Firebird (an older version of FB).

 

Things are moving on. I wish I hadn't started with BatchMove, as using something I have used before would have required a lot less head scratching, but it has been a good learning curve for TFDBatchMove.

 

Other gotchas... If you don't assign a SourceFieldName or SourceExpression, then in FDBatchMoveWebUsersWriteValue for that field you get whatever the previous value was. E.g. I need to put a value into a field, Address, where Address is going to be built in FDBatchMoveWebUsersWriteValue. There is no source field to base it on, so the mapping just has a destination field. When FDBatchMoveWebUsersWriteValue gets invoked for the address, AValue actually still contains the phone number from the previous call to the event handler - a bit messy, I think.

The BatchMove is mostly used when you have to dump a large dataset into a table once. I think what you want is to sync tables, so that when something is changed in the one table the same change happens in the other table automatically. First you make the tables the same with the data, and then the trigger does the rest.

The trigger should also work across two databases, especially when they are on the same database server.

I'm also not familiar with SQLite, but do a Google search on something like "SQLite INSERT, UPDATE, DELETE trigger two tables".

I can DM you an example of an MSSQL trigger that does this if you wish.
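
For two tables in the same SQLite database the shape should be much like the MSSQL one. Roughly, from the Delphi side (untested on my part, the connection/table/column names are examples only, and as far as I know a SQLite trigger body cannot reach into a second attached database, so this is a same-database sketch):

// Keep WEB_USERS_COPY in step with WEB_USERS via three triggers (example names).
SrcConnection.ExecSQL(
  'CREATE TRIGGER IF NOT EXISTS trg_webusers_ai AFTER INSERT ON WEB_USERS BEGIN ' +
  'INSERT INTO WEB_USERS_COPY (ID, USERNAME, ROLE) VALUES (NEW.ID, NEW.USERNAME, NEW.ROLE); END;');
SrcConnection.ExecSQL(
  'CREATE TRIGGER IF NOT EXISTS trg_webusers_au AFTER UPDATE ON WEB_USERS BEGIN ' +
  'UPDATE WEB_USERS_COPY SET USERNAME = NEW.USERNAME, ROLE = NEW.ROLE WHERE ID = NEW.ID; END;');
SrcConnection.ExecSQL(
  'CREATE TRIGGER IF NOT EXISTS trg_webusers_ad AFTER DELETE ON WEB_USERS BEGIN ' +
  'DELETE FROM WEB_USERS_COPY WHERE ID = OLD.ID; END;');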

  On 3/5/2025 at 7:07 AM, Die Holländer said:

The BatchMove is mostly used when you have to dump a large dataset into a table once. I think what you want is...

Yes, I can see that, as well as transforming from one format to another (it seems to be used a lot for TDataSet -> JSON etc.). I thought, when I saw update and delete, that it would be a more general-purpose component.

 

Thanks for the offer on the trigger, but I don't think I would use it at the moment. I have written replication engines for Firebird and am happy with triggers. I was hoping for a simple (to develop) engine, but I can see myself moving away from the BatchMove component, especially as the record counts grow - the find-matching-record step is so poor, in terms of me not being able to .Next the target dataset, that when I get to thousands of records it is going to be horribly slow.

I guess when I have done this working prototype, I'll go back and make my "zipper / ladder" algorithm a bit more componentised and use that instead.

 

It's a shame as the component does a lot.
