bazzer747

FireDAC BatchMove


Hi,

I'm just testing BatchMove to back up my tables from a live MSSQL server to a backup MSSQL server, so I'm using a SQLReader and SQLWriter with the BatchMove component.

 

However, when I execute the BatchMove it only copies over 100 records. I see no property that says 'copy every record'. I do see a 'CommitCount' property set at 100, and if I change that to, say, 200, I get 200 records copied. But reading about this property indicates it is about committing transactions in batches of this size to avoid possible limits. So I would have thought it would do 100, then the next 100, and so on to the end.

 

Am I missing a property here which will default to copying the whole table over? I'm finding it quite difficult to find answers like this in the FireDAC DocWiki.
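For anyone following along, the basic wiring described above can be sketched like this (component, connection, and table names are placeholders, not the poster's actual setup):

```pascal
// Minimal TFDBatchMove sketch: copy one table between two servers.
// ConLive/ConBackup are hypothetical TFDConnections already configured.
FDBatchMoveSQLReader1.Connection := ConLive;
FDBatchMoveSQLReader1.SQL.Text := 'SELECT * FROM MyTable';

FDBatchMoveSQLWriter1.Connection := ConBackup;
FDBatchMoveSQLWriter1.TableName := 'MyTable';

FDBatchMove1.Reader := FDBatchMoveSQLReader1;
FDBatchMove1.Writer := FDBatchMoveSQLWriter1;
FDBatchMove1.Mode := dmAlwaysInsert; // plain insert-style copy
FDBatchMove1.CommitCount := 100;     // commit every 100 rows; should not cap the total
FDBatchMove1.Execute;

ShowMessage(IntToStr(FDBatchMove1.ReadCount) + ' rows read');
```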


Have you got the FetchOptions - Mode property set to fmAll in the TFDConnection you are using for your batch move?

Guest

hi @bazzer747

 

see if this video by Pawel Glowacki (Embarcadero MVP) helps you:

 

FireDAC ETL = Extract, Transform, Load - Pawel Glowacki

 

hug


Have you got the fetch options for your data source (typically a TFDQuery) set to AutoFetchAll = afAll ?

I have got the batch move components to work - but, like all FireDAC, getting the settings exactly right is a bit tricky!
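If it helps, the fetch settings being discussed look like this in code (a sketch; `ConLive` is a placeholder connection name, and datasets inherit these options from their connection unless overridden locally):

```pascal
// Ensure the reader side retrieves the full result set rather than
// stopping at the first rowset (RowsetSize defaults to 50).
ConLive.FetchOptions.Mode := fmAll;          // fetch all rows up front
ConLive.FetchOptions.AutoFetchAll := afAll;  // auto-fetch any remaining rows
```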

 


Hi Roger,

Yes, the TFDQuery FetchOptions AutoFetchAll is set to afAll. It looks like it gets this value from the TFDConnection, like the RowsetSize, as I didn't have to change anything.

 

And on the Pawel video, yes, I looked at that to set my test up. I'll start it from the beginning again, though, as I jumped in to just the BatchMove bits.

 


Something else which seems amiss :-): What I really want to do is make backups of tables on a live server and write them to tables on a different, backup server. Ignoring for a second the above issue, when I set up the SQLReader and SQLWriter to use the same TFDConnection, hence the same server, it works OK. When I set the SQLReader to point at the live server and the SQLWriter to a different backup server, I get an Access Violation!

 

I'll start a totally new/clean project just to try this configuration, as I can't see why what I am trying here shouldn't work; it just makes better sense when making backups for them to be on a totally different server.
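In case it helps with the Access Violation hunt, the intended two-server wiring would look roughly like this (a sketch with hypothetical names; the AV could well come from one of the components still pointing at an unassigned or shared connection):

```pascal
// Two independent connections, one per server (names are placeholders).
ConLive.ConnectionDefName := 'LiveDef';      // points at the Live server
ConBackup.ConnectionDefName := 'BackupDef';  // points at the backup server
ConLive.Connected := True;
ConBackup.Connected := True;

FDBatchMoveSQLReader1.Connection := ConLive;   // read from Live
FDBatchMoveSQLWriter1.Connection := ConBackup; // write to backup
FDBatchMove1.Execute;
```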

4 minutes ago, bazzer747 said:

What I really want to do is make backups of tables on a Live server and write them to tables on a different, backup server

For this you should configure replication on the server; IMHO that works best.


Hi, thanks for the suggestions, but I'm talking about MSSQL servers that are available to me from a web hosting company, and this functionality isn't available to me. When I ask them about making backups, they say I need to manage that functionality myself. They ensure their servers are backed up; the rest is up to users. Maybe it's different with other hosting companies.

Guest

hi @bazzer747

 

FireDAC has its own hierarchy, like any other suite, so pay attention to this:

  • I'm not talking about class-level code here, OK?
  • FDManager is your starting point for setting up the FireDAC components
  • Quote

    This class is responsible for connection definitions and connections management.

    Use TFDCustomManager to manage connection definitions and connection objects. Consider accessing the singleton instance of TFDManager through the FDManager function instead of explicitly creating it.
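A minimal sketch of defining a connection through the FDManager singleton (server name and credentials are placeholders):

```pascal
uses
  FireDAC.Comp.Client, FireDAC.Stan.Intf;

var
  Def: IFDStanConnectionDef;
begin
  // Register a named connection definition with the FDManager singleton.
  Def := FDManager.ConnectionDefs.AddConnectionDef;
  Def.Name := 'LiveDef';
  Def.Params.DriverID := 'MSSQL';
  Def.Params.Values['Server'] := 'live.example.com'; // placeholder host
  Def.Params.Database := 'MyDb';
  Def.Params.UserName := 'user';
  Def.Params.Password := 'secret';
  FDManager.Active := True;
  // A TFDConnection can now use it:
  //   FDConnection1.ConnectionDefName := 'LiveDef';
end;
```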

     


 

 

hug


Is it possible to set up the second server as a trusted server on the first? If so, you can use sp_AddServer and sp_AddSynonym to create "virtual" remote tables and do all your copying in SQL from the primary server.
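For what it's worth, the modern equivalents are `sp_addlinkedserver` plus `CREATE SYNONYM`; run from the primary server, they might look like this (server, database, and table names are placeholders, and this assumes you have permission to create linked servers on the host):

```pascal
// Register the backup server as a linked server, create a synonym for
// the remote table, then copy entirely in SQL (ConLive is a placeholder
// TFDConnection to the primary server).
ConLive.ExecSQL(
  'EXEC sp_addlinkedserver @server = N''BackupSrv'', ' +
  '@srvproduct = N'''', @provider = N''SQLNCLI'', ' +
  '@datasrc = N''backup.example.com''');
ConLive.ExecSQL(
  'CREATE SYNONYM dbo.MyTable_Backup FOR BackupSrv.BackupDb.dbo.MyTable');
ConLive.ExecSQL(
  'INSERT INTO dbo.MyTable_Backup SELECT * FROM dbo.MyTable');
```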

 

If not, ApexSQL Data Diff is great for partial replications. If the tables have a "Time Changed" field, then you would be able to limit the amount of data that you need to pump on each sync. 

4 hours ago, bazzer747 said:

I'm talking about MSSQL Servers that are available to me from a Web hosting company,

I'd still use jobs for creating compressed backups and copying them to another location.

I don't know how this BatchMove works, but I'm sure it won't lock anything, nor will it transfer transaction logs (if any), so there's a high chance the data won't stay consistent.

 


This is all interesting and helpful - but it is drifting off topic!

The original question is still not answered (how to make it work!).

Then @Bazzer747 asks a second question about using batch move with a different TFDConnection. (I wonder if this should be in another thread on this forum.)

This should also work.

"work" = move all the records in the source dataset to the destination dataset.


Hi,

 

Thank you for all the responses. This always encourages me to continue testing and following advice to get things working.

 

As mentioned above, I have now set up a new project whose sole purpose is to copy records from a table on one server to a 'backup' table on a different server.

 

I started with the TFDManager as recommended above (although I'm led to believe one is created in the background anyway). And it does work. All records are copied over successfully. So the problem I'm having in my original project isn't solved, but at least I can't suspect the components any longer; it must be how they are set up. Some more checking here!

 

I'm not into spending on 3rd-party tools to do what FireDAC promises to do, I'm not into that world, more's the pity.


Guys, I'm having trouble using TBatchMove in my project. I'm migrating data from a Firebird database to a PostgreSQL database. I thought to retrieve the data from the FB database without the primary key fields, because the destination tables in the PG database already have a primary key set as serial, so the DB would create the indexes I needed. However, TBatchMove doesn't seem to work well with implicit values; every time I try to execute the command, it tries to pull the primary key value from the closest field. When I added mappings to the component... well, it started trying to send null values to the table. I'm getting desperate, any solutions for this problem?
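For context, the kind of explicit mapping being attempted might look like this (a sketch; all field and table names are placeholders). The idea is to select only the non-key columns on the reader side and map each one explicitly, so the serial `id` column is never mentioned and PostgreSQL generates it:

```pascal
// Reader: select only the non-key columns from Firebird.
FDBatchMoveSQLReader1.SQL.Text := 'SELECT NAME, PRICE FROM ITEMS';
// Writer: target PostgreSQL table whose "id" column is serial.
FDBatchMoveSQLWriter1.TableName := 'items';

// Explicit source-to-destination mappings; the serial PK is omitted.
FDBatchMove1.Mappings.Clear;
with FDBatchMove1.Mappings.Add do
begin
  SourceFieldName := 'NAME';
  DestinationFieldName := 'name';
end;
with FDBatchMove1.Mappings.Add do
begin
  SourceFieldName := 'PRICE';
  DestinationFieldName := 'price';
end;
FDBatchMove1.Execute;
```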

