Quarks

How to use unit BufferedFileStream & FastCopy?


Hello,

 

Can anyone give example code in Delphi that uses the BufferedFileStream & FastCopy units to copy a file?

My current Delphi newbie code:
 

{ FastCopyFile implementation }
function FastCopyFile(const ASourceFileName, ADestinationFileName: TFileName;
  CopyMode: TFastCopyFileMode;
  Callback: TFastCopyFileNormalCallback;
  Callback2: TFastCopyFileMethodCallback): Boolean; overload;
const
  BUFFER_SIZE = 524288; // 512 KB blocks; change this to tune the speed

var
  Buffer: array of Byte;
  ASourceFile, ADestinationFile: THandle;
  CreationDisposition: DWORD;
  FileSize, TotalBytesWritten: Int64;
  BytesRead, BytesWritten, BytesWritten2: DWORD;
  CanContinue, CanContinueFlag: Boolean;

begin
  FileSize := 0;
  TotalBytesWritten := 0;
  CanContinue := True;
  SetLength(Buffer, BUFFER_SIZE);

  // Manage the creation disposition flag
  CreationDisposition := CREATE_ALWAYS;
  if CopyMode = fcfmAppend then
    CreationDisposition := OPEN_ALWAYS;

  // Open the source file in read mode
  ASourceFile := OpenLongFileName(ASourceFileName, GENERIC_READ, 0, OPEN_EXISTING);
  if ASourceFile <> 0 then
  try
    FileSize := FileSeek(ASourceFile, 0, FILE_END);
    FileSeek(ASourceFile, 0, FILE_BEGIN);

    // Open the destination file in write mode (create/append)
    ADestinationFile := OpenLongFileName(ADestinationFileName, GENERIC_WRITE,
      FILE_SHARE_READ, CreationDisposition);

    if ADestinationFile <> 0 then
    try
      // In append mode, jump to the end of the destination file
      if CopyMode = fcfmAppend then
        FileSeek(ADestinationFile, 0, FILE_END);

      // For each block in the source file.
      // Note: the LongWord cast truncates the file position to 32 bits,
      // which is why this routine cannot copy files of 4 GB or more.
      while CanContinue and (LongWord(FileSeek(ASourceFile, 0, FILE_CURRENT)) < FileSize) do
      begin
        // Read from the source
        ReadFile(ASourceFile, Buffer[0], BUFFER_SIZE, BytesRead, nil);
        if BytesRead <> 0 then
        begin
          // Write to the destination
          WriteFile(ADestinationFile, Buffer[0], BytesRead, BytesWritten, nil);

          // Retry a short write once (e.g. for WiFi connections)
          if BytesWritten < BytesRead then
          begin
            WriteFile(ADestinationFile, Buffer[BytesWritten],
              BytesRead - BytesWritten, BytesWritten2, nil);
            Inc(BytesWritten, BytesWritten2);
            if BytesWritten < BytesRead then
              RaiseLastOSError;
          end;

          // Notify the caller of the current state
          Inc(TotalBytesWritten, BytesWritten);
          CanContinueFlag := True;
          if Assigned(Callback) then
            Callback(ASourceFileName, TotalBytesWritten, FileSize, CanContinueFlag);
          CanContinue := CanContinue and CanContinueFlag;
          if Assigned(Callback2) then
            Callback2(ASourceFileName, TotalBytesWritten, FileSize, CanContinueFlag);
          CanContinue := CanContinue and CanContinueFlag;
        end;
      end;

    finally
      CloseHandle(ADestinationFile);
    end;

  finally
    CloseHandle(ASourceFile);
  end;

  // If cancelled, remove the partial destination file
  if not CanContinue then
    if FileExists(ADestinationFileName) then
      DeleteFile(ADestinationFileName);

  // Result (checking the CanContinue flag isn't needed)
  Result := (FileSize <> 0) and (FileSize = TotalBytesWritten);
end;

I wanted to incorporate both the BufferedFileStream and FastCopy units; any help is appreciated.

The problem with the current FastCopy unit is that it can only copy files below 4 GB in size. I want it to be able to copy files of practically unlimited size.


A buffered file stream will get you no performance improvement in this case. It only has an advantage when reading and writing (many) small parts of a file, but you are already using a buffer of 512 KB.
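To illustrate (a sketch, not from this thread and untested here): buffering pays off when you do many tiny reads, because a plain TFileStream issues one OS call per read, while a buffered stream serves most reads from memory. This uses the RTL's System.Classes.TBufferedFileStream (available since Delphi 10.1), not the third-party unit discussed above:

```pascal
uses
  System.SysUtils, System.Classes, System.Diagnostics;

procedure CompareByteReads(const AFileName: string);
var
  Plain: TFileStream;
  Buffered: TBufferedFileStream;
  B: Byte;
  SW: TStopwatch;
begin
  // Read the file one byte at a time through a plain stream:
  // one OS read call per byte.
  Plain := TFileStream.Create(AFileName, fmOpenRead or fmShareDenyWrite);
  try
    SW := TStopwatch.StartNew;
    while Plain.Read(B, 1) = 1 do ;
    Writeln('Unbuffered: ', SW.ElapsedMilliseconds, ' ms');
  finally
    Plain.Free;
  end;

  // Same loop through a buffered stream: most reads are served
  // from the in-memory buffer instead of the OS.
  Buffered := TBufferedFileStream.Create(AFileName,
    fmOpenRead or fmShareDenyWrite);
  try
    SW := TStopwatch.StartNew;
    while Buffered.Read(B, 1) = 1 do ;
    Writeln('Buffered:   ', SW.ElapsedMilliseconds, ' ms');
  finally
    Buffered.Free;
  end;
end;
```

With 512 KB blocks, as in your code, both approaches already amortize the OS-call overhead, so buffering adds nothing.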


OK, thanks for the clarification.

From what I understand, the BufferedFileStream unit supports files larger than 4 GB by using SetFilePointerEx. What I need is a code example for copying a file using BufferedFileStream.

Would you please help me?

19 minutes ago, Quarks said:

OK, thanks for the clarification.

From what I understand, the BufferedFileStream unit supports files larger than 4 GB by using SetFilePointerEx. What I need is a code example for copying a file using BufferedFileStream.

Why not use the Windows API CopyFileEx? As far as I know it has no size limitation. It also allows a callback for showing progress and possibly cancelling the copy. If you need an example, have a look at TFileSystem.CopyFileWithProgress in my dzlib.
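A minimal sketch of that approach (untested as written; the declarations follow the Windows API documentation, and depending on your Delphi version you may need to adjust the callback type or add casts). All sizes in the callback are 64-bit, so files over 4 GB are not a problem:

```pascal
uses
  Winapi.Windows, System.SysUtils;

// Progress callback: Windows calls this after each copied chunk.
function CopyProgress(TotalFileSize, TotalBytesTransferred, StreamSize,
  StreamBytesTransferred: LARGE_INTEGER; dwStreamNumber,
  dwCallbackReason: DWORD; hSourceFile, hDestinationFile: THandle;
  lpData: Pointer): DWORD; stdcall;
begin
  // QuadPart is a 64-bit value, so > 4 GB files report correctly.
  Writeln(Format('%d of %d bytes copied',
    [TotalBytesTransferred.QuadPart, TotalFileSize.QuadPart]));
  Result := PROGRESS_CONTINUE; // return PROGRESS_CANCEL to abort
end;

procedure CopyWithProgress(const ASource, ADest: string);
var
  Cancel: BOOL;
begin
  Cancel := False;
  // Last parameter 0 = overwrite an existing destination;
  // pass COPY_FILE_FAIL_IF_EXISTS to refuse overwriting.
  if not CopyFileEx(PChar(ASource), PChar(ADest), @CopyProgress, nil,
    @Cancel, 0) then
    RaiseLastOSError;
end;
```

Setting the Cancel variable to True from another thread aborts the copy, which is how a TeraCopy-style cancel button would work.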

 

Or am I misunderstanding your requirements?


Thanks. What I need is just the fastest way to copy files in Windows without any file size limits.

The aforementioned units are claimed to be faster than other approaches. I wanted to try to make a free & simple app like TeraCopy.

So far only a TeraCopy alternative called "KillCopy" does the fastest transfers.


Hi David, really glad to see you here. I just wanted to test it out myself for copying/moving large files (5 GB+).

I think using the 'normal' Windows API has the problem of sharing its transfer bandwidth with other apps in Windows.

I want to get a sort of 'exclusive' transfer bandwidth in Windows.

Could you please give me example code for using your BufferedFileStream unit? Your help is really appreciated.

1 hour ago, Quarks said:

I want to get a sort of 'exclusive' transfer bandwidth in Windows.

Nothing you have described here goes any way towards achieving that.

 

1 hour ago, Quarks said:

Could you please give me example code for using your BufferedFileStream unit?

No, because it's not useful for the problem that you are trying to solve.

 

Actually, I suspect you are solving the wrong problem. My advice is that you elaborate on the problem you face rather than talking about potential solutions.


Why don't you benchmark your FastCopy code against the Windows copy on a 4 GB file and see if it is really worth trying?

On 8/21/2021 at 7:15 PM, David Heffernan said:

What's wrong with asking the system to copy a file? 100% you should not be using this buffered file stream code. 

I avoid it for the following reasons. 

First of all, the CopyFileEx API documentation does not specify whether the source file is opened for shared access or not. So I don't know how it behaves when multiple users are accessing the file, or whether that behavior may change in the future.

 

Secondly, when I copy a file somewhere, I emphatically want it to inherit the access properties of the target directory; otherwise the access rights become unpredictable. Unfortunately, the CopyFileEx API documentation says "The security resource properties (ATTRIBUTE_SECURITY_INFORMATION) for the existing file are copied to the new file". I really don't want that to happen.
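For completeness, this is why I stick to a plain stream copy loop: a file created fresh in the destination directory gets that directory's inherited ACL by default. A minimal sketch (TFileStream only, no progress or cancel handling, buffer size picked arbitrarily):

```pascal
uses
  System.SysUtils, System.Classes;

procedure CopyInheritingAcl(const ASource, ADest: string);
const
  BUFFER_SIZE = 512 * 1024; // 512 KB blocks
var
  Src, Dst: TFileStream;
  Buffer: TBytes;
  BytesRead: Integer;
begin
  SetLength(Buffer, BUFFER_SIZE);
  Src := TFileStream.Create(ASource, fmOpenRead or fmShareDenyWrite);
  try
    // fmCreate creates the file with a default security descriptor,
    // so it inherits the target directory's access rights.
    Dst := TFileStream.Create(ADest, fmCreate);
    try
      repeat
        BytesRead := Src.Read(Buffer[0], BUFFER_SIZE);
        if BytesRead > 0 then
          Dst.WriteBuffer(Buffer[0], BytesRead);
      until BytesRead = 0;
    finally
      Dst.Free;
    end;
  finally
    Src.Free;
  end;
end;
```

Note this also gives you full control over the source sharing mode, which CopyFileEx does not document.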

 

On 8/22/2021 at 5:16 AM, Quarks said:

I think using the 'normal' Windows API has the problem of sharing its transfer bandwidth with other apps in Windows.

I want to get a sort of 'exclusive' transfer bandwidth in Windows.

Are you doing local copying or copying across the network?
 

 

1 hour ago, A.M. Hoornweg said:

does not specify whether the source file is opened for shared access or not

From empirical evidence, it seems to open files read-only, deny-none. I've never experienced a sharing violation, and I have multiple concurrent clients that pull down changed .exe files from a central share.
 

3 hours ago, A.M. Hoornweg said:

I avoid it for the following reasons.

Hm, but you are not @Quarks, so your reasons don't count in this context.

7 minutes ago, dummzeuch said:

Hm, but you are not @Quarks, so your reasons don't count in this context.

David asked "What's wrong with". So I pointed out some reasons. The OP may have other reasons, admittedly. 🙂

8 hours ago, Fr0sT.Brutal said:

Why don't you benchmark your FastCopy code against the Windows copy on a 4 GB file and see if it is really worth trying?

I did; it performs pretty much the same as the Windows copy. Except when I use the TeraCopy alternative called "KillCopy".

In my tests KillCopy was able to use the available transfer bandwidth exclusively for local copies, so it managed to saturate the bandwidth other Windows apps need for local transfers. This is problematic for daily computer usage, but useful when all we are doing is backing up/copying/moving files.

According to Detect It Easy, KillCopy may be written in Delphi, hence I pursue the solution here. KillCopy seems to have been abandoned since 2006, and attempts to locate the author have failed; something big must have happened to them.

But somehow it still manages to work on Windows 10.

The benchmark results (local copy, partition to partition, GPT, HDD in a USB 3.1 external enclosure, mostly big files (5 GB+)):

KillCopy: 126 MB/s
TeraCopy: 90 MB/s
Ultracopier: 50 MB/s

When we need to transfer big files fast, even a few extra kilobytes per second really make a difference.
 

 

5 hours ago, Lars Fosdal said:

Are you doing local copying or copying across the network?
 

 

From empirical evidence, it seems to open files read-only, deny-none. I've never experienced a sharing violation, and I have multiple concurrent clients that pull down changed .exe files from a central share.
 

 

Local copy; I haven't tried network shares yet.


A random thought I had - what if you copied from one memory mapped file onto another, with multiple threads?  

I am not even sure that is possible, and it might be a problem with such large files, due to disk storage fragmentation.

 

Edit: Not sure if it is possible to memory-map a file on a removable USB drive.

 

Perhaps you simply need a faster USB unit 😉
https://www.tripplite.com/products/usb-connectivity-types-standards

 

 

As for copying across network drives, this is where CopyFileEx really does a lot of low-level problem solving for you.

50 minutes ago, Lars Fosdal said:

A random thought I had - what if you copied from one memory mapped file onto another, with multiple threads?

I doubt this would bring any gain. On the contrary, the HDD would have to reposition the write head every time.


Physical heads are increasingly rare these days.
My speculation is that the low level DMA mechanisms in the OS might outperform the higher level file routines - but it is pure speculation.

7 hours ago, Lars Fosdal said:

Physical heads are increasingly rare these days.

?? How could magnetic HDDs work without heads?

2 hours ago, Fr0sT.Brutal said:

?? How could magnetic HDDs work without heads?

HDD is so old. All the new fancy kids are using SSD.


To be fair, one of the three disk drives in my laptop is rotating.
I'm oldschool (and very large SSDs are still too expensive).

51 minutes ago, Lars Fosdal said:

I'm oldschool (and very large SSDs are still too expensive).

I am wondering why that is still the case; by my prediction, SSDs should have reached break-even much earlier.

https://blocksandfiles.com/2021/01/25/wikibon-ssds-vs-hard-drives-wrights-law/

According to this blog, we still have to wait some more time.

 

On the other hand, I think SSD/HDD pricing has many political components in it, so both parties try to keep prices high as long as possible, of course.

 

On 8/24/2021 at 8:25 PM, Lajos Juhász said:

HDD is so old. All the new fancy kids are using SSD.

1 TB of SSD is still too expensive

23 minutes ago, Fr0sT.Brutal said:

1 TB of SSD is still too expensive

Define too expensive. (A WD Purple 3.5" 1 TB costs about €47.58 and runs at only 5400 RPM; the faster WD Black 3.5" 1 TB costs €77.41. For an SSD, take for example the Samsung 870 QVO 1 TB (MZ-77Q1T0BW, SATA 6 Gb/s) at €106.09. For me that is not a huge difference.)


It's impossible to say flat out that SSDs are too expensive. It depends entirely on the usage: SSDs suit usage that favours speed over volume, while HDDs are better suited to usage that favours volume over speed.

27 minutes ago, Fr0sT.Brutal said:

1 TB of SSD is still too expensive

Not sure how you calculate that?

 

I just checked prices for a 2.5" SSD vs. an HDD at 1 TB.

Let's say a 1 TB SSD is roughly €180 vs. roughly €65 for the HDD, which makes an SSD up-cost of €115.

If I assume the SSD saves a minimum of 5 min/day of working (compile/loading/saving) time, that sums to approx. 5 min × 200 (active working) days/year = 1000 min/year = 16.7 hours/year of saved working time.

Assuming an hourly working cost of at least €25, this means minimum working-cost savings of €417/year vs. the SSD up-cost of €115.

That is only for one year; usually I would calculate with a lifetime of >= 3 years, so the ratio gets better every year.

It really doesn't make sense to save money on the wrong side.

