
File Copy in Java – Benchmark

08.08.2010

Yesterday I wondered whether the copyFile method in JTheque Utils was the best approach or whether I needed to change it. So I decided to run a benchmark.

So I searched for all the ways to copy a file in Java, even the bad ones, and found the following ten methods:

  1. Naive Streams Copy: open two streams, one to read and one to write, and transfer the content byte by byte.
  2. Naive Readers Copy: open two readers, one to read and one to write, and transfer the content character by character.
  3. Buffered Streams Copy: same as the first, but using buffered streams instead of simple streams.
  4. Buffered Readers Copy: same as the second, but using buffered readers instead of simple readers.
  5. Custom Buffer Stream Copy: same as the first, but reading the file into a simple byte array used as a buffer instead of byte by byte.
  6. Custom Buffer Reader Copy: same as the fifth, but using a Reader instead of a stream.
  7. Custom Buffer Buffered Stream Copy: same as the fifth, but using buffered streams.
  8. Custom Buffer Buffered Reader Copy: same as the sixth, but using buffered readers.
  9. NIO Buffer Copy: using NIO channels and a ByteBuffer to make the transfer.
  10. NIO Transfer Copy: using NIO channels and a direct transfer from one channel to the other.

I think these are the ten principal ways to copy one file to another. The code for the different methods is available at the end of the post. Note that the methods using Readers only work with text files: Readers perform character-by-character reading, so they don't work on a binary file like an image. Here I used a buffer size of 4096 bytes; of course, using a higher value improves the performance of the custom buffer strategies.
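To make the two extremes concrete, here is a rough sketch of the Naive Streams Copy and the Custom Buffer Stream Copy. The method and class names are mine, not taken from the benchmark sources, which are linked at the end of the post:

```java
import java.io.*;

public class CopySketches {

    // Method 1: naive stream copy — one read() and one write() call per byte.
    static void naiveStreamCopy(File source, File target) throws IOException {
        try (InputStream in = new FileInputStream(source);
             OutputStream out = new FileOutputStream(target)) {
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
        }
    }

    // Method 5: custom buffer stream copy — one read() call per 4096-byte chunk.
    static void customBufferStreamCopy(File source, File target) throws IOException {
        try (InputStream in = new FileInputStream(source);
             OutputStream out = new FileOutputStream(target)) {
            byte[] buffer = new byte[4096]; // same buffer size as in the benchmark
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}
```

The only difference between the two is the granularity of each read: the second version makes thousands of times fewer method invocations and system calls for the same amount of data.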

For the benchmark, I ran the tests using files of different sizes.

  1. Little file (5 KB)
  2. Medium file (50 KB)
  3. Big file (5 MB)
  4. Fat file (50 MB)

I ran the tests first using text files and then using binary files. The source file is not on the same hard disk as the target file.

I used a benchmark framework, described here, to run the tests of all the methods. The tests were made on my personal computer (Ubuntu 10.04 64 bits, Intel Core 2 Duo 3.16 GHz, 6 GB DDR2, SATA hard disks).

And after a long benchmarking session, here are the results:

Little Text File - All results

We see that the method with simple streams (Naive Streams) is by far the slowest, followed by the simple readers method (Naive Readers). The readers method is a lot faster than the simple streams because FileReader uses a buffer internally. To see what happens with the others, here is the same graph without the first two methods:

Little Text File - Best results

The two best versions are the Buffered Streams and Buffered Readers. This is because the buffered streams and readers can write such a small file in only one operation. The times here are in microseconds, so the differences between the methods are really small and the results are not very significant.
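For comparison with the naive version, here is a sketch of the Buffered Streams Copy. The names are mine; the point is that the loop is the same byte-by-byte loop as the naive copy, but the buffered wrappers turn most read() and write() calls into cheap in-memory operations:

```java
import java.io.*;

public class BufferedCopySketch {

    // Method 3: byte-by-byte loop, but BufferedInputStream/BufferedOutputStream
    // batch the actual I/O into large block reads and writes behind the scenes.
    static void bufferedStreamsCopy(File source, File target) throws IOException {
        try (InputStream in = new BufferedInputStream(new FileInputStream(source));
             OutputStream out = new BufferedOutputStream(new FileOutputStream(target))) {
            int b;
            while ((b = in.read()) != -1) {
                out.write(b);
            }
        }
    }
}
```

Closing the BufferedOutputStream (here done by try-with-resources) flushes the last partially filled buffer to disk.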

Now, let’s test with a bigger file.

Medium Text File

We can see that the versions with Readers are a little slower than the versions with streams. This is because Readers work on characters: for every read() operation a byte-to-char conversion must be made, and the reverse conversion must be made on the other side when writing.

Another observation is that the custom buffer strategy is faster than the internal buffering of the streams, and that using a custom buffer on top of a buffered stream instead of a simple stream doesn't change anything. The same observation can be made for the custom buffer with readers: it's the same with or without buffered readers. This is logical: with a custom buffer we make 4096 times (the size of the buffer) fewer invocations of the read method, and because we ask for a complete buffer each time, we don't perform many I/O operations. So the internal buffer of the streams (or the readers) is not useful here. The NIO buffer strategy is almost equivalent to the custom buffer. And the direct transfer using NIO is slower here than the custom buffer methods. I think this is because, for files of this size, the cost of invoking native methods at the operating system level is higher than the cost of simply making the copy.
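The NIO buffer strategy mentioned above can be sketched as follows (names are mine, with the same 4096-byte buffer size as the rest of the benchmark):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class NioBufferCopy {

    // Method 9: read a chunk from the source channel into a ByteBuffer,
    // flip it, drain it into the target channel, then clear and repeat.
    static void nioBufferCopy(File source, File target) throws IOException {
        try (FileChannel in = new FileInputStream(source).getChannel();
             FileChannel out = new FileOutputStream(target).getChannel()) {
            ByteBuffer buffer = ByteBuffer.allocateDirect(4096);
            while (in.read(buffer) != -1) {
                buffer.flip();                 // switch the buffer from writing to reading
                while (buffer.hasRemaining()) {
                    out.write(buffer);         // the channel may not drain it in one call
                }
                buffer.clear();                // ready for the next read
            }
        }
    }
}
```

A direct buffer (allocateDirect) lets the JVM pass memory straight to native I/O calls, which is why this performs about the same as the custom byte-array buffer.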

Big Text File - All results

Here we see that the Naive Readers method shows its limits as the file size grows. So let's concentrate on the best methods only and remove the Naive Readers:

Big Text File - Best results

It's now clear that the custom buffer strategy is better than simple buffered streams or readers, and that combining a custom buffer with buffered streams is really useful for bigger files. The Custom Buffer Readers method is better than Custom Buffer Streams because FileReader uses a buffer internally.

And now, let's continue with a bigger file:

Fat Text File Results

You can see that copying a 50 MB file takes less than 500 ms with the custom buffer strategy, and even less than 400 ms with the NIO Transfer method. Really quick, isn't it? We can see that for a big file, the NIO Transfer starts to show an advantage; we'll see that better in the binary file benchmarks, where we will start directly with a big file (5 MB):

Big Binary File Results

We can draw the same conclusions as for the text files: of course, the buffered streams method is not fast, and the other methods are really close.

Fat Binary File Results

Here again we see that the NIO Transfer gains more of an advantage the bigger the file is.

And just for fun, a huge file (1.3 GB):

Enormous Binary File Results

We see that all the methods are really close, but the NIO Transfer method has an advantage of 500 ms, which is not negligible.
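The NIO Transfer strategy that wins on these big files hands the copy over to the operating system via the channel transfer methods. A sketch (names are mine; the loop is needed because a single transferFrom() call is not guaranteed to move all the requested bytes):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.nio.channels.FileChannel;

public class NioTransferCopy {

    // Method 10: delegate the copy to the OS with FileChannel.transferFrom().
    // The position argument is the write position in the *target* file.
    static void nioTransferCopy(File source, File target) throws IOException {
        try (FileChannel in = new FileInputStream(source).getChannel();
             FileChannel out = new FileOutputStream(target).getChannel()) {
            long position = 0;
            long size = in.size();
            while (position < size) {
                position += out.transferFrom(in, position, size - position);
            }
        }
    }
}
```

Because the kernel can move the data directly between files (or memory-map the source), no bytes have to pass through a Java-side buffer at all.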

Conclusion

In conclusion, the NIO Transfer method is the best one for big files, but it's not the fastest for little files (< 5 MB). The custom buffer strategy (and the NIO Buffer too) is also a really fast way to copy files. So perhaps the best method is one that uses a custom buffer strategy on little files and an NIO Transfer on big ones. But it would be interesting to also run the tests on other computers and operating systems.
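That hybrid strategy could be sketched like this. The 5 MB threshold is taken directly from these results and would need re-validation on other hardware; all names are mine:

```java
import java.io.*;
import java.nio.channels.FileChannel;

public class HybridCopy {

    // Threshold suggested by this benchmark; tune it for your own system.
    private static final long NIO_THRESHOLD = 5L * 1024 * 1024; // 5 MB

    // Small files: custom buffer stream copy. Big files: NIO channel transfer.
    static void copy(File source, File target) throws IOException {
        if (source.length() < NIO_THRESHOLD) {
            try (InputStream in = new FileInputStream(source);
                 OutputStream out = new FileOutputStream(target)) {
                byte[] buffer = new byte[4096];
                int read;
                while ((read = in.read(buffer)) != -1) {
                    out.write(buffer, 0, read);
                }
            }
        } else {
            try (FileChannel in = new FileInputStream(source).getChannel();
                 FileChannel out = new FileOutputStream(target).getChannel()) {
                long position = 0;
                long size = in.size();
                while (position < size) {
                    position += out.transferFrom(in, position, size - position);
                }
            }
        }
    }
}
```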

We can draw several rules from this benchmark:

  1. Never copy a file byte by byte (or char by char)
  2. Prefer a buffer on your side rather than only inside the stream, to make fewer invocations of the read method, but don't forget the buffer on the stream's side
  3. Pay attention to the size of the buffers
  4. Don't use char conversion if you only need to transfer the content of a file
  5. Don't hesitate to use channels; they are the fastest way to make a file transfer

I've also made some tests, though not complete ones, for files on the same hard disk, and there the NIO Transfer method is a lot faster than the others. I think this is because on the same disk this method can make better use of the filesystem cache.

I hope this benchmark (and its results) interested you.

Here are the sources of the benchmark: Java Benchmark of File Copy methods

 

From http://www.baptiste-wicht.com/2010/08/file-copy-in-java-benchmark

Published at DZone with permission of Baptiste Wicht, author and DZone MVB.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Comments

Dan Blanks replied on Mon, 2010/08/09 - 12:32am

Some interesting comparisons. What about execing out to the operating system and using its native copy command? Would it be faster or slower than your benchmarks?

Cosmin Mutu replied on Mon, 2010/08/09 - 12:44am

thanks for sharing! nice post.

prunge replied on Mon, 2010/08/09 - 1:11am

One important thing to know about NIO transfers: they can memory map the file.  This becomes an issue if you are copying large (1 GB or larger) files on a 32-bit system since you can run out of address space resulting in an OutOfMemoryError.  We reverted to using a stream copy after encountering this issue.

Martijn Verburg replied on Mon, 2010/08/09 - 3:20am

Have you tried the NIO.2 API in the open JDK Betas?

Kasper Nielsen replied on Mon, 2010/08/09 - 4:04am

So let me get this straight. The first individual test for each file runs from a cold file cache and all tests are only run one time. Seriously, I hope no one is making any "big" decisions based on these results.
And 125 ms for reading a 5 KB text file, come on, it should be obvious to anyone that the benchmark is flawed.

Erin Garlock replied on Mon, 2010/08/09 - 8:46am

Kasper,

The 125ms for 5k file, was a byte-by-byte copy in both Naive Stream and Naive Reader, i.e. it was deliberately meant to be the worst possible performance.

Kasper Nielsen replied on Mon, 2010/08/09 - 10:08am in response to: Erin Garlock

Erin, a byte-to-byte copy of 5 KB will never take 125 ms. Try swapping the order in which the different tests are run and you will get another result.

Baptiste Wicht replied on Wed, 2010/09/01 - 12:46am

Hi, sorry for the lack of answers; I don't come here often. I more often read the comments on my blog, where the original post is. There have been a lot of updates in the original post here: http://www.baptiste-wicht.com/2010/08/file-copy-in-java-benchmark/

> Some interesting comparisons. What about execing out to the operating system and using its native copy command? Would it be faster or slower than your benchmarks?

I've tested it in the update of my benchmark. For large files, it's faster than the other methods.

> One important thing to know about NIO transfers: they can memory map the file. This becomes an issue if you are copying large (1 GB or larger) files on a 32-bit system since you can run out of address space resulting in an OutOfMemoryError. We reverted to using a stream copy after encountering this issue.

Thank you, I didn't know that.

> Have you tried the NIO.2 API in the open JDK Betas?

No, but in the update of the post, I tested the Path copyTo() method.

> So let me get this straight. The first individual test for each file runs from a cold file cache and all tests are only run one time. Seriously, I hope no one is making any "big" decisions based on these results. And 125 ms for reading a 5 KB text file, come on, it should be obvious to anyone that the benchmark is flawed.

> Erin, a byte-to-byte copy of 5 KB will never take 125 ms. Try swapping the order in which the different tests are run and you will get another result.

For me the result is not false, but given the context, it was not great. The copy was made from Ext4 to NTFS; I think that's what slowed the test a little. The update of the post shows results only with Ext4. It takes about 10 ms in the new benchmark.

Raging Infernoz replied on Sun, 2010/10/31 - 3:00pm

I await a retest with the http://www.ellipticgroup.com Benchmark framework, preferably also on a Windows NT OS e.g. Windows 7.

For now I rely on NIO Transfer Copy, until I have time to backport NIO2 and the native copy from Java 1.7, via JNA.  Sun/Oracle should have done this themselves, so it could be tested/used in Java 1.5/1.6, given how long it will take 1.7 to be released and approved for business use!

 

King Sam replied on Fri, 2012/02/24 - 10:52am

Benchmarking file copying is very hard and I assume all these tests are actually run with the source file in the file cache. To do a more complete test probably would require dropping the cache before each copy. Maybe copy hundreds of different files of the same size and reboot/drop cache before running each test?
