Not being one for letting a problem get the best of me, I took another look at the asynchronous overlapped IO problem. If you read my last post on the subject, you know I’ve done a lot of work on this already. None of the things I said last time have changed at all. If you want to do asynchronous and un-buffered IO in C# using the native file stream calls you can’t… So, I rolled my own. The kicker is, I don’t use any unmanaged code to do this. No call to VirtualAlloc() or anything else using DLL imports. Oh, and the speed is spectacular.
The Goal
My ultimate goal was to build a routine that would do un-buffered asynchronous IO. That means I don’t want the OS doing any buffering or funny stuff with the IOs I issue, for reads or writes. SQL Server uses this method to harden writes to the disk, and it performs well with excellent predictability. If you have ever used Windows to do a regular copy, you have watched it eat up memory buffering both reads and writes. Copy the same file a couple of times and you will notice the first pass runs at about the speed you expect, but the second may run twice as fast. That is all Windows, caching as much of the data as it can and holding on to that cache. That’s great for smaller files, but if you are pushing around multi-gigabyte files it is a disaster: as the system becomes starved for memory it pages, then starts throttling back, and your 100MB/sec copy is suddenly crawling along at 20MB/sec.
Where we left off…
I had settled on a simple routine that would allow me to do un-buffered reads from a file and buffered writes to a file, either on disk or across the network.
using System.IO;

internal class UnBufferedFileCopy
{
    public static int CopyBufferSize = 8 * 1024 * 1024;
    public static byte[] Buffer = new byte[CopyBufferSize];
    //secret sauce for unbuffered IO; not exposed by the FileOptions enum
    const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    public static int CopyFileUnbuffered(string inputfile, string outputfile)
    {
        //read unbuffered and sequential; write through the cache
        var infile = new FileStream(inputfile, FileMode.Open, FileAccess.Read,
            FileShare.None, 8, FileFlagNoBuffering | FileOptions.SequentialScan);
        var outfile = new FileStream(outputfile, FileMode.Create, FileAccess.Write,
            FileShare.None, 8, FileOptions.WriteThrough);
        int bytesRead;
        while ((bytesRead = infile.Read(Buffer, 0, CopyBufferSize)) != 0)
        {
            outfile.Write(Buffer, 0, bytesRead);
        }
        outfile.Close();
        outfile.Dispose();
        infile.Close();
        infile.Dispose();
        return 1;
    }
}
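Calling it is a one-liner. A minimal harness might look like this; the paths are placeholders of mine, not anything from a real system:

using System;

internal static class Program
{
    private static void Main()
    {
        //paths are placeholders; point these at a real source and destination
        UnBufferedFileCopy.CopyFileUnbuffered(@"C:\temp\bigfile.bak", @"\\server\share\bigfile.bak");
        Console.WriteLine("copy complete");
    }
}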
There are two problems with this routine. First, only the read from the source is truly un-buffered. C# offers the write-through flag and I thought that would be enough. I fired up Process Monitor and watched the IO issued on writes, and it wasn’t buffer-sized requests; every write was broken up into 64KB chunks. So the read request would fetch, say, 16MB of data and pass it to the write request, which would then break it up into chunks. This wasn’t the behavior I was going for! Doing some additional research I found that adding the no-buffering flag to the write-through flag gave me the results I was after. Almost. Second, you can’t do truly un-buffered writes from managed code, synchronous or asynchronous. To do an un-buffered write, the buffer you build from the byte array must be page aligned in memory and every call must transfer a multiple of the page size. Again, that just isn’t possible in managed code.

So, I investigated a horrible kludge of a solution: I do un-buffered writes until I get to the last block of data, then close and reopen the file in buffered mode and write the last block. It isn’t pretty but it works. It also means I can’t use write-through and un-buffered on a file smaller than the buffer size. Not a huge deal, but something to be aware of if you are moving a lot of small files; if you are going the small-file route, the first routine will probably be OK.
using System.IO;

internal class UnBufferedFileCopy
{
    public static int CopyBufferSize = 8 * 1024 * 1024;
    public static byte[] Buffer1 = new byte[CopyBufferSize];
    const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    public static int CopyFileUnbuffered(string inputfile, string outputfile)
    {
        //grab the input file size up front so we know where the tail starts
        long infilesize = new FileInfo(inputfile).Length;
        var infile = new FileStream(inputfile, FileMode.Open, FileAccess.Read,
            FileShare.None, 8, FileFlagNoBuffering | FileOptions.SequentialScan);
        //open output file, set length to prevent growth and file fragmentation, and close it.
        //We have to do it this way so we can do unbuffered writes to it later
        var outfile = new FileStream(outputfile, FileMode.Create, FileAccess.Write,
            FileShare.None, 8, FileOptions.WriteThrough);
        outfile.SetLength(infilesize);
        outfile.Dispose();
        //open file for write unbuffered
        outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write,
            FileShare.None, 8, FileOptions.WriteThrough | FileFlagNoBuffering);
        long totalbyteswritten = 0;
        int bytesRead1;
        //hold back one buffer; the tail gets written buffered below
        while (totalbyteswritten < infilesize - CopyBufferSize)
        {
            bytesRead1 = infile.Read(Buffer1, 0, CopyBufferSize);
            totalbyteswritten = totalbyteswritten + bytesRead1;
            outfile.Write(Buffer1, 0, bytesRead1);
        }
        //close the file handle that was using unbuffered and write through
        outfile.Dispose();
        //open file for write buffered. We do this so we can write the tail of the file;
        //it is a kludge but hey, you get what you get in C#
        outfile = new FileStream(outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
            FileOptions.WriteThrough);
        //read the tail, go to the right position in the file and flush the
        //last buffer synchronous and buffered
        bytesRead1 = infile.Read(Buffer1, 0, CopyBufferSize);
        outfile.Seek(totalbyteswritten, SeekOrigin.Begin);
        outfile.Write(Buffer1, 0, bytesRead1);
        outfile.Dispose();
        infile.Dispose();
        return 1;
    }
}
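One way to handle the small-file limitation is a thin dispatch wrapper that falls back to the first routine whenever the file fits inside a single buffer. This is just a sketch of mine; it assumes you kept the first routine around under a different name, here CopyFileSmall:

using System.IO;

internal static class CopyDispatch
{
    //CopyFileSmall is a hypothetical rename of the first routine;
    //CopyFileUnbuffered is the kludge version above
    public static int Copy(string inputfile, string outputfile)
    {
        long size = new FileInfo(inputfile).Length;
        //files smaller than the copy buffer can't use the write-through +
        //no-buffering combination, so use the simple routine for those
        return size <= UnBufferedFileCopy.CopyBufferSize
            ? UnBufferedFileCopy.CopyFileSmall(inputfile, outputfile)
            : UnBufferedFileCopy.CopyFileUnbuffered(inputfile, outputfile);
    }
}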
This is as close as you can get to fully un-buffered IO on both the read and write side of things. There is a lot going on, but it is still synchronous all the way. Watch it in Performance Monitor and you will see a sawtooth pattern as it reads, then writes, since it is only ever doing one or the other. Using this to copy a file across the LAN to another server never got better than 75MB/sec throughput. Not horrible, but a long way from the 105MB/sec I get from something like FastCopy or TeraCopy. Heck, it’s not even close to the theoretical 125MB/sec a gigabit connection can support. That leaves the last piece of the puzzle: going asynchronous.
Threading in C#: To Produce or Consume?
We know that using the asynchronous file IO built into C# isn’t an option, but that doesn’t mean we can’t pattern something of our own after it. I’ve done quite a bit of threading in C#. It isn’t as difficult as in C/C++, but you can still blow your foot off, and it adds a whole other level of complexity to your code. This is where a little thought and design on paper, and a flow chart, can help you out quite a bit. It also pays to research design patterns and multi-threading; a lot of smart people have tackled these problems and developed well-designed solutions. Our particular problem is a classic producer-consumer pattern, and a simple one at that. We have a producer, the read thread, putting data in a buffer, and a consumer, the write thread, that takes that data and writes it to disk. My first priority was to model this as simply as possible. I’m not worried about multiple readers or writers, but I am concerned with locking and blocking; keeping the time anything has to be locked to a minimum is key. That led me to a simple solution: one read thread and the buffer it reads into, one write thread and the buffer it writes from, and one intermediate buffer to pass data between them. Basically, an overlap buffer the same size as the read and write buffers. To give you a better visual before showing you the code, here are a couple of flow charts.
Read File
http://www.lucidchart.com/documents/view/4cac057f-d81c-472e-9764-52c00afcbe04
Write File
http://www.lucidchart.com/documents/view/4cac0726-dd14-46a6-8d44-53710afcbe04
There are a few things you need to be aware of. There is no guarantee of thread execution order; that is why I’m using a lock object and a semaphore-style flag to tell me whether the overlap buffer is actually available to be written to or read from. Keep the lock scope small: the lock can become a bottleneck and effectively drop you back into synchronous mode. And watch for deadlocks. With the lock and the flag in play, if your ordering is wrong the two threads can sit and spin forever, each waiting for either the lock or the flag to clear. At this point I’m confident I don’t have any race or deadlock situations.
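To make that handshake concrete before the full listing, here it is stripped of all the file IO. The class and method names are mine, purely for illustration; the point is the while-loop re-check of the flag around every Monitor.Wait:

using System.Threading;

internal static class HandshakeSketch
{
    private static readonly object Locker = new object();
    //true when the shared buffer holds data the consumer hasn't taken yet
    private static bool _dirty;
    private static byte[] _shared = new byte[4096];

    public static void Produce(byte[] block)
    {
        lock (Locker)
        {
            //wait until the consumer has drained the shared buffer
            while (_dirty) Monitor.Wait(Locker);
            block.CopyTo(_shared, 0);
            _dirty = true;
            //wake the consumer; it can't actually run until we release the lock
            Monitor.PulseAll(Locker);
        }
    }

    public static void Consume(byte[] dest)
    {
        lock (Locker)
        {
            //wait until the producer has filled the shared buffer
            while (!_dirty) Monitor.Wait(Locker);
            _shared.CopyTo(dest, 0);
            _dirty = false;
            Monitor.PulseAll(Locker);
        }
    }
}

Re-checking the flag in a loop after every wake-up is what keeps a spurious pulse from turning into a lost or double-written buffer.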
Here is the full simplified sample; I’m serious, this is as small as I could get the actual copy code.
using System;
using System.IO;
using System.Threading;

internal class AsyncUnbuffCopy
{
    //file names
    private static string _inputfile;
    private static string _outputfile;
    //synchronization object
    private static readonly object Locker1 = new object();
    //buffer size
    public static int CopyBufferSize;
    private static long _infilesize;
    //buffer read
    public static byte[] Buffer1;
    private static int _bytesRead1;
    //buffer overlap
    public static byte[] Buffer2;
    private static bool _buffer2Dirty;
    private static int _bytesRead2;
    //buffer write
    public static byte[] Buffer3;
    //total bytes read and written
    private static long _totalbytesread;
    private static long _totalbyteswritten;
    //filestreams
    private static FileStream _infile;
    private static FileStream _outfile;
    //secret sauce for unbuffered IO
    const FileOptions FileFlagNoBuffering = (FileOptions)0x20000000;

    private static void AsyncReadFile()
    {
        //open input file unbuffered
        _infile = new FileStream(_inputfile, FileMode.Open, FileAccess.Read, FileShare.None, CopyBufferSize,
                                 FileFlagNoBuffering);
        //if we have data, read it
        while (_totalbytesread < _infilesize)
        {
            _bytesRead1 = _infile.Read(Buffer1, 0, CopyBufferSize);
            lock (Locker1)
            {
                //wait until the write thread has drained the overlap buffer
                while (_buffer2Dirty) Monitor.Wait(Locker1);
                Buffer.BlockCopy(Buffer1, 0, Buffer2, 0, _bytesRead1);
                _bytesRead2 = _bytesRead1;
                _totalbytesread = _totalbytesread + _bytesRead1;
                _buffer2Dirty = true;
                //wake the write thread; it can't run until we release the lock
                Monitor.PulseAll(Locker1);
            }
        }
        //clean up open handle
        _infile.Close();
        _infile.Dispose();
    }

    private static void AsyncWriteFile()
    {
        //open output file, set length to prevent growth and file fragmentation, and close it.
        //We have to do it this way so we can do unbuffered writes to it later
        _outfile = new FileStream(_outputfile, FileMode.Create, FileAccess.Write, FileShare.None, 8,
                                  FileOptions.WriteThrough);
        _outfile.SetLength(_infilesize);
        _outfile.Close();
        _outfile.Dispose();
        //open file for write unbuffered
        _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                  FileOptions.WriteThrough | FileFlagNoBuffering);
        //hold back one buffer; the tail is written buffered below
        while (_totalbyteswritten < _infilesize - CopyBufferSize)
        {
            lock (Locker1)
            {
                //wait until the read thread has filled the overlap buffer
                while (!_buffer2Dirty) Monitor.Wait(Locker1);
                Buffer.BlockCopy(Buffer2, 0, Buffer3, 0, _bytesRead2);
                _buffer2Dirty = false;
                Monitor.PulseAll(Locker1);
                _totalbyteswritten = _totalbyteswritten + CopyBufferSize;
            }
            //write outside the lock so the read thread isn't blocked on disk IO
            _outfile.Write(Buffer3, 0, CopyBufferSize);
        }
        //close the file handle that was using unbuffered and write through
        _outfile.Close();
        _outfile.Dispose();
        lock (Locker1)
        {
            //wait for the read thread to hand over the last block
            while (!_buffer2Dirty) Monitor.Wait(Locker1);
            //open file for write buffered. We do this so we can write the tail of the file;
            //it is a kludge but hey, you get what you get in C#
            _outfile = new FileStream(_outputfile, FileMode.Open, FileAccess.Write, FileShare.None, 8,
                                      FileOptions.WriteThrough);
            //this should always be true but I haven't run all the edge cases yet
            if (_buffer2Dirty)
            {
                //go to the right position in the file
                _outfile.Seek(_infilesize - _bytesRead2, SeekOrigin.Begin);
                //flush the last buffer synchronous and buffered
                _outfile.Write(Buffer2, 0, _bytesRead2);
            }
        }
        //close the buffered handle used for the tail
        _outfile.Close();
        _outfile.Dispose();
    }

    public static int AsyncCopyFileUnbuffered(string inputfile, string outputfile, int buffersize)
    {
        //set file name globals
        _inputfile = inputfile;
        _outputfile = outputfile;
        //set up single buffer size; remember this will be x3
        CopyBufferSize = buffersize * 1024 * 1024;
        //buffer read
        Buffer1 = new byte[CopyBufferSize];
        //buffer overlap
        Buffer2 = new byte[CopyBufferSize];
        //buffer write
        Buffer3 = new byte[CopyBufferSize];
        //get input file size for later use
        _infilesize = new FileInfo(_inputfile).Length;
        //create read thread and start it
        var readfile = new Thread(AsyncReadFile) { Name = "ReadThread", IsBackground = true };
        readfile.Start();
        //create write thread and start it
        var writefile = new Thread(AsyncWriteFile) { Name = "WriteThread", IsBackground = true };
        writefile.Start();
        //wait for threads to finish
        readfile.Join();
        writefile.Join();
        Console.WriteLine();
        return 1;
    }
}
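Kicking it off is a single call. The paths and the 8 (an 8MB buffer, so 24MB across the three buffers) are example values of mine:

//copy with an 8MB read/overlap/write buffer; paths are placeholders
AsyncUnbuffCopy.AsyncCopyFileUnbuffered(@"C:\temp\bigfile.bak", @"\\server\share\bigfile.bak", 8);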
As you can see, we got progressively more complex with each pass until we finally arrived at my goal. With zero unmanaged code and only one undocumented flag, I’ve built a C# program that actually does fast IO like the low-level big boys. To handle the small-file issue I just drop back to my old copy routine to move those files along. You can see a working sample at http://github.com/SQLServerIO/UBCopy. It has MD5 verification built in as well.
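The MD5 verification itself isn’t shown in this post; a minimal sketch of that kind of check, not the actual UBCopy code, would be:

using System.IO;
using System.Linq;
using System.Security.Cryptography;

internal static class Md5Check
{
    //hash both files and compare; this is a sketch of the idea,
    //not the verification code UBCopy actually ships
    public static bool FilesMatch(string sourcefile, string destfile)
    {
        using (var md5 = MD5.Create())
        using (var src = File.OpenRead(sourcefile))
        using (var dst = File.OpenRead(destfile))
        {
            byte[] srcHash = md5.ComputeHash(src);
            //ComputeHash resets the algorithm state between calls
            byte[] dstHash = md5.ComputeHash(dst);
            return srcHash.SequenceEqual(dstHash);
        }
    }
}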
So, how well does it work?
FastCopy 1.99r4
TotalRead = 1493.6 MB
TotalWrite = 1493.6 MB
TotalFiles = 1 (0)
TotalTime= 15.25 sec
TransRate= 97.94 MB/s
FileRate = 0.07 files/s
UBCopy 1.5.2.1851 — Managed Code
File Copy Started
%100
File Copy Done
File Size MB : 1493.62
Elapsed Seconds : 15.26
Megabytes/sec : 102.63
Done.
I think it will do just fine.