Optimal Buffer Size for Write(2)

Optimal buffer size for write(2)

As discussed in the comments, I believe the exact size doesn't matter that much, assuming it is:

  • a small multiple of the file system block size (see the comment by Joachim Pileborg suggesting stat(".") etc.)
  • a power of two (because computers and kernels like them)
  • not too big (e.g. small enough to fit in some cache inside your processor, such as the L2 cache)
  • aligned in memory (e.g. to a page size using posix_memalign), as in the sketch after this list.
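
As a rough illustration of those points (not from the original answer), here is a minimal C sketch that queries the block size with stat("."), doubles it until it reaches at least 64 KiB, and aligns the buffer with posix_memalign; the 64 KiB target and the demonstration write to stdout are arbitrary choices:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Pick a buffer size: start from the block size reported by stat(2)
     * and double it until it reaches at least 64 KiB (an arbitrary,
     * cache-friendly target). */
    static size_t pick_buffer_size(void)
    {
        struct stat st;
        size_t blk = 4096;                 /* fallback if stat fails */

        if (stat(".", &st) == 0 && st.st_blksize > 0)
            blk = (size_t)st.st_blksize;

        size_t size = blk;
        while (size < 64 * 1024)
            size *= 2;
        return size;
    }

    int main(void)
    {
        size_t bufsize = pick_buffer_size();
        void *buf;

        /* Align the buffer to the page size, per the last bullet. */
        if (posix_memalign(&buf, (size_t)sysconf(_SC_PAGESIZE), bufsize) != 0) {
            perror("posix_memalign");
            return 1;
        }
        memset(buf, 'x', bufsize);

        /* Demonstration only: write one buffer's worth to stdout.
         * A real copy loop would refill buf from a source descriptor. */
        if (write(STDOUT_FILENO, buf, bufsize) < 0)
            perror("write");

        free(buf);
        return 0;
    }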

So a power of two between 16 KiB and a few megabytes should probably fit. Most of the time is spent reading the disk, and filesystem and disk benchmarks are quite flat in that range.

4 KiB is often both the page size and the disk block size.

Of course, you can tune things further; you can even tune the file system block size itself when creating the file system with mke2fs.

And I'll bet that the optimum really depends on your hardware (SSD or hard disk?) and your system (and its load).

Optimal Buffer size for read-process-write

See what Microsoft has to say about IO size: http://technet.microsoft.com/en-us/library/cc938632.aspx. Basically, they say you should probably do IO in 64K blocks.

On *NIX platforms, struct stat has an st_blksize member which reports the preferred block size for efficient I/O.
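
As a hedged sketch (not from either source), here is one way to combine the two hints on a POSIX system: read st_blksize with fstat and round the 64 KB figure up to a multiple of it, falling back to plain 64 KiB if fstat fails:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/stat.h>

    /* Return an I/O size for fd: the 64 KiB figure suggested above,
     * rounded up to a multiple of the file system's preferred block
     * size (st_blksize), or plain 64 KiB if fstat fails. */
    static size_t io_buffer_size(int fd)
    {
        const size_t min_io = 64 * 1024;
        struct stat st;

        if (fstat(fd, &st) != 0 || st.st_blksize <= 0)
            return min_io;

        size_t blk = (size_t)st.st_blksize;
        return ((min_io + blk - 1) / blk) * blk;
    }

    int main(void)
    {
        printf("suggested I/O size: %zu bytes\n",
               io_buffer_size(STDIN_FILENO));
        return 0;
    }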

what's the proper buffer size for 'write' function?

There is probably some advantage in doing writes which are multiples of the filesystem block size, especially if you are updating a file in place. If you write less than a full block to a file, the OS has to read the old block, combine in the new contents and then write it out. This doesn't necessarily happen if you rapidly write small pieces in sequence, because the updates are done on buffers in memory which are flushed later. Still, once in a while you could be triggering some inefficiency if you are not filling a block (and a properly aligned one: a multiple of the block size at an offset which is a multiple of the block size) with each write operation.
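
To make "properly aligned" concrete, here is a small illustrative C helper (the function name and the 4096-byte example block size are made up) that expands an arbitrary region to the surrounding block boundaries:

    #include <stdio.h>

    /* Expand an arbitrary (offset, length) region to the smallest
     * enclosing block-aligned region.  Writes issued on these
     * boundaries, in whole multiples of blk, never force the OS into
     * a read-modify-write of a partial block. */
    static void align_region(size_t blk, size_t off, size_t len,
                             size_t *aligned_off, size_t *aligned_len)
    {
        size_t start = (off / blk) * blk;                   /* round down */
        size_t end = ((off + len + blk - 1) / blk) * blk;   /* round up   */

        *aligned_off = start;
        *aligned_len = end - start;
    }

    int main(void)
    {
        size_t off, len;
        align_region(4096, 5000, 100, &off, &len);
        printf("aligned offset %zu, length %zu\n", off, len); /* 4096, 4096 */
        return 0;
    }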

This issue of transfer size does not necessarily go away with mmap. If you map a file and then memcpy some data into the mapping, you are making a page dirty. That page has to be flushed at some later time: it is indeterminate when. If you make another memcpy which touches the same page, that page could be clean by then and you're making it dirty again, so it gets written twice. Page-aligned copies in multiples of the page size are the way to go.
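
A minimal sketch of that page-aligned approach, assuming a hypothetical file data.bin that is already at least four pages long (the name and sizes are illustrative, and error handling is kept short):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        size_t map_len = 4 * page;           /* illustrative size */

        int fd = open("data.bin", O_RDWR);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        /* The file is assumed to already be at least map_len bytes long. */
        char *map = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        char *src = malloc(map_len);
        if (!src) { return 1; }
        memset(src, 'a', map_len);

        /* Copy in whole, page-aligned chunks so each mapped page is
         * dirtied by exactly one copy rather than several partial ones. */
        for (size_t off = 0; off < map_len; off += page)
            memcpy(map + off, src + off, page);

        munmap(map, map_len);
        close(fd);
        free(src);
        return 0;
    }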

What is the optimal file output buffer size?

You usually don't need the best buffer size, which may require querying the OS for system parameters, doing complex estimations, or even benchmarking in the target environment, and which is dynamic anyway. Luckily, you just need a value that is good enough.

I would say a 4K-16K buffer suits most normal usages. 4K is the magic number for the page size on common machines (x86, ARM) and is also a multiple of the usual physical disk sector size (512 B or 4K).
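
One simple way to apply that range with stdio is to size the FILE buffer explicitly, sketched here with a hypothetical output file and a 16 KiB buffer picked from the upper end of the suggested range:

    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("out.dat", "wb");   /* hypothetical output file */
        if (!f) { perror("fopen"); return 1; }

        /* Give stdio a 16 KiB buffer, the top of the 4K-16K range
         * suggested above, instead of the implementation default.
         * setvbuf must be called before the first I/O on the stream. */
        static char buf[16 * 1024];
        if (setvbuf(f, buf, _IOFBF, sizeof buf) != 0) {
            perror("setvbuf");
            return 1;
        }

        for (int i = 0; i < 1000; i++)
            fwrite("hello world\n", 1, 12, f);

        fclose(f);
        return 0;
    }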

If you are dealing with huge amounts of data (gigabytes), you may find that the simple fwrite model is inadequate because of its blocking nature.

What would be an ideal buffer size?

How do you determine the ideal buffer size when using FileInputStream?

Optimum buffer size is related to a number of things: file system block size, CPU cache size and cache latency.

Most file systems are configured to use block sizes of 4096 or 8192. In theory, if you configure your buffer size so you are reading a few bytes more than the disk block, the operations with the file system can be extremely inefficient (e.g. if you configured your buffer to read 4100 bytes at a time, each read would require 2 block reads by the file system). If the blocks are already in cache, then you wind up paying the price of RAM -> L3/L2 cache latency. If you are unlucky and the blocks are not in cache yet, then you pay the price of the disk -> RAM latency as well.

This is why you see most buffers sized as a power of 2, and generally larger than (or equal to) the disk block size. This means that one of your stream reads could result in multiple disk block reads - but those reads will always use a full block - no wasted reads.

Ensuring this also typically results in other performance-friendly parameters affecting both reading and subsequent processing: data bus width alignment, DMA alignment, memory cache line alignment, a whole number of virtual memory pages.

Now, that disk latency cost is offset quite a bit in a typical streaming scenario because the block read from disk will still be in memory when you hit the next read (we are doing sequential reads here, after all) - so you wind up paying the RAM -> L3/L2 cache latency price on the next read, but not the disk -> RAM latency. In terms of order of magnitude, disk -> RAM latency is so slow that it pretty much swamps any other latency you might be dealing with.

So, I suspect that if you ran a test with different buffer sizes (I haven't done this myself), you will probably find a big impact of buffer size up to the size of the file system block. Above that, I suspect that things would level out pretty quickly.
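
For what it's worth, such a test can be roughly sketched in C (the original discussion is about Java streams, but the idea is the same); the list of sizes is arbitrary, and the timings ignore page-cache effects, so treat the numbers with care:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }

        /* Arbitrary list of buffer sizes to compare. */
        static const size_t sizes[] = { 1024, 4096, 8192, 65536, 1 << 20 };

        for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
            int fd = open(argv[1], O_RDONLY);
            if (fd < 0) { perror("open"); return 1; }

            char *buf = malloc(sizes[i]);
            if (!buf) { return 1; }

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            long long total = 0;
            ssize_t n;
            while ((n = read(fd, buf, sizes[i])) > 0)
                total += n;

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec)
                        + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%8zu-byte buffer: %lld bytes in %.3f s\n",
                   sizes[i], total, secs);

            free(buf);
            close(fd);
        }
        return 0;
    }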

There are a ton of conditions and exceptions here - the complexities of the system are actually quite staggering (just getting a handle on L3 -> L2 cache transfers is mind-bogglingly complex, and it changes with every CPU type).

This leads to the 'real world' answer: if your app is like 99% of apps out there, set the buffer size to 8192 and move on (even better, choose encapsulation over performance and use BufferedInputStream to hide the details). If you are in the 1% of apps that are highly dependent on disk throughput, craft your implementation so you can swap out different disk interaction strategies, and provide the knobs and dials to allow your users to test and optimize (or come up with some self-optimizing system).

What is a good buffer size for socket programming?

Even if you're sending more data than the buffer size, it may well not be available in one call to Receive.

You can't determine how much data the server has sent - it's a stream of data, and you're just reading chunks at a time. You may read part of what the server sent in one Send call, or you may read the data from two Send calls in one Receive call. 8K is a reasonable buffer size - not so big that you'll waste a lot of memory, and not so small that you'll have to use loads of wasted Receive calls. 4K or 16K would quite possibly be fine too... I personally wouldn't start going above 16K for network buffers - I suspect you'd rarely fill them.

You could experiment by trying to use a very large buffer and log how many bytes were received in each call - that would give you some idea of how much is generally available - but it wouldn't really show the effect of using a smaller buffer. What concerns do you have over using an 8K buffer? If it's performance, do you have any evidence that this aspect of your code is a performance bottleneck?
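
The same stream semantics apply to BSD-style sockets; here is a small C sketch (not from the answer) that reads in 8 KiB chunks and logs how many bytes each call actually returned, assuming an already-connected descriptor sock:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Read from a connected TCP socket in 8 KiB chunks and log how many
     * bytes each recv() call actually returned.  Any call may return
     * fewer bytes than the buffer holds; the sender's Send boundaries
     * are not preserved by the stream. */
    void drain_and_log(int sock)
    {
        char buf[8 * 1024];
        ssize_t n;

        while ((n = recv(sock, buf, sizeof buf, 0)) > 0)
            printf("received %zd bytes\n", n);   /* process buf[0..n) here */

        if (n < 0)
            perror("recv");
    }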


