How to Ensure Data Reaches Storage, Bypassing Memory/Cache/Buffered I/O

How to ensure data reaches storage, bypassing memory/cache/buffered-IO?

See "Ensuring data reaches disk."

In short, the safest policy is O_DIRECT + fsync() at appropriate points.
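For illustration, here is a minimal sketch of that policy in C (the file name, block size and data are made up for the example; O_DIRECT requires the buffer, the transfer length and the file offset to be suitably aligned, hence posix_memalign()):

    #define _GNU_SOURCE               /* for O_DIRECT on Linux */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t blksz = 4096;    /* assumed logical block size */
        void *buf;

        /* O_DIRECT needs an aligned buffer, length and offset */
        if (posix_memalign(&buf, blksz, blksz) != 0) {
            perror("posix_memalign");
            return 1;
        }
        memset(buf, 'x', blksz);

        int fd = open("data.bin", O_WRONLY | O_CREAT | O_DIRECT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        if (write(fd, buf, blksz) != (ssize_t)blksz) {
            perror("write");
            return 1;
        }

        /* O_DIRECT bypasses the page cache, but fsync() is still needed to
           persist file metadata and flush the drive's volatile write cache. */
        if (fsync(fd) != 0) {
            perror("fsync");
            return 1;
        }

        close(fd);
        free(buf);
        return 0;
    }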

Does a memory barrier ensure that the cache coherence has been completed?

The memory barriers present on the x86 architecture - but this is true in general - not only guarantee that all the previous¹ loads, or stores, are completed before any subsequent load or store is executed - they also guarantee that the stores have become globally visible.

By globally visible it is meant that other cache-aware agents - like other CPUs - can see the store.

Other agents not aware of the caches - like a DMA-capable device - will usually not see the store if the target memory has been marked with a cache type that doesn't enforce an immediate write into memory.

This has nothing to do with the barrier itself; it is a simple fact of the x86 architecture: caches are visible to the programmer, and when dealing with hardware they are usually disabled.

Intel is purposely generic in its description of the barriers because it doesn't want to tie itself to a specific implementation.

You need to think in the abstract: globally visible implies that the hardware will take all the necessary steps to make the store globally visible. Period.

To understand the barriers, however, it is worth taking a look at the current implementations.

Note that Intel is free to turn the modern implementation upside down at will, as long as it keeps the visible behaviour correct.

A store in an x86 CPU is executed in the core, then placed in the store buffer.

For example, mov DWORD [eax+ebx*2+4], ecx, once decoded, is stalled until eax, ebx and ecx are ready²; then it is dispatched to an execution unit capable of computing its address.

When the execution is done the store has become a pair (address, value) that is moved into the store buffer.

The store is said to be completed locally (in the core).

The store buffer allows the OoO part of the CPU to forget about the store and consider it completed even if an attempt to write it has not even been made yet.

Upon specific events, like a serialization event, an exception, the execution of a barrier or the exhaustion of the buffer, the CPU flushes the store buffer.

The flush is always in order - First In, First written.

From the store buffer the store enters the realm of the cache.

Depending on the cache type of the target address:

  • If the cache type is WC, the store can be combined into yet another buffer, the Write Combining buffer, and later written into memory, bypassing the caches.

  • If the cache type is WB or WT, it can be written into the L1D cache, the L2, the L3, or the LLC (if the LLC is not one of the previous).

  • If the cache type is UC or WT, it can also be written directly into memory.


As of today, that's what it means to become globally visible: to leave the store buffer.
Beware of two very important things:

  1. The cache type still influences the visibility.

    Globally visible doesn't mean visible in memory; it means visible where loads from other cores will see it.

    If the memory region is WB cacheable, the store could end up in the cache, so it is globally visible there - but only for the agents aware of the existence of the cache. (Note, though, that most DMA on modern x86 is cache-coherent.)
  2. This also applies to the WC buffer, which is non-coherent.

    The WC buffer is not kept coherent - its purpose is to coalesce stores to memory areas where the order doesn't matter, like a framebuffer. Such a store is not really globally visible yet; only after the write-combining buffer is flushed can anything outside the core see it.

sfence does exactly that: it waits for all the previous stores to complete locally and then drains the store buffer.

Since each store in the store buffer can potentially miss, you see how heavy such an instruction is. (But out-of-order execution, including later loads, can continue. Only mfence would block later loads from becoming globally visible - i.e. reading from the L1d cache - until after the store buffer finishes committing to cache.)
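One case where this matters in practice is weakly-ordered stores, e.g. non-temporal stores that go through the write-combining buffer. Here is a minimal sketch using the SSE2 intrinsics (the payload array and ready flag are invented for the example): the producer must drain the WC/store buffers with sfence before publishing the flag, otherwise the NT stores could become globally visible after it.

    #include <emmintrin.h>   /* _mm_stream_si32, _mm_sfence */
    #include <stdatomic.h>

    static int        payload[1024];
    static atomic_int ready;

    /* Producer: fill the payload with non-temporal (write-combining) stores,
       then publish it.  The NT stores are weakly ordered, so without the
       sfence they could become globally visible after the flag does. */
    void produce(void)
    {
        for (int i = 0; i < 1024; i++)
            _mm_stream_si32(&payload[i], i);

        _mm_sfence();    /* wait for the stores, drain the WC/store buffers */
        atomic_store_explicit(&ready, 1, memory_order_relaxed);
    }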

But does sfence wait for the store to propagate into other caches?

Well, no.

Because there is no propagation - let's see what a write into the cache implies from a high-level perspective.

The cache is kept coherent among all the processors with the MESI protocol (MESIF for multi-socket Intel systems, MOESI for AMD ones).

We will only see MESI.

Suppose the write indexes the cache line L, and suppose all the processors have this line L in their caches with the same value.

The state of this line is Shared, in every CPU.

When our store lands in the cache, L is marked Modified and a special transaction is made on the internal bus (or QPI for multi-socket Intel systems) to invalidate line L in the other processors.

If L was not initially in the S state, the protocol is changed accordingly (e.g. if L is in state Exclusive no transactions on the bus are done[1]).

At this point the write is complete and sfence completes.

This is enough to keep the cache coherent.

When another CPU requests line L, our CPU snoops that request and L is flushed to memory or onto the internal bus so that the other CPU will read the updated version.

The state of L is set to S again.

So basically L is read on demand - this makes sense since propagating the write to the other CPUs is expensive, and some architectures do it by writing L back into memory (this works because the other CPU has L in the Invalid state, so it must read it from memory).


Finally, it is not true that sfence et al. are normally useless; on the contrary, they are extremely useful.

It is just that normally we don't care how other CPUs see us making our stores - but acquiring a lock without acquire semantics as defined, for example, in C++ and implemented with the fences, is totally nuts.

You should think of the barriers as Intel says: they enforce the order of global visibility of memory accesses.

You can help yourself understand this by thinking of the barriers as enforcing the order of writing into the cache. Cache coherence will then take care of the rest, assuring that a write to a cache is globally visible.
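In C11 terms, this is exactly what release/acquire fences give you; a minimal sketch (the data and flag names are invented) that orders the global visibility of two stores, and of the matching loads on the reader's side:

    #include <stdatomic.h>

    static int        data;   /* plain payload */
    static atomic_int flag;   /* publication flag */

    void writer(void)
    {
        data = 42;
        /* make the store to data globally visible before the store to flag */
        atomic_thread_fence(memory_order_release);
        atomic_store_explicit(&flag, 1, memory_order_relaxed);
    }

    int reader(void)
    {
        if (atomic_load_explicit(&flag, memory_order_relaxed)) {
            /* don't let the load of data be satisfied from before the fence */
            atomic_thread_fence(memory_order_acquire);
            return data;      /* guaranteed to see 42 */
        }
        return -1;            /* not published yet */
    }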

I can't help but stress one more time that cache coherency, global visibility and memory ordering are three different concepts.

The first guarantees the second, which is enforced by the third.

Memory ordering -- enforces --> Global visibility -- needs --> Cache coherency
'.____________________________________________.'
                  Architectural
                                '.____________________________________________.'
                                             micro-architectural

Footnotes:

  1. In program order.
  2. That was a simplification. On Intel CPUs, mov [eax+ebx*2+4], ecx decodes into two separate uops: store-address and store-data. The store-address uop has to wait until eax and ebx are ready, then it is dispatched to an execution unit capable of computing its address. That execution unit writes the address into the store buffer, so later loads (in program order) can check for store-forwarding.

    When ecx is ready, the store-data uop can dispatch to the store-data port, and write the data into the same store buffer entry.

    This can happen before or after the address is known, because the store-buffer entry is probably reserved in program order, so the store buffer (aka memory order buffer) can keep track of load/store ordering once the address of everything is eventually known, and check for overlaps. (And for speculative loads that ended up violating x86's memory ordering rules, if another core invalidated the cache line they loaded from before the earliest point they were architecturally allowed to load: this leads to a memory-order mis-speculation pipeline clear.)

How best to manage Linux's buffering behavior when writing a high-bandwidth data stream?

You can use posix_fadvise() with the POSIX_FADV_DONTNEED advice (possibly combined with calls to fdatasync()) to make the system flush the data and evict it from the cache.

See this article for a practical example.
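A minimal sketch of the pattern (the file name, chunk size and total size are arbitrary): write a chunk, push it to disk with fdatasync(), then tell the kernel it may drop those pages from the page cache:

    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        static char chunk[1 << 20];            /* 1 MiB per chunk */
        off_t done = 0;

        int fd = open("stream.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        memset(chunk, 0, sizeof chunk);

        for (int i = 0; i < 256; i++) {        /* 256 MiB of streaming data */
            if (write(fd, chunk, sizeof chunk) != (ssize_t)sizeof chunk) {
                perror("write");
                return 1;
            }

            fdatasync(fd);                     /* force the dirty pages out... */
            posix_fadvise(fd, done, sizeof chunk, POSIX_FADV_DONTNEED);
            done += sizeof chunk;              /* ...then drop them from the cache */
        }

        close(fd);
        return 0;
    }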

How to prevent C read() from reading from cache

My thought is that if you write ENOUGH data, then there simply won't be enough memory to cache it, and thus SOME data must be written to disk.

You can also, if you want to make sure that small writes to your file work, try writing ANOTHER large file (either from the same process or a different one - for example, you could start a process like dd if=/dev/zero of=myfile.dat bs=4k count=some_large_number) to force other data to fill the cache.

Another "trick" may be to "chew up" some (more like most) of the RAM in the system - just allocate a large lump of memory, then write to some small part of it at a time - for example, an array of integers, where you write to every 256th entry of the array in a loop, moving to one step forward each time - that way, you walk through ALL of the memory quickly, and since you are writing continuously to all of it, the memory will have to be resident. [I used this technique to simulate a "busy" virtual machine when running VM tests].

The other option is of course to nobble the caching system itself in the OS/filesystem driver, but I would be very worried about doing that - it will almost certainly slow the system down to a crawl, and unless there is an existing option to disable it, you may find it hard to do accurately/correctly/reliably.

Safely Persisting to Disk

Calling fsync() would flush the dirty pages in the buffer cache to the disk. The cost depends on the load on your server: having a large number of dirty pages in the cache and initiating a flush could cause the system to hang or become unresponsive. It is therefore recommended to tune some of the kernel tunables, such as vm.dirty_expire_centisecs and vm.dirty_background_ratio, to make sure all writes are safe and quick and not kept in the cache for a long time. Having values that are too low could lower average I/O speed, as constantly trying to write dirty pages out will just trigger the I/O congestion code more frequently.
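For reference, those tunables live under /proc/sys/vm/ and can be changed with sysctl(8) or by writing to the files as root; here is a small sketch that merely prints the current values (the third entry, vm.dirty_ratio, is added here as a related knob):

    #include <stdio.h>

    int main(void)
    {
        const char *files[] = {
            "/proc/sys/vm/dirty_expire_centisecs",
            "/proc/sys/vm/dirty_background_ratio",
            "/proc/sys/vm/dirty_ratio",
        };

        for (int i = 0; i < 3; i++) {
            FILE *f = fopen(files[i], "r");
            if (!f) { perror(files[i]); continue; }

            char buf[64];
            if (fgets(buf, sizeof buf, f))
                printf("%s = %s", files[i], buf);  /* value ends with '\n' */
            fclose(f);
        }
        return 0;
    }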

Alternatively, some databases use Direct I/O, a feature of the file system whereby file reads and writes go directly from the application to the storage device, bypassing the caches. Direct I/O (the O_DIRECT flag) is mostly used by applications, such as databases, that manage their own caches.

C Disk I/O - write after read at the same offset of a file will make read throughput very low

And I use direct I/O, so the I/O will bypass the caching mechanism of the file system and access the blocks on the disk directly.

If you don't have a file system on the device and you use the device directly for reads/writes, then no file system cache comes into the picture.

The behavior you observed is typical of disk access and IO behavior.

I found that if I don't do pwrite, read throughput is 125 MB/s

Reason: The disk just reads data; it doesn't have to go back to the offset and write data - one less operation.

If I do pwrite, read throughput will be 21 MB/s, and write throughput is 169 MB/s.

Reason: Your disk might have better write speed; probably the disk buffer is caching the write rather than it hitting the media directly.

If I do pread after pwrite, write throughput is 115 MB/s, and read throughput is 208 MB/s.

Reason: Most likely the data written is being cached at the disk level, so the read gets the data from the cache instead of the media.

To get optimal performance, you should use asynchronous I/O and transfer a number of blocks at a time. However, you have to use a reasonable number of blocks; you can't use a very large number. You should find out what is optimal by trial and error.
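As an illustration, here is a minimal sketch of queueing several reads at once with POSIX AIO (the file name, block size and queue depth are invented; on older glibc you may need to link with -lrt):

    #include <aio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    #define NBLOCKS   8          /* in-flight requests - tune by trial and error */
    #define BLOCKSIZE 65536

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct aiocb cb[NBLOCKS];
        char *buf[NBLOCKS];

        /* queue NBLOCKS reads at consecutive offsets */
        for (int i = 0; i < NBLOCKS; i++) {
            buf[i] = malloc(BLOCKSIZE);
            memset(&cb[i], 0, sizeof cb[i]);
            cb[i].aio_fildes = fd;
            cb[i].aio_buf    = buf[i];
            cb[i].aio_nbytes = BLOCKSIZE;
            cb[i].aio_offset = (off_t)i * BLOCKSIZE;
            if (aio_read(&cb[i]) != 0) { perror("aio_read"); return 1; }
        }

        /* wait for each request in turn and collect the results */
        for (int i = 0; i < NBLOCKS; i++) {
            const struct aiocb *list[1] = { &cb[i] };
            while (aio_error(&cb[i]) == EINPROGRESS)
                aio_suspend(list, 1, NULL);
            printf("request %d: %zd bytes\n", i, aio_return(&cb[i]));
            free(buf[i]);
        }

        close(fd);
        return 0;
    }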

Are direct reads possible in PostgreSQL

You are mixing up two things.

  • the kernel cache where the kernel caches files to serve reads and writes more efficiently

  • the database shared memory cache (shared buffers) where the database caches table and index blocks

All databases use the latter (Oracle calls it “database buffer cache”), because without caching performance would be abysmal.

With direct I/O you avoid the kernel cache, that is, all read and write requests go directly to disk.

There is no way in PostgreSQL to use direct I/O.

However, it has been recognized that buffered I/O comes with its own set of problems (e.g., a write request may succeed, a sync request that tells the kernel to flush the data to disk may fail, but the next sync request for the same (unpersisted!) data may not return an error any more). Relevant people hold the opinion that it might be a good idea to move to direct I/O eventually to avoid having to deal with such problems, but that would be a major change, and I wouldn't hold my breath until it happens.


