Why Is Integer Assignment on a Naturally Aligned Variable Atomic on X86


"Natural" alignment means aligned to its own type width. Thus, the load/store will never be split across any kind of boundary wider than itself (e.g. page, cache-line, or an even narrower chunk size used for data transfers between different caches).

CPUs often do things like cache-access, or cache-line transfers between cores, in power-of-2 sized chunks, so alignment boundaries smaller than a cache line do matter. (See @BeeOnRope's comments below). See also Atomicity on x86 for more details on how CPUs implement atomic loads or stores internally, and Can num++ be atomic for 'int num'? for more about how atomic RMW operations like atomic<int>::fetch_add() / lock xadd are implemented internally.


First, this assumes that the int is updated with a single store instruction, rather than writing different bytes separately. This is part of what std::atomic guarantees, but that plain C or C++ doesn't. It will normally be the case, though. The x86-64 System V ABI doesn't forbid compilers from making accesses to int variables non-atomic, even though it does require int to be 4B with a default alignment of 4B. For example, x = a<<16 | b could compile to two separate 16-bit stores if the compiler wanted.

Data races are Undefined Behaviour in both C and C++, so compilers can and do assume that memory is not asynchronously modified. For code that is guaranteed not to break, use C11 stdatomic or C++11 std::atomic. Otherwise the compiler will just keep a value in a register instead of reloading it every time you read it, like volatile but with actual guarantees and official support from the language standard.

Before C++11, atomic ops were usually done with volatile or other things, and a healthy dose of "works on compilers we care about", so C++11 was a huge step forward. Now you no longer have to care about what a compiler does for plain int; just use atomic<int>. If you find old guides talking about atomicity of int, they probably predate C++11. When to use volatile with multi threading? explains why that works in practice, and that atomic<T> with memory_order_relaxed is the modern way to get the same functionality.

std::atomic<int> shared;  // shared variable (compiler ensures alignment)

int x; // local variable (compiler can keep it in a register)
x = shared.load(std::memory_order_relaxed);
shared.store(x, std::memory_order_relaxed);
// shared = x; // don't do that unless you actually need seq_cst, because MFENCE or XCHG is much slower than a simple store

Side-note: for atomic<T> larger than the CPU can do atomically (so .is_lock_free() is false), see Where is the lock for a std::atomic?. int and int64_t / uint64_t are lock-free on all the major x86 compilers, though.


Thus, we just need to talk about the behaviour of an instruction like mov [shared], eax.


TL;DR: The x86 ISA guarantees that naturally-aligned stores and loads are atomic, up to 64 bits wide. So compilers can use ordinary stores/loads as long as they ensure that std::atomic<T> has natural alignment.

(But note that i386 gcc -m32 fails to do that for C11 _Atomic 64-bit types inside structs, only aligning them to 4B, so atomic_llong can be non-atomic in some cases. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65146#c4). g++ -m32 with std::atomic is fine, at least in g++5 because https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65147 was fixed in 2015 by a change to the <atomic> header. That didn't change the C11 behaviour, though.)


IIRC, there were SMP 386 systems, but the current memory semantics weren't established until 486. This is why the manual says "486 and newer".

From the "Intel® 64 and IA-32 Architectures Software Developer Manuals, volume 3", with my notes in italics. (see also the x86 tag wiki for links: current versions of all volumes, or direct link to page 256 of the vol3 pdf from Dec 2015)

In x86 terminology, a "word" is two 8-bit bytes. 32 bits are a double-word, or DWORD.

### Section 8.1.1 Guaranteed Atomic Operations

The Intel486 processor (and newer processors since) guarantees that the following basic memory
operations will always be carried out atomically:

  • Reading or writing a byte
  • Reading or writing a word aligned on a 16-bit boundary
  • Reading or writing a doubleword aligned on a 32-bit boundary (This is another way of saying "natural alignment")

That last point is the answer to your question: this behaviour is part of what's required for a processor to be an x86 CPU (i.e. an implementation of the ISA).


The rest of the section provides further guarantees for newer Intel CPUs: Pentium widens this guarantee to 64 bits.

The Pentium processor (and newer processors since) guarantees that the following additional memory operations will always be carried out atomically:

  • Reading or writing a quadword aligned on a 64-bit boundary
    (e.g. x87 load/store of a double, or cmpxchg8b (which was new in Pentium P5))
  • 16-bit accesses to uncached memory locations that fit within a 32-bit data bus.

The section goes on to point out that accesses split across cache lines (and page boundaries) are not guaranteed to be atomic, and:

"An x87 instruction or an SSE instructions that accesses data larger than a quadword may be implemented using
multiple memory accesses."



AMD's manual agrees with Intel's about aligned 64-bit and narrower loads/stores being atomic

So integer, x87, and MMX/SSE loads/stores up to 64b, even in 32-bit or 16-bit mode (e.g. movq, movsd, movhps, pinsrq, extractps, etc.) are atomic if the data is aligned. gcc -m32 uses movq xmm, [mem] to implement atomic 64-bit loads for things like std::atomic<int64_t>. Clang 4.0 -m32 unfortunately uses lock cmpxchg8b instead (bug 33109).

On some CPUs with 128b or 256b internal data paths (between execution units and L1, and between different caches), 128b and even 256b vector loads/stores are atomic, but this is not guaranteed by any standard or easily queryable at run-time, unfortunately for compilers implementing std::atomic<__int128> or 16B structs.

(Update: x86 vendors have decided that the AVX feature bit also indicates atomic 128-bit aligned loads/stores. Before that we only had https://rigtorp.se/isatomic/ experimental testing to verify it.)

If you want atomic 128b across all x86 systems, you must use lock cmpxchg16b (available only in 64bit mode). (And it wasn't available in the first-gen x86-64 CPUs. You need to use -mcx16 with GCC/Clang for them to emit it.)

Even CPUs that internally do atomic 128b loads/stores can exhibit non-atomic behaviour in multi-socket systems with a coherency protocol that operates in smaller chunks: e.g. AMD Opteron 2435 (K10) with threads running on separate sockets, connected with HyperTransport.


Intel's and AMD's manuals diverge for unaligned access to cacheable memory. The common subset for all x86 CPUs is the AMD rule. Cacheable means write-back or write-through memory regions, not uncacheable or write-combining, as set with PAT or MTRR regions. They don't mean that the cache-line has to already be hot in L1 cache.

  • Intel P6 and later guarantee atomicity for cacheable loads/stores up to 64 bits as long as they're within a single cache-line (64B, or 32B on very old CPUs like Pentium III).
  • AMD guarantees atomicity for cacheable loads/stores that fit within a single 8B-aligned chunk. That makes sense, because we know from the 16B-store test on multi-socket Opteron that HyperTransport only transfers in 8B chunks, and doesn't lock while transferring to prevent tearing. (See above). I guess lock cmpxchg16b must be handled specially.

Possibly related: AMD uses MOESI to share dirty cache-lines directly between caches in different cores, so one core can be reading from its valid copy of a cache line while updates to it are coming in from another cache.

Intel uses MESIF, which requires dirty data to propagate out to the large shared inclusive L3 cache which acts as a backstop for coherency traffic. L3 is tag-inclusive of per-core L2/L1 caches, even for lines that have to be in the Invalid state in L3 because of being M or E in a per-core L1 cache. The data path between L3 and per-core caches is only 32B wide in Haswell/Skylake, so it must buffer or something to avoid a write to L3 from one core happening between reads of two halves of a cache line, which could cause tearing at the 32B boundary.

The relevant sections of the manuals:

The P6 family processors (and newer Intel processors since) guarantee that the following additional memory operation will always be carried out atomically:

  • Unaligned 16-, 32-, and 64-bit accesses to cached memory that fit within a cache line.

AMD64 Manual 7.3.2 Access Atomicity

Cacheable, naturally-aligned single loads or stores of up to a quadword are atomic on any processor model, as are misaligned loads or stores of less than a quadword that are contained entirely within a naturally-aligned quadword.

Notice that AMD guarantees atomicity for any load smaller than a qword, but Intel only for power-of-2 sizes. 32-bit protected mode and 64-bit long mode can load a 48 bit m16:32 as a memory operand into cs:eip with far-call or far-jmp. (And far-call pushes stuff on the stack.) IDK if this counts as a single 48-bit access or separate 16 and 32-bit.

There have been attempts to formalize the x86 memory model, the latest one being the x86-TSO (extended version) paper from 2009 (link from the memory-ordering section of the x86 tag wiki). It's not usefully skimmable since they define some symbols to express things in their own notation, and I haven't tried to really read it. IDK if it describes the atomicity rules, or if it's only concerned with memory ordering.



Atomic Read-Modify-Write

I mentioned cmpxchg8b, but I was only talking about the load and the store each separately being atomic (i.e. no "tearing" where one half of the load is from one store, the other half of the load is from a different store).

To prevent the contents of that memory location from being modified between the load and the store, you need lock cmpxchg8b, just like you need lock inc [mem] for the entire read-modify-write to be atomic. Also note that even if cmpxchg8b without lock does a single atomic load (and optionally a store), it's not safe in general to use it as a 64b load with expected=desired. If the value in memory happens to match your expected, you'll get a non-atomic read-modify-write of that location.

The lock prefix makes even unaligned accesses that cross cache-line or page boundaries atomic, but you can't use it with mov to make an unaligned store or load atomic. It's only usable with memory-destination read-modify-write instructions like add [mem], eax.

(lock is implicit in xchg reg, [mem], so don't use xchg with mem to save code-size or instruction count unless performance is irrelevant. Only use it when you want the memory barrier and/or the atomic exchange, or when code-size is the only thing that matters, e.g. in a boot sector.)

See also: Can num++ be atomic for 'int num'?



Why lock mov [mem], reg doesn't exist for atomic unaligned stores

From the instruction reference manual (Intel x86 manual vol2), cmpxchg:

This instruction can be used with a LOCK prefix to allow the instruction to be executed atomically. To simplify the interface to the processor’s bus, the destination operand receives a write cycle without regard to the result of the comparison. The destination operand is written back if the comparison fails; otherwise, the source operand is written into the destination. (The processor never produces a locked read without also producing a locked write.)

This design decision reduced chipset complexity before the memory controller was built into the CPU. It may still do so for locked instructions on MMIO regions that hit the PCI-express bus rather than DRAM. It would just be confusing for a lock mov reg, [MMIO_PORT] to produce a write as well as a read to the memory-mapped I/O register.

The other explanation is that it's not very hard to make sure your data has natural alignment, and lock store would perform horribly compared to just making sure your data is aligned. It would be silly to spend transistors on something that would be so slow it wouldn't be worth using. If you really need it (and don't mind reading the memory too), you could use xchg [mem], reg (XCHG has an implicit LOCK prefix), which is even slower than a hypothetical lock mov.

Using a lock prefix is also a full memory barrier, so it imposes a performance overhead beyond just the atomic RMW. i.e. x86 can't do relaxed atomic RMW (without flushing the store buffer). Other ISAs can, so using .fetch_add(1, memory_order_relaxed) can be faster on non-x86.

Fun fact: Before mfence existed, a common idiom was lock add dword [esp], 0, which is a no-op other than clobbering flags and doing a locked operation. [esp] is almost always hot in L1 cache and won't cause contention with any other core. This idiom may still be more efficient than MFENCE as a stand-alone memory barrier, especially on AMD CPUs.

xchg [mem], reg is probably the most efficient way to implement a sequential-consistency store, vs. mov+mfence, on both Intel and AMD. mfence on Skylake at least blocks out-of-order execution of non-memory instructions, but xchg and other locked ops don't. Compilers other than gcc do use xchg for stores, even when they don't care about reading the old value.



Motivation for this design decision:

Without it, software would have to use 1-byte locks (or some kind of available atomic type) to guard accesses to 32bit integers, which is hugely inefficient compared to shared atomic read access for something like a global timestamp variable updated by a timer interrupt. It's probably basically free in silicon to guarantee for aligned accesses of bus-width or smaller.

For locking to be possible at all, some kind of atomic access is required. (Actually, I guess the hardware could provide some kind of totally different hardware-assisted locking mechanism.) For a CPU that does 32bit transfers on its external data bus, it just makes sense to have that be the unit of atomicity.


Since you offered a bounty, I assume you were looking for a long answer that wandered into all interesting side topics. Let me know if there are things I didn't cover that you think would make this Q&A more valuable for future readers.

Since you linked one in the question, I highly recommend reading more of Jeff Preshing's blog posts. They're excellent, and helped me put together the pieces of what I knew into an understanding of memory ordering in C/C++ source vs. asm for different hardware architectures, and how / when to tell the compiler what you want if you aren't writing asm directly.

Is assigning a pointer in C program considered atomic on x86-64

Bear in mind that atomicity alone is not enough for communicating between threads. Nothing prevents the compiler and CPU from reordering previous/subsequent load and store instructions with that "atomic" store. In the old days, people used volatile to prevent that reordering, but it was never intended for use with threads and doesn't provide a means to specify a less or more restrictive memory order (see "Relationship with volatile" in there).

You should use C11 atomics because they guarantee both atomicity and memory order.

Is it guaranteed that x86 instruction fetch is atomic, so that rewriting an instruction with a short jump is safe for concurrent thread execution?

Instruction fetch is not architecturally guaranteed to be atomic. In practice, though, an instruction cache fill transaction is, by definition, atomic, meaning that the line being filled cannot change before the transaction completes (which happens when the whole line is stored in the IFU, but not necessarily in the instruction cache itself). The instruction bytes are also delivered to the input buffer of the instruction predecode unit at some atomic granularity. On modern Intel processors, the instruction cache line size is 64 bytes and the input width of the predecode unit is 16 bytes, with an address aligned on a 16-byte boundary. (Note that the 16 bytes can be delivered to the predecode unit before the entire transaction of fetching the cache line containing them completes.) Therefore, an instruction aligned on a 16-byte boundary is guaranteed to be fetched atomically, together with at least one byte of the following contiguous instruction, depending on the size of the instruction. But this is a microarchitectural guarantee, not an architectural one.

It seems to me that by instruction fetch atomicity you're referring to atomicity at the granularity of individual instructions, rather than some fixed number of bytes. Either way, instruction fetch atomicity is not required for hotpatching to work correctly. It's actually impractical because instruction boundaries are not known at the time of fetch.

If instruction fetch is atomic, it may still be possible to fetch, execute, and retire the instruction being modified with only one of the two bytes being written (or none of the bytes, or both). The allowed orders in which writes become globally observable (GO) depend on the effective memory types of the target memory locations. So hotpatching would still not be safe.

Intel specifies in Section 8.1.3 of the SDM V3 how self-modifying code (SMC) and cross-modifying code (XMC) should work to guarantee correctness on all Intel processors. Regarding SMC, it says the following:

To write self-modifying code and ensure that it is compliant with
current and future versions of the IA-32 architectures, use one of the
following coding options:

(* OPTION 1 *)

Store modified code (as data) into code segment;

Jump to new code or an intermediate location;

Execute new code;

(* OPTION 2 *)

Store modified code (as data) into code segment;

Execute a serializing instruction; (* For example, CPUID instruction *)

Execute new code;

The use of one of these options is not required for programs intended
to run on the Pentium or Intel486 processors, but are recommended to
ensure compatibility with the P6 and more recent processor families.

Note that the last statement is incorrect. The writer probably intended to say instead: "The use of one of these options is not required for programs intended to run on the Pentium or later processors, but are recommended to ensure compatibility with the Intel486 processors." This is explained in Section 11.6, from which I want to quote an important statement:

A write to a memory location in a code segment that is currently cached in the processor causes the associated cache line (or lines) to be invalidated. This check is based on the physical address of the instruction. In addition, the P6 family and Pentium processors check whether a write to a code segment may modify an instruction that has been prefetched for execution. If the write affects a prefetched instruction, the prefetch queue is invalidated. This latter check is based on the linear address of the instruction.

Briefly, prefetch buffers are used to maintain instruction fetch requests and their results. Starting with the P6, they were replaced with streaming buffers, which have a different design. The manual still uses the term "prefetch buffers" for all processors. The important point here is that, with respect to what is guaranteed architecturally, the check in the prefetch buffers is done using linear addresses, not physical addresses. That said, probably all Intel processors do these checks using physical addresses, which can be proved experimentally. Otherwise, this can break the fundamental sequential program order guarantee. Consider the following sequence of operations being executed on the same processor:

Store modified code (as data) into code segment;  
Execute new code;

Assume that the page offsets of the linear addresses being written to are the same as the page offsets of the linear addresses being fetched from, but the linear page numbers are different. However, both pages map to the same physical page. If we go by what's guaranteed architecturally, it's possible for instructions from the old code to retire even though they are positioned later in program order with respect to the writes that modify the code. That's because an SMC condition cannot be detected by comparing linear addresses alone, and the stores are allowed to retire, and later instructions can retire, before the writes are committed. In practice, this doesn't happen, but it's possible architecturally. On AMD processors, the AMD APM V2 Section 7.6.1 states that these checks are based on physical addresses. Intel should do this too and make it official.

So to fully adhere to the Intel manual, there should be a fully serializing instruction as follows:

Store modified code (as data) into code segment;
Execute a serializing instruction; (* For example, CPUID instruction *)
Execute new code;

This is identical to OPTION 2 from the manual. However, some 486 processors don't support the CPUID instruction, so for the sake of compatibility with the 486, the following code works on all processors:

Store modified code (as data) into code segment;
If (486 or AMD before K5) Jump to new code;
ElseIf (Intel P5 or later) Execute a serializing instruction; (* For example, CPUID instruction *)
Else; (* Do nothing on AMD K5 and later *)
Execute new code;

Otherwise, if it's guaranteed that there is no aliasing, the following code works correctly on modern processors:

Store modified code (as data) into code segment;
Execute new code;

As already mentioned, in practice, this also works correctly in any case (aliasing or not).

If the instructions being modified are stored in uncacheable memory locations (UC or WC), a fully serializing instruction is required on some or all of Intel P5+ and AMD K5+ processors, unless it can be guaranteed that the locations being written to were never fetched from prior to completing all needed modifications.

Within the context of hotpatching, the thread that modified the bytes and the thread that executes the code may happen to run on the same logical processor. If the threads are in different processes, switching between them requires changing the current process context, which involves executing at least one fully serializing instruction to change the linear address space. The architectural requirements for SMC end up being satisfied anyway. Code modifications don't have to happen atomically, even if they cross multiple instructions.

Section 8.1.3 states the following regarding XMC:

To write cross-modifying code and ensure that it is compliant with
current and future versions of the IA-32 architecture, the following
processor synchronization algorithm must be implemented:

(* Action of Modifying Processor *)

Memory_Flag := 0; (* Set Memory_Flag to value other than 1 *)

Store modified code (as data) into code segment;

Memory_Flag := 1;

(* Action of Executing Processor *)

WHILE (Memory_Flag ≠ 1)

Wait for code to update;

ELIHW;

Execute serializing instruction; (* For example, CPUID instruction *)

Begin executing modified code;

(The use of this option is not required for programs intended to run
on the Intel486 processor, but is recommended to ensure compatibility
with the Pentium 4, Intel Xeon, P6 family, and Pentium processors.)

A fully serializing instruction is required here for a different reason mentioned in the errata of some Intel processors: cross-processor snoops may only snoop the instruction cache and not the prefetch buffers or internal pipeline buffers. The processor may speculatively fetch instructions before it observes all modifications and, without full serialization, it may execute a mix of old and new instruction bytes. A fully serializing instruction prevents speculative fetches. The code without serialization is called unsynchronized XMC. As the manual states, serialization is not needed on the 486.

AMD processors also require a fully serializing instruction to be executed on the executing processor before the modified instructions. On AMD, MFENCE is fully serializing and more convenient than CPUID.

Intel's algorithm assumes that the executing processor remains in a waiting state until Memory_Flag is changed to 1. The initial state of Memory_Flag is assumed to be not 1. If both processors are executing in parallel, the modifying processor should make sure that the executing processor is outside of the execution region before modifying any instructions. This can be achieved using a readers–writer mutex in general.

Now let's get back to the hotpatching example you've provided and check if it works correctly with respect to only architectural guarantees on Intel processors. It can be modeled as follows:

(* Action of Modifying Processor *)
Store 0xEB;
Store offset;

(* Action of Executing Processor *)
Execute the first instruction of the function, which is at least two bytes in size;

If the two bytes cross an instruction cache line boundary, the following may occur:

  1. It's possible for the executing processor to fetch the line containing the first byte into the input buffer of the predecode unit, but not yet the other line.
  2. The modifying processor (atomically or not) writes both bytes.
  3. Before the bytes reach GO, the instruction cache of the executing processor is snooped for both cache lines, and the lines are invalidated if found.
  4. At this point, the first byte has already been delivered into the pipeline and not flushed by the RFO snoop (although it should've been on the Pentium P5 and later). The second line is now fetched, which contains a modified byte. The processor proceeds to decode and execute an instruction that begins with an old byte and a new byte.

By the way, instruction fetch atomicity at the instruction granularity would have prevented this scenario from occurring.

I think this scenario is also possible if the two bytes cross a predecode chunk boundary (16 bytes) and both are in the same line, due to the errata mentioned earlier. Although this is very unlikely, because the cache line has to be invalidated exactly between two consecutive 16-byte chunk fetches into the predecode unit.

If the two bytes are fully contained in the same 16-byte fetch unit and the compiler emitted code such that the two bytes may not be written atomically as a single unit, it's possible that one byte reaches GO and is fetched and executed by the executing processor before the other byte reaches GO. Therefore, in this case as well, the executing processor may attempt to execute an instruction that begins with a new byte and an old byte.

Finally, if the two bytes are fully contained in the same 16-byte fetch unit and if the compiler emitted code such that the two written bytes reach GO atomically, the executing processor will either execute the old bytes or the new bytes, never mixed bytes. The readers–writer mutex semantics are provided naturally.

The default 16-byte alignment of functions ensures that the two bytes are in the same 16-byte fetch unit. A single 2-byte store instruction to a 16-byte aligned address is guaranteed to be atomic on the 486 and later (Section 8.1.1). However, the stores *(u8*)from = 0xEB; and *(u8*)(from + 1) = (u8)offset; are not guaranteed to be compiled into a single store instruction. With multiple store instructions, an interrupt can occur on the modifying processor before all of them reach GO, greatly increasing the chance of the executing processor executing mixed bytes. This is a bug. Relying on the 16-byte alignment works in practice, but it violates Section 8.1.3.

On AMD processors, the first two bytes must also be modified atomically, but the 16-byte alignment is not sufficient according to the architectural requirements in APM V2 Section 7.6.1. The instruction being modified must be contained entirely within a naturally-aligned quadword. If the compiler emits a dummy 2-byte instruction at the beginning of the function, then it would satisfy this requirement.

AMD officially supports unsynchronized XMC if some requirements are satisfied. Intel doesn't architecturally support unsynchronized XMC at all, although it does work in practice if some requirements are satisfied as already discussed.

Regarding the following comment:

// 3) HotPatch
//
// The HotPatch hooking is assuming the presence of an header with padding
// and a first instruction with at least 2-bytes.
//
// The reason to enforce the 2-bytes limitation is to provide the minimal
// space to encode a short jump. HotPatch technique is only rewriting one
// instruction to avoid breaking a sequence of instructions containing a
// branching target.

Well, if the first instruction is only one byte in size, irrespective of alignment and atomicity, an interrupt can occur on the executing processor immediately after retiring the first instruction but before retiring the second one. If the modifying processor modified the bytes before the executing processor returns from handling the interrupt, then when it returns, the behavior is unpredictable. So even if there are no branch targets inside the function, the first instruction still has to be at least 2 bytes in size.

Do I need an atomic if a value is only written?

According to §5.1.2.4 ¶25 and ¶4 of the ISO C11 standard, two different threads writing to the same memory location without synchronization constitute a data race, and a data race results in undefined behavior, even if only one of the threads ever reads the value.

