Why Is i++ Not Atomic

Why is i++ not atomic?

i++ is not atomic in Java, most likely because atomicity is a special requirement that is absent from the majority of uses of i++, and that requirement carries a significant overhead: making an increment atomic involves synchronization at both the software and hardware levels that an ordinary increment does not need.

You could argue that i++ should have been designed and documented to perform an atomic increment, so that a non-atomic increment would be written as i = i + 1. However, that would break the "cultural compatibility" between Java and C/C++, and it would take away a convenient notation that programmers familiar with C-like languages take for granted, giving it a special meaning that applies only in limited circumstances.

Basic C or C++ code like for (i = 0; i < LIMIT; i++) would translate into Java as for (i = 0; i < LIMIT; i = i + 1), because it would be inappropriate to use the atomic i++. What's worse, programmers coming to Java from C or other C-like languages would use i++ anyway, resulting in unnecessary use of atomic instructions.

Even at the machine instruction set level, an increment-type operation is usually not atomic, for performance reasons. On x86, a special lock prefix must be added to make the inc instruction atomic, for the same reasons as above. If inc were always atomic, it would never be used when a non-atomic increment suffices; programmers and compilers would instead generate code that loads, adds 1, and stores, because that would be far faster.

In some instruction set architectures, there is no atomic inc, or perhaps no inc at all; to do an atomic increment on MIPS, you have to write a software loop that uses ll and sc: load-linked and store-conditional. Load-linked reads the word, and store-conditional stores the new value only if the word has not changed in the meantime; otherwise it fails, which is detected and causes a retry.
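
The same retry pattern shows up one level up in Java: the java.util.concurrent.atomic classes expose compare-and-set, and an atomic increment can be built as a read/compute/compare-and-set loop that retries whenever another thread slipped in between. A minimal sketch by way of analogy (AtomicInteger already provides incrementAndGet; the loop below just spells out what the retry looks like):

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    private static final AtomicInteger counter = new AtomicInteger(0);

    // Increment built from a compare-and-set retry loop: read the current
    // value, compute the new one, and store it only if nobody else has
    // changed the variable in the meantime.
    static int increment() {
        while (true) {
            int current = counter.get();                 // analogous to load-linked
            int next = current + 1;
            if (counter.compareAndSet(current, next)) {  // analogous to store-conditional
                return next;                             // success
            }
            // CAS failed: another thread updated counter first, so retry.
        }
    }

    public static void main(String[] args) {
        System.out.println(increment()); // prints 1
    }
}

On MIPS the ll/sc pair plays the role of get and compareAndSet here; if another write hits the location between the two, the store-conditional fails and the loop repeats.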

Writing long and double is not atomic in Java?

It's not guaranteed to be atomic because it can be a multiple-step operation at the machine code level: on a 32-bit platform, longs and doubles are longer than the processor's word length, so a single write may be split into two.

How come the INC instruction of x86 is not atomic?

Why would it be? The processor core still needs to read the value stored at the memory location, calculate the incremented value, and then store it back. There is latency between reading and storing, and in the meantime another operation could have affected that memory location.

Even with out-of-order execution, a processor core is 'smart' enough not to trip over its own instructions, so it would not itself modify that memory in the time gap. However, another core could have issued an instruction that modifies the location, a DMA transfer could have affected it, or some other piece of hardware could have touched that memory location.

Is iinc atomic in Java?

No, it's not. An increment such as c++ breaks down into three separate steps (the sketch after the list shows how two threads interleaving these steps can lose updates):

  • Retrieve the current value of c.
  • Increment the retrieved value by 1.
  • Store the incremented value back in c.
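
A minimal demonstration of that interleaving (the class name LostUpdates and the counter field are made up for illustration): two threads each perform a large number of unsynchronized increments, and the final count typically comes out lower than 2,000,000 because increments based on the same stale read overwrite each other.

public class LostUpdates {
    static int c = 0; // plain, unsynchronized counter

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                c++; // read, add 1, write back: three non-atomic steps
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 2_000_000, but the printed value is typically lower.
        System.out.println(c);
    }
}

Replacing the field with an AtomicInteger, or guarding the increment with synchronized as shown below, makes the count come out exactly right.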

Java Documentation for Atomicity and Thread Interference

You need to either use the synchronized keyword or use the AtomicXXX classes for thread safety.

UPDATE:

public synchronized void increment() {
    c++;
}

or

AtomicInteger integer = new AtomicInteger(1);
//somewhere else in code
integer.incrementAndGet();

Also read: Is iinc atomic in Java?

long and double assignments are not atomic - How does it matter?

Where improper programming with an int may result in stale values being observed, improper programming with a long may result in values that never actually existed being observed.

This could theoretically matter for a system that only needs to be eventually correct rather than point-in-time correct, and therefore skips synchronization for performance, although skipping a volatile field declaration in the interest of performance seems, on casual inspection, like foolishness.
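
To make "values that never actually existed" concrete, here is a sketch of the kind of program that can expose it (class and field names are made up; whether it reproduces depends on the JVM and platform, since most 64-bit JVMs happen to write longs atomically, and without volatile the reader is not even guaranteed to see the writer's updates at all):

public class TornLongRead {
    // Deliberately NOT volatile, so the JVM is allowed to write it in two halves.
    static long value = 0L;

    public static void main(String[] args) {
        // Writer alternates between two patterns whose 32-bit halves differ.
        new Thread(() -> {
            while (true) {
                value = 0x0000000000000000L;
                value = 0xFFFFFFFFFFFFFFFFL; // i.e. -1L
            }
        }).start();

        // Reader looks for a value that was never written: one half from each
        // write, e.g. 0x00000000FFFFFFFF or 0xFFFFFFFF00000000.
        // (It may also spin forever: without volatile there is no visibility guarantee.)
        while (true) {
            long seen = value;
            if (seen != 0L && seen != -1L) {
                System.out.println("Torn read: " + Long.toHexString(seen));
                System.exit(0);
            }
        }
    }
}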

What does atomic mean in programming?

Here's an example: suppose foo is a variable of type long; then the following operation is not guaranteed to be an atomic operation in Java:

foo = 65465498L;

On a 32-bit JVM, the variable may be written using two separate operations: one that writes the first 32 bits and a second that writes the last 32 bits. That means another thread might read the value of foo and see the intermediate state.

Making the operation atomic means using synchronization mechanisms to ensure that the operation is seen, from any other thread, as a single, indivisible operation. Once the operation is made atomic, any other thread will see the value of foo either before the assignment or after the assignment, but never the intermediate value.

A simple way of doing this is to make the variable volatile:

private volatile long foo;

Or to synchronize every access to the variable:

public synchronized void setFoo(long value) {
    this.foo = value;
}

public synchronized long getFoo() {
    return this.foo;
}
// No other use of foo outside of these two methods, unless also synchronized.

Or to replace it with an AtomicLong:

private final AtomicLong foo = new AtomicLong();
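
For completeness, a minimal sketch of how the AtomicLong variant might be used in place of the plain assignment above (FooHolder and its method names are illustrative; set and get are the standard AtomicLong methods):

import java.util.concurrent.atomic.AtomicLong;

public class FooHolder {
    private final AtomicLong foo = new AtomicLong();

    public void setFoo(long value) {
        foo.set(value);    // atomic write of the full 64-bit value
    }

    public long getFoo() {
        return foo.get();  // atomic read, never a half-written value
    }

    public static void main(String[] args) {
        FooHolder holder = new FooHolder();
        holder.setFoo(65465498L);
        System.out.println(holder.getFoo()); // 65465498
    }
}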

When to NOT use Atomic Operations?

Because atomics are slower. They slow down the calling thread, and they may slow down other threads as well, potentially even ones not accessing the same atomics. They may also inhibit the compiler from performing certain reordering optimizations that it would otherwise perform.
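
As a concrete illustration (a sketch with made-up names, not a benchmark): data that is confined to a single thread needs neither atomicity nor cross-thread visibility, so a plain local accumulator is the idiomatic choice, and routing it through an AtomicLong would only add cost.

import java.util.concurrent.atomic.AtomicLong;

public class ConfinedCounter {
    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};

        // Thread-confined sum: no other thread can see 'sum', so a plain
        // local variable suffices and costs a simple add per step.
        long sum = 0;
        for (int v : data) {
            sum += v;
        }

        // Functionally equivalent here, but every addAndGet is an atomic
        // read-modify-write with hardware-level synchronization that buys
        // nothing, since the value is never shared between threads.
        AtomicLong atomicSum = new AtomicLong();
        for (int v : data) {
            atomicSum.addAndGet(v);
        }

        System.out.println(sum + " " + atomicSum.get()); // both 15
    }
}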

Are java primitive ints atomic by design or by accident?

All memory accesses in Java are atomic by default, with the exception of long and double (which may be atomic, but don't have to be). It's not put very clearly to be honest, but I believe that's the implication.

From section 17.4.3 of the JLS:

Within a sequentially consistent execution, there is a total order over all individual actions (such as reads and writes) which is consistent with the order of the program, and each individual action is atomic and is immediately visible to every thread.

and then in 17.7:

Some implementations may find it convenient to divide a single write action on a 64-bit long or double value into two write actions on adjacent 32 bit values. For efficiency's sake, this behavior is implementation specific; Java virtual machines are free to perform writes to long and double values atomically or in two parts.

Note that atomicity is very different from volatility (i.e. visibility), though.

When one thread updates an integer to 5, it's guaranteed that another thread won't see 1 or 4 or any other in-between state, but without any explicit volatility or locking, the other thread could see 0 forever.
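
The "could see 0 forever" part is easy to sketch (hypothetical class name; whether the loop actually hangs depends on the JIT, which is allowed, but not required, to hoist the non-volatile read out of the loop):

public class VisibilityDemo {
    static int value = 0; // writes are atomic, but the field is NOT volatile

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            // No torn values are possible here (int writes are atomic), but
            // without volatile or locking the JIT may hoist the read and this
            // loop may never observe the update to 5.
            while (value != 5) {
                // spin
            }
            System.out.println("saw 5");
        });
        reader.start();

        Thread.sleep(100); // give the reader time to start spinning
        value = 5;         // the reader may or may not ever see this
        reader.join();     // may never return on some JVMs
    }
}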

With regard to working hard to get atomic access to bytes, you're right: the VM may well have to try hard... but it does have to. From section 17.6 of the spec:

Some processors do not provide the ability to write to a single byte. It would be illegal to implement byte array updates on such a processor by simply reading an entire word, updating the appropriate byte, and then writing the entire word back to memory. This problem is sometimes known as word tearing, and on processors that cannot easily update a single byte in isolation some other approach will be required.

In other words, it's up to the JVM to get it right.

How can an atomic operation be not a synchronization operation?

The compiler and the CPU are allowed to reorder memory accesses; this is the as-if rule, and it assumes a single thread of execution.

In multithreaded programs, the memory order parameter specifies how memory accesses are to be ordered around an atomic operation. This is the synchronization aspect (the "acquire-release semantics") of an atomic operation that is separate from the atomicity aspect itself:

int x = 1;
std::atomic<int> y{1};

// Thread 1
x++;
y.fetch_add(1, std::memory_order_release);

// Thread 2
while (y.load(std::memory_order_acquire) == 1)
{ /* wait */ }
std::cout << x << std::endl; // x is 2 now

Whereas with a relaxed memory order we only get atomicity, but not ordering:

int x = 1;
std::atomic<int> y{1};

// Thread 1
x++;
y.fetch_add(1, std::memory_order_relaxed);

// Thread 2
while (y.load(std::memory_order_relaxed) == 1)
{ /* wait */ }
std::cout << x << std::endl; // x can be 1 or 2, we don't know

Indeed, as Herb Sutter explains in his excellent "atomic<> Weapons" talk, memory_order_relaxed makes a multithreaded program very difficult to reason about and should be used only in very specific cases, when there is no dependency between the atomic operation and any other operation before or after it in any thread (which is very rarely the case).


