Is the ++ Operator Thread Safe

Is the ++ operator thread safe?

As other answers have pointed out, no, ++ is not "threadsafe".

Something that I think will help as you learn about multithreading and its hazards is to start being very precise about what you mean by "threadsafe", because different people mean different things by it. Essentially the aspect of thread safety you are concerned about here is whether the operation is atomic or not. An "atomic" operation is one which is guaranteed to not be halfway complete when it is interrupted by another thread.

(There are plenty of other threading problems that have nothing to do with atomicity but which may still fall under some people's definitions of thread safety. For example, given two threads each mutating a variable, and two threads each reading the variable, are the two readers guaranteed to agree on the order in which the other two threads made mutations? If your logic depends on that, then you have a very difficult thread safety problem to deal with even if every read and write is atomic.)

In C#, practically nothing is guaranteed to be atomic. Briefly:

  • reading a 32-bit integer or float
  • reading a reference
  • writing a 32-bit integer or float
  • writing a reference

are guaranteed to be atomic (see the specification for the exact details.)

In particular, reading and writing a 64-bit integer or a double is not guaranteed to be atomic. If you say:

C.x = 0xDEADBEEF00000000;

on one thread, and

C.x = 0x000000000BADF00D;

on another thread, then it is possible for a third thread running:

Console.WriteLine(C.x);

to write out 0xDEADBEEF0BADF00D, even though logically the variable never held that value. The C# language reserves the right to make writing to a long equivalent to writing to two ints, one after the other, and in practice some chips do implement it that way. A thread switch after the first of the two writes can cause a reader to read something unexpected.

The long and short of it is: do not share anything between two threads without locking something. Locks are only slow when they are contended; if you have a performance problem because of contended locks then fix whatever architectural flaw is leading to the contention. If the locks are not contended and are still too slow, only then should you consider going to dangerous low-lock techniques.

The common low-lock technique to use here is of course to call Threading.Interlocked.Increment, which does an increment of an integer in a manner guaranteed to be atomic. (Note however that it still does not make guarantees about things like what happens if two threads are each doing interlocked increments of two different variables at different times, and other threads are trying to determine which increment happened "first". C# does not guarantee that a single consistent ordering of events is seen by all threads.)

Is the += operator thread-safe in Java?

No. The += operation is not thread-safe. It requires locking and/or a proper chain of "happens-before" relationships for any expression involving assignment to a shared field or array element to be thread-safe.

(With a field declared as volatile, the "happens-before" relationships exist ... but only on read and write operations. The += operation consists of a read and a write. These are individually atomic, but the sequence isn't. And most assignment expressions using = involve both one or more reads (on the right hand side) and a write. That sequence is not atomic either.)
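
To make that read-then-write gap concrete, here is a minimal sketch (the class, method and field names are illustrative, not from the original question): even with a volatile field, two threads doing += lose updates, while a locked increment does not.

public class PlusEqualsRace {

    // volatile makes each individual read and each individual write of the
    // field atomic and visible, but "count += 1" is still a separate read
    // followed by a separate write.
    private static volatile int count = 0;

    private static int syncedCount = 0;

    // The lock makes the whole read-modify-write sequence atomic.
    private static synchronized void incrementSynced() {
        syncedCount += 1;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                count += 1;          // racy: concurrent updates can be lost
                incrementSynced();   // locked: no updates are lost
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // count usually ends up below 200000; syncedCount is always 200000.
        System.out.println(count + " vs " + syncedCount);
    }
}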

For the complete story, read JLS 17.4 ... or the relevant chapter of "Java Concurrency in Action" by Brian Goetz et al.

As I know basic operations on primitive types are thread-safe ...

Actually, that is an incorrect premise:

  • consider the case of arrays
  • consider that expressions are typically composed of a sequence of operations, and that a sequence of atomic operations is not guaranteed to be atomic (see the sketch after this list).
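
As a sketch of the array case (the array name and index are arbitrary), the single statement counts[i] += 1 really decomposes into separate read, compute and write steps, and a thread switch can happen between any of them:

public class ElementUpdateSteps {
    public static void main(String[] args) {
        double[] counts = new double[16];
        int i = 3;

        // counts[i] += 1 is really this sequence of separate steps:
        double tmp = counts[i];   // step 1: read the element
        tmp = tmp + 1;            // step 2: compute the new value
        counts[i] = tmp;          // step 3: write the element back

        // A second thread updating counts[i] between step 1 and step 3 would
        // have its update silently overwritten by step 3.
        System.out.println(counts[i]);
    }
}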

There is an additional issue for the double type. The JLS (17.7) says this:

"For the purposes of the Java programming language memory model, a single write to a non-volatile long or double value is treated as two separate writes: one to each 32-bit half. This can result in a situation where a thread sees the first 32 bits of a 64-bit value from one write, and the second 32 bits from another write."

"Writes and reads of volatile long and double values are always atomic."


In a comment, you asked:

So what type should I use to avoid global synchronization, which stops all threads inside this loop?

In this case (where you are updating a double[]), there is no alternative to synchronization with locks or primitive mutexes.

If you had an int[] or a long[] you could replace them with AtomicIntegerArray or AtomicLongArray and make use of those classes' lock-free update. However there is no AtomicDoubleArray class, or even an AtomicDouble class.
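
For instance, a sketch of the long[] case (the names, sizes and index are arbitrary); each element can be updated atomically without any explicit lock:

import java.util.concurrent.atomic.AtomicLongArray;

public class AtomicLongArrayExample {
    public static void main(String[] args) {
        AtomicLongArray totals = new AtomicLongArray(100);

        // Atomically adds to one element: no explicit lock, and no update is
        // lost even when many threads target the same index concurrently.
        totals.getAndAdd(7, 1L);

        System.out.println(totals.get(7));
    }
}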

(UPDATE - someone pointed out that Guava provides an AtomicDoubleArray class, so that would be an option. A good one actually.)
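
With Guava on the classpath, the double case looks much the same. A sketch, assuming Guava's AtomicDoubleArray with its addAndGet-style update:

import com.google.common.util.concurrent.AtomicDoubleArray;

public class AtomicDoubleArrayExample {
    public static void main(String[] args) {
        AtomicDoubleArray sums = new AtomicDoubleArray(100);

        // Lock-free atomic update of a single element.
        sums.addAndGet(7, 0.5);

        System.out.println(sums.get(7));
    }
}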

One way of avoiding a "global lock" and massive contention problems might be to divide the array into notional regions, each with its own lock. That way, one thread only needs to block another thread if they are using the same region of the array. (Single writer / multiple reader locks could help too ... if the vast majority of accesses are reads.)
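
A sketch of that striping idea (the class name, region count and lock granularity are arbitrary choices, not a standard API):

public class StripedDoubleArray {
    private final double[] values;
    private final Object[] regionLocks;
    private final int regionSize;

    public StripedDoubleArray(int length, int regions) {
        values = new double[length];
        regionLocks = new Object[regions];
        for (int r = 0; r < regions; r++) {
            regionLocks[r] = new Object();
        }
        regionSize = (length + regions - 1) / regions;  // ceiling division
    }

    // Threads only block each other when they touch indexes in the same region.
    public void add(int index, double delta) {
        synchronized (regionLocks[index / regionSize]) {
            values[index] += delta;
        }
    }

    public double get(int index) {
        synchronized (regionLocks[index / regionSize]) {
            return values[index];
        }
    }
}

The trade-off is a slightly more complicated index-to-lock mapping in exchange for contention that is limited to threads working on the same region.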

Is the += operator thread-safe in Python?

Single opcodes are thread-safe because of the GIL but nothing else:

import threading
import time

class something(object):
    def __init__(self, c):
        self.c = c

    def inc(self):
        new = self.c + 1
        # if the thread is interrupted by another inc() call its result is wrong
        time.sleep(0.001)  # the sleep makes the OS schedule another thread
        self.c = new

x = something(0)

threads = [threading.Thread(target=x.inc) for _ in range(10000)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(x.c)  # far less than 10000 (~900 in the original run) because increments were lost

Every resource shared by multiple threads must have a lock.

Is the increment (operator++) thread-safe in C++?

Thread safety is guaranteed only for atomic variables (std::atomic).

From the C++ standard:

The execution of a program contains a data race if it contains two conflicting actions in different threads, at least one of which is not atomic, and neither happens before the other. Any such data race results in undefined behavior.

The compiler doesn't have to consider thread safety for non-atomic variables, so it is allowed to translate ++ to multiple operations (pseudo code):

  1. Read g_maxValue to a register
  2. Increment the value in the register
  3. Store the value to g_maxValue

Is the pre-increment operator thread-safe?

No, you should be using something like java.util.concurrent.atomic.AtomicInteger. Look at its getAndIncrement() method.
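
For example, a minimal sketch (the class and field names are illustrative) of replacing a shared i++ with getAndIncrement():

import java.util.concurrent.atomic.AtomicInteger;

public class Counter {
    private final AtomicInteger next = new AtomicInteger();

    // Equivalent in intent to "return next++", but the read-increment-write
    // happens as one atomic operation, so concurrent callers never get the
    // same value twice.
    public int nextId() {
        return next.getAndIncrement();
    }
}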

Thread safety of += operator in C++

It is not thread-safe.

To get synchronized behaviour without blocking (mutexes), you could, for example, use the C++11 atomic types (std::atomic).

#include <atomic>
#include <iostream>
#include <thread>

std::atomic<int> a{5};

void work() {
    a += 5; // Performed atomically.
}

int main() {
    std::thread t1{work};
    std::thread t2{work};

    t1.join();
    t2.join();

    std::cout << a << std::endl; // Will always output 15.
}

Is the Scala List's cons-operator :: thread-safe?

To add to the answer that Jim Collins gave:

All operations on immutable data structures are generally thread safe.
The list cons operator does not modify the original list, since every list is immutable.
Instead, it creates a new list that represents the changed state of the list.

Thread synchronization issues only arise when different threads want to change the same data in memory.
This problem is called "shared state".
Immutable objects are stateless, therefore there can be no shared state.

Scala also offers mutable data structures. Look out for the term "mutable" in the package name.
Those have the same synchronization issues that the Java collections have.

By default, if you just type List inside a Scala program, without adding any special import statements, Scala will use an immutable list.
The same goes for Set, Map, etc.

Rule of thumb: If a class does not contain a var statement, and does not reference any other class that contains a var statement, it can be regarded as immutable.
That means it can be passed between threads safely.
(I know that experts can easily construct exceptions to this rule, but as long as you do not know anything more specific, this is a pretty good rule to go by, edge cases aside.)


