Using a Global Variable With a Thread

You just need to declare a as a global in thread2, so that you aren't modifying an a that is local to that function.

def thread2(threadname):
    global a
    while True:
        a += 1

In thread1, you don't need to do anything special, as long as you don't try to modify the value of a (which would create a local variable that shadows the global one); use global a if you need to modify it.

def thread1(threadname):
    # global a  # Optional if you treat a as read-only
    while a < 10:
        print(a)

Can global variables be accessed from a thread within a new process?

It is hard to understand exactly what you are asking, but there are two main questions here:

Can global variables be accessed from another process?

No, not without some form of inter-process communication, and even then you would be passing a copy of that variable to the other process. Each process has its own global state.
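As an illustration of that point, here is a minimal sketch (the names bump and demo are mine, not from the question): a child process increments a global and reports its view back through a Queue, while the parent's copy stays untouched.

```python
import multiprocessing

counter = 0

def bump(q):
    global counter
    counter += 1      # modifies the child process's copy only
    q.put(counter)    # send the child's view back (a copy, via IPC)

def demo():
    q = multiprocessing.Queue()
    p = multiprocessing.Process(target=bump, args=(q,))
    p.start()
    child_view = q.get()
    p.join()
    return counter, child_view

if __name__ == "__main__":
    print(demo())  # the parent still sees 0; the child saw 1
```

The child sees 1 because the increment happened in its own copy of the global; the parent's counter is still 0, and the only way to learn the child's value was to pass a copy back through the queue.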

Can global variables be accessed from another thread?

Yes, a thread that lives in the same process can access a global variable, but you must ensure the safety of any memory that is accessed by multiple threads. That is, threads should not write to memory at the same time as other threads are reading or writing it, or you run the risk of one thread reading a value while another is halfway through writing it.
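For example, a minimal sketch (names are mine) where a Lock serializes the read-modify-write on a shared global so two threads never update it at the same time:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:        # only one thread at a time past this point
            counter += 1

def demo():
    global counter
    counter = 0
    threads = [threading.Thread(target=add_many, args=(100_000,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter        # always 200000 with the lock in place

if __name__ == "__main__":
    print(demo())  # 200000
```

Without the with lock: line, the two threads could interleave the read and the write of counter and lose increments.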

Answering Question Above

If I understand the setup correctly, each of your child processes has its own global variable queue. Each of those queues should be accessible only to the threads spawned within that process.

Thread issue using global variable

Read some tutorial about pthreads. You cannot expect reproducible behavior when accessing and modifying global data from several threads (without additional synchronization precautions in the code). AFAIU your code exhibits some tricky undefined behavior and you should be scared (maybe it is only unspecified behavior in your case). To explain the observed concrete behavior you would need to dive into implementation details (studying the generated assembler code, the behavior of your particular hardware, and so on; you don't have time for that).

Also (since info is a pointer to an int)

int calc = info;

doesn't make a lot of sense (I guess you made some typo). On some systems (like my x86-64 running Linux), a pointer is wider than an int (so calc loses half of the bits of info). On other (rare) systems, it could be smaller. Sometimes (e.g. i686 running Linux) they have the same size. You should consider intptr_t from <stdint.h> if you want to cast pointers to integral values and back.
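A minimal sketch of that intptr_t round-trip (the function name is mine, purely illustrative):

```c
#include <stdint.h>

/* Cast the pointer to an integer type wide enough to hold it, then
   back; intptr_t guarantees this round-trip, unlike int. */
int pointer_roundtrips(int *p) {
    intptr_t as_int = (intptr_t)p;  /* no bits are lost here */
    int *back = (int *)as_int;
    return back == p;               /* the original pointer is recovered */
}
```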

Actually, you should protect the access to that global data (inside i, perhaps accessed through a pointer) with a mutex, or use C11 atomic operations, since that data is used by several concurrent threads.

So you could declare a global mutex like

 pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

(or use pthread_mutex_init) then in your sum you would code

pthread_mutex_lock(&mtx);
i = i + calc;
pthread_mutex_unlock(&mtx);

(see also pthread_mutex_lock(3p) and pthread_mutex_unlock(3p)). Of course you should code likewise in your main.

Locking a mutex is a bit expensive (typically several dozen times the cost of an addition), even when it was unlocked. You might consider atomic operations if you can code in C11, since you are dealing with integers. You would declare atomic_int i; and use atomic_load and atomic_fetch_add on it.
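A minimal sketch of that C11 alternative (the helper names and thread counts are mine):

```c
#include <stdatomic.h>
#include <pthread.h>

atomic_int i = 0;   /* the shared counter, now atomic */

static void *worker(void *arg) {
    for (int k = 0; k < 100000; k++)
        atomic_fetch_add(&i, 1);   /* atomic read-modify-write, no mutex */
    return NULL;
}

/* Run four workers to completion and return the final count. */
int run_atomic_demo(void) {
    pthread_t t[4];
    for (int k = 0; k < 4; k++)
        pthread_create(&t[k], NULL, worker, NULL);
    for (int k = 0; k < 4; k++)
        pthread_join(t[k], NULL);
    return atomic_load(&i);        /* atomic read of the result */
}
```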

If you are curious, see also pthreads(7) & futex(7).

Multi-threaded programming is really difficult (for everyone). You cannot expect behavior to be reproducible in general, and your code could apparently behave as expected and still be very wrong (and will work differently on some other system). Read also about memory models, CPU caches, cache coherence, concurrent computing...

Consider also using GCC's thread sanitizer instrumentation options and/or valgrind's helgrind.

python global variable inside a thread

GLB is a mutable object. To let one thread see a consistent value while another thread modifies it, you can either protect the object temporarily with a lock (the modifier will wait) or copy the object. In your example, a copy seems the best option. In Python, a slice copy is atomic, so it does not need any other locking.

import random
import time
import threading

GLB = [0, 0]

# this is a thread
def t1():
    while True:
        GLB[0] = random.randint(0, 100)
        GLB[1] = 1
        print(GLB)

# this is a thread
def t2():
    while True:
        static = GLB[:]
        if static[0] <= 30:
            for i in range(50):
                print(i, " ", static)

a = threading.Thread(target=t1)
b = threading.Thread(target=t2)

a.start()
b.start()

while True:
    time.sleep(1)

Protecting global variable when using multiple threads

If I may suggest the use of a pthread mutex, which also achieves mutual exclusion on shared variables, the example below accomplishes this, and it might be quicker for what you are trying to accomplish.

#include <pthread.h>

//Shared global variable
float total = 0;

//Shared lock
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

//some thread function that adds 1,000 to total one thread at a time
void *compute(void *arg){
    //If no thread is using the lock, acquire it and add 1,000 to total.
    //The lock prevents other threads from executing this piece of code
    //at the same time.
    pthread_mutex_lock(&lock);
    total += 1000;
    //Release lock
    pthread_mutex_unlock(&lock);

    return NULL;
}

This way, if thread T1 executes the compute function and the lock is free, it will acquire the lock, increment total, and then release the lock. If thread T2 calls compute while T1 has the lock, T2 will not be able to continue beyond that point in the code and will wait until the lock resource is freed by T1. Thus it protects the global variable; threads that wish to mutate shared variables are unable to do so at the same time while one thread holds the lock.
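For completeness, here is a hypothetical driver (the function run_compute_threads and the thread count are mine, not from the question) that spawns several threads over compute, with the lock and unlock calls spelled out:

```c
#include <pthread.h>

float total = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *compute(void *arg) {
    pthread_mutex_lock(&lock);    /* wait here if another thread holds it */
    total += 1000;
    pthread_mutex_unlock(&lock);  /* let the next waiting thread in */
    return NULL;
}

/* Spawn n threads, wait for all of them, and return the final total. */
float run_compute_threads(int n) {
    pthread_t tids[n];
    for (int i = 0; i < n; i++)
        pthread_create(&tids[i], NULL, compute, NULL);
    for (int i = 0; i < n; i++)
        pthread_join(tids[i], NULL);
    return total;
}
```

With the mutex in place, the result is n * 1000 regardless of how the threads interleave.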

How to share global variable across thread in php?

Ordinarily, threads are executed in the same address space as the process that started them.

So, in C, the new threads are able to directly access variables on the stack of the main program.

When you create a new thread in PHP, it has a separate heap and must execute in a separate address space.

This means that by default, you cannot share global state between threads.

This is the normal threading model of PHP - Share Nothing.

What pthreads does is introduce objects which are able to be manipulated in many contexts, and are able to share data among those contexts.

The equivalent PHP code might look something like:

class Atomic extends Threaded {

    public function __construct($value = 0) {
        $this->value = $value;
    }

    public function inc() {
        return $this->value++;
    }

    /* ... */

    private $value;
}

class Test extends Thread {

    public function __construct(Atomic $atomic) {
        $this->atomic = $atomic;
    }

    public function run() {
        $this->atomic->inc();
    }

    private $atomic;
}

$atomic = new Atomic();
$threads = [];

for ($thread = 0; $thread < 2; $thread++) {
    $threads[$thread] = new Test($atomic);
    $threads[$thread]->start();
}

foreach ($threads as $thread)
    $thread->join();

Notice that Mutex is not used directly (and has been removed from the latest versions of pthreads). Using Mutex is dangerous because you do not have enough control over execution to use one safely: if you lock a mutex and then, for whatever reason, the interpreter suffers a fatal error, you will not be able to release the mutex, and deadlocks will follow...

Nor is it necessary because single instruction operations on the object scope are atomic.

When it comes to implementing exclusion, you can use the Threaded::synchronized API to great effect.

Where exclusion is required, the run method might look more like:

    public function run() {
        $this->atomic->synchronized(function($atomic) {
            /* exclusive */
        }, $this->atomic);
    }

Finally, a lesson in naming things ...

You appear to be, and are forgiven for being, under the impression that there is some parallel to be drawn between Posix Threads (the standard, pthread) and pthreads, the PHP extension ...

pthreads, the PHP extension happens to use Posix Threads, but it doesn't implement anything like Posix Threads.

The name pthreads should be taken to mean PHP threads ... naming things is hard.

Is it dangerous to read global variables from separate threads at potentially the same time?

§1.10 [intro.multithread] (quoting N4140):

6 Two expression evaluations conflict if one of them modifies a
memory location (1.7) and the other one accesses or modifies the same
memory location.

23 Two actions are potentially concurrent if

  • they are performed by different threads, or
  • they are unsequenced, and at least one is performed by a signal handler.

The execution of a program contains a data race if it contains two
potentially concurrent conflicting actions, at least one of which is
not atomic, and neither happens before the other, except for the
special case for signal handlers described below. Any such data race
results in undefined behavior.

Purely concurrent reads do not conflict, and so are safe.

If at least one of the threads writes to a memory location and another reads from that location, then they conflict and are potentially concurrent. The result is a data race, and hence undefined behavior, unless appropriate synchronization is used: either atomic operations for all reads and writes, or synchronization primitives that establish a happens-before relationship between the read and the write.
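As a sketch of the atomic option (all names here are illustrative), making both the write and the read atomic removes the conflict:

```cpp
#include <atomic>
#include <thread>

std::atomic<int> value{0};  // atomic: concurrent access is not a data race

// One thread writes while another reads concurrently; both operations
// are atomic, so there is no undefined behavior. After both threads are
// joined, the written value is guaranteed to be visible.
int demo() {
    std::thread writer([] { value.store(42); });
    std::thread reader([] { (void)value.load(); });  // may observe 0 or 42
    writer.join();
    reader.join();
    return value.load();  // 42, once the writer has joined
}
```

Had value been a plain int, the same two threads would constitute a data race under the quoted wording.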
