Is it possible for a single-threaded program to be executed simultaneously on more than one CPU core?
A single-threaded process will only ever run on a single core at any given moment in time.
But it's possible for the OS to migrate a thread from one core to another. While the OS will try to keep a thread on the same core, if the last core the thread ran on is busy and another core is free, the OS can migrate the thread.
While debugging, will a single threaded application jump between cores?
- yes
- yes
- it depends.
What's happening is that the scheduler picks the best CPU (let's define "CPU" loosely as any one of: physical CPU, core, hyperthread) for your process to run on, depending on many variables. Generally a scheduler will attempt to keep a process on the same CPU to avoid expensive cache and TLB misses, but it has to make a trade-off between the cost of moving the process to a CPU it didn't previously run on and the cost of waiting for the previous CPU to become available.
Let's say that your process X was running on CPU 0. It became non-runnable for some reason (waiting for a lock or I/O, or preempted because it used too much CPU time and some other process needed to run). Another process Y starts running on CPU 0. For some reason your process X becomes runnable again. CPU 1 is idle. Now the scheduler can make four possible decisions:
1. Wait for process Y to finish running, then run process X on CPU 0.
2. Preempt process Y, run process X on CPU 0, move process Y to CPU 1.
3. Preempt process Y, run process X on CPU 0 until it stops running, then resume process Y on CPU 0.
4. Run process X on CPU 1.
Different schedulers in different operating systems will make different decisions. Some prefer lower latency for all processes regardless of the cost of switching to a different CPU, so they'll always pick option 4. Some prefer strong affinity, so they'll pick option 1. In many cases the scheduler makes an educated guess about how much cache state process X has left on CPU 0 and decides that since the process was suspended for some amount of time, it probably doesn't have much cache/TLB state left there, so it doesn't cost much to move it to a different CPU. Many schedulers also take the memory bus layout into account when calculating the cost of moving the process, and in your case the scheduler may know that the move is cheap. The scheduler might also make a best-effort guess about how process Y behaves: if Y is likely to finish running soon, the scheduler might wait for it to finish.
Generally, unless you're doing something that really needs to squeeze the last nanosecond of performance out of your application, you don't need to worry about this. The scheduler will make a good enough decision, and even when it doesn't, it won't matter much for most applications. For most purposes you can assume your process gets moved between CPUs anywhere from never to between any two instructions.