Process Scheduling from Processor Point of View

In brief, it is an interrupt that gives control back to the kernel, and that interrupt may arrive for any number of reasons.
Most of the time the kernel regains control because of a timer interrupt, but a key-press interrupt can wake the kernel too,
as can an interrupt signalling completion of I/O on a peripheral device, or virtually anything else that changes the system state.

More about interrupts:

Interrupt handling is divided into a top half and a bottom half; bottom halves exist to defer work out of interrupt context.

Top-half: runs with interrupts disabled, so it should be very fast and relinquish the CPU as soon as possible. It usually:

1) saves the interrupt state flag and disables interrupts (resetting the relevant flag/pin on the processor),
2) communicates with the hardware, stores state information, and delegates the remaining work to the bottom half,
3) restores the interrupt state flag and re-enables interrupts (setting that flag/pin again).

Bottom-half: handles the deferred work delegated by the top half. It runs with interrupts enabled, so it may take a while to complete.

Two mechanisms are commonly used to implement bottom-half processing (a minimal driver-style sketch follows the list):

1) Tasklets
2) Work queues
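
To make the split concrete, here is a minimal sketch of how a Linux driver typically pairs a top-half interrupt handler with a work-queue bottom-half. It is not taken from any real driver; the handler names, DEMO_IRQ and the device name are invented placeholders.

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    /* Bottom-half: runs later in process context with interrupts enabled,
     * so it is allowed to sleep and take its time. */
    static void demo_bottom_half(struct work_struct *work)
    {
        /* ... slow processing deferred by the top-half ... */
    }
    static DECLARE_WORK(demo_work, demo_bottom_half);

    /* Top-half: runs in interrupt context and must be as short as possible. */
    static irqreturn_t demo_top_half(int irq, void *dev_id)
    {
        /* 1) acknowledge the device and grab the minimal state it exposes */
        /* 2) hand the heavy lifting to the bottom-half */
        schedule_work(&demo_work);
        return IRQ_HANDLED;
    }

    /* Registration, e.g. from the driver's probe/init path:
     *     request_irq(DEMO_IRQ, demo_top_half, IRQF_SHARED, "demo", dev);
     * DEMO_IRQ and dev stand in for a real device's IRQ line and cookie. */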

If a timer interrupt is what switches control back to the kernel, is that a hardware interrupt?

In the context of this discussion, the timer interrupt of interest is the hardware timer interrupt.

Inside the kernel, the term "timer interrupt" may refer either to (architecture-dependent) hardware timer interrupts or to software timers.

Read this for a brief overview.

More about timers

Remember that timers are an advanced topic and can be difficult to comprehend.

Is the interrupt a hardware interrupt? If it is a hardware interrupt, what is the frequency of the timer?

Read Chapter 10. Timers and Time Management

If the interval of the timer is shorter than the time slice, will the kernel give the CPU back to the same process that was running earlier?

It depends on many factors, for example the scheduler being used, the load on the system, process priorities, and things like that.
CFS, the most popular scheduler, doesn't really rely on the notion of a time slice for preemption;
whichever process CFS picks as the next most suitable task will get the CPU time.
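
As a rough illustration of that idea (a simplification, not the actual kernel code; the struct and field names below are invented), CFS keeps runnable tasks ordered by how much virtual runtime each has consumed and always runs the one that has had the least:

    #include <stdio.h>
    #include <stddef.h>

    struct task {
        const char *name;
        unsigned long long vruntime;  /* grows as the task runs, weighted by priority */
    };

    /* Simplified stand-in for CFS's pick-next logic: choose the runnable task
     * with the smallest vruntime.  The real kernel keeps tasks sorted in a
     * red-black tree; a flat array keeps this sketch short. */
    static struct task *pick_next(struct task *rq, size_t nr)
    {
        struct task *next = NULL;
        for (size_t i = 0; i < nr; i++)
            if (!next || rq[i].vruntime < next->vruntime)
                next = &rq[i];
        return next;
    }

    int main(void)
    {
        struct task rq[] = { {"editor", 480}, {"compiler", 120}, {"daemon", 900} };
        printf("next: %s\n", pick_next(rq, 3)->name);  /* prints "next: compiler" */
        return 0;
    }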

The relationship between timer ticks, time slices and context switching is not so straightforward.

Each process has its own (dynamically calculated) time slice. The kernel keeps track of the time slice used by the process.

On SMP, CPU-specific activities such as monitoring the execution time of the currently running process are driven by interrupts raised by the local APIC timer;
the local APIC timer sends interrupts only to its own processor.

However, a default time slice (used by the round-robin real-time policy, SCHED_RR) is defined in include/linux/sched/rt.h.
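
In recent kernel sources that definition looks roughly like the following (quoted approximately; check your own kernel version, and note that CFS computes its slices dynamically rather than using this constant):

    /* include/linux/sched/rt.h (approximate) */
    /*
     * default timeslice is 100 msecs (used only for SCHED_RR tasks).
     * Timeslices get refilled after they expire.
     */
    #define RR_TIMESLICE            (100 * HZ / 1000)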

Read this.

OS: does the process scheduler run in a separate process?

The process scheduler could feasibly run in a separate process, but such a design would be very inefficient: you would have to switch from one process to the scheduling process (which would then have to make several system calls into the kernel) and then switch again to the new process, as opposed to simply placing the scheduler in the kernel, where no system calls are needed and the context is swapped only once. Therefore, the scheduler is generally in the exclusive realm of the kernel.

Here are the steps that occur:

  1. The scheduler determines which process will run in the next time slot (through various different algorithms).

  2. The scheduler tells the Memory Managing Unit (MMU) to use the page table for the next process to run (this is done by setting a register to point to the table).

  3. The scheduler programs the programmable interval timer (PIT) to generate an interrupt after N clock cycles.

  4. The scheduler restores the state of the registers from when the process was last running (or sets them to default values for new processes).

  5. The scheduler jumps to the address of the last instruction that was not executed in the process.

  6. After N clock cycles, an interrupt occurs and the operating system recognizes it as caused by the PIT, which is registered to be handled by the scheduler.

  7. The scheduler saves the state of the registers (including the stack pointer, etc.), grabs the program counter at which the interrupt occurred (saving it as the address to jump to next time around), and then goes back to step 1.

This is just one example of how it can be done, and many of the low-level details are architecture specific. Essentially, all the registers (the program state) can be saved anywhere in RAM (say, in a linked list of structures that represent processes, each with space for the registers, etc.), and the virtual address space (defined by page tables) can be swapped arbitrarily.
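
For instance, a toy process control block along those lines might look like the sketch below; the field names are invented for illustration and are not taken from any particular OS.

    #include <stdint.h>

    /* Toy process control block (PCB).  Real kernels store much more
     * (credentials, open files, signal state, ...), but the core idea is
     * exactly this: a per-process record kept in kernel RAM that holds the
     * saved register state and a pointer to the address space. */
    struct pcb {
        uint64_t regs[16];         /* general-purpose registers saved at the last switch */
        uint64_t program_counter;  /* instruction to resume at (step 5 above) */
        uint64_t stack_pointer;
        uint64_t page_table_base;  /* value handed to the MMU in step 2 (e.g. CR3 on x86) */
        int      state;            /* RUNNING, READY, BLOCKED, ... */
        struct pcb *next;          /* linked-list hook for the run queue */
    };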

So essentially your question:

"Can we write a program to update the registers to point to a different process?"

is simply stated, and the answer is equally simple: yes, we sure can.

How does the processor know to switch to a higher-priority process?

The process scheduler is the component of the operating system that is
responsible for deciding whether the currently running process should
continue running and, if not, which process should run next.

To help the scheduler monitor processes and the amount of CPU time that they use, a programmable interval timer interrupts the processor periodically (typically 50 or 60 times per second). This timer is programmed when the operating system initializes itself. At each interrupt, the operating system’s scheduler gets to run and decide whether the currently running process should be allowed to continue running or whether it should be suspended and another ready process allowed to run. This is the mechanism used for preemptive scheduling.

So, basically, the process scheduler lives in main memory like the rest of the kernel, but it is only activated when invoked by an interrupt; it is not running all the time.

BTW, that was a great conceptual question to answer. Best wishes for your topic.

How does the OS scheduler regain control of CPU?

The OS sets up a hardware timer (the programmable interval timer, or PIT) that generates an interrupt every N milliseconds. That interrupt is delivered to the kernel, and user code is interrupted.

It works like any other hardware interrupt. For example, your disk will force a switch into the kernel when it has completed an I/O operation.
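
On legacy x86 PCs the PIT is programmed through I/O ports 0x43 (command) and 0x40 (channel 0 data). The sketch below is freestanding, kernel-style C that assumes x86 with GCC/Clang inline assembly and supplies its own port-output helper; the 100 Hz rate in the final comment is only an example.

    #include <stdint.h>

    #define PIT_BASE_HZ   1193182u   /* the PIT's fixed input clock */
    #define PIT_CHANNEL0  0x40       /* channel 0 data port */
    #define PIT_COMMAND   0x43       /* mode/command port */

    /* Minimal x86 port-write helper (GCC/Clang inline asm). */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    /* Program PIT channel 0 as a rate generator firing `hz` times per second.
     * Each expiry raises IRQ 0, which enters the kernel's timer interrupt
     * handler -- the hook the scheduler uses to regain control. */
    static void pit_set_frequency(uint32_t hz)
    {
        uint16_t divisor = (uint16_t)(PIT_BASE_HZ / hz);

        outb(PIT_COMMAND, 0x34);                    /* channel 0, lobyte/hibyte, mode 2 */
        outb(PIT_CHANNEL0, divisor & 0xFF);         /* low byte of the divisor */
        outb(PIT_CHANNEL0, (divisor >> 8) & 0xFF);  /* high byte of the divisor */
    }

    /* e.g. pit_set_frequency(100);  -> a timer interrupt every 10 ms */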

Scheduling on multiple cores: one run list per processor vs. one list that all processors share

Generally there is one giant problem when it comes to multi-core CPU systems: cache coherency.

What does cache coherency mean?

Access to main memory is slow. Depending on the memory frequency and how busy the memory controller is, an access to data in RAM can cost on the order of hundreds of CPU cycles - a whole lot of time during which the CPU does no useful work. It would be significantly better if we minimized this time as much as possible, but the hardware required to do this is expensive, and it typically must sit in very close physical proximity to the CPU itself (we're talking within a few millimetres of the core).

This is where the cache comes in. The cache keeps a small subset of main memory in close proximity to the core, allowing accesses to this memory to be several orders of magnitude faster than main memory. For reading this is a simple process - if the memory is in the cache, read from cache, otherwise read from main memory.

Writing is a bit more tricky. Writing to the cache is fast, but now main memory still holds the original value. We can update that memory, but that takes a while, sometimes even longer than reading depending on the memory type and board layout. How do we minimize this as well?

The most common way to do so is with a write-back cache, which, when written to, will flush the data contained in the cache back to main memory at some later point when the CPU is idle or otherwise not doing something. Depending on the CPU architecture, this could be done during idle conditions, or interleaved with CPU instructions, or on a timer (this is up to the designer/fabricator of the CPU).
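
As a toy model of the write-back idea (purely illustrative: a one-line "cache" in front of a tiny array standing in for main memory; real caches have many lines, tags and a replacement policy):

    #include <stdio.h>
    #include <stdbool.h>

    /* Toy one-line write-back cache in front of a tiny "main memory".
     * Writes land in the cache and only reach memory when the line is
     * flushed (on eviction here; real hardware may also flush when idle). */
    static int  memory[16];
    static int  cached_addr = -1;    /* which address the line holds, -1 = empty */
    static int  cached_val;
    static bool dirty;               /* line modified since it was loaded? */

    static void flush(void)
    {
        if (cached_addr >= 0 && dirty) {
            memory[cached_addr] = cached_val;   /* the deferred write-back happens here */
            dirty = false;
        }
    }

    static void cache_write(int addr, int val)
    {
        if (addr != cached_addr) { flush(); cached_addr = addr; }
        cached_val = val;
        dirty = true;                /* main memory is now stale until the next flush */
    }

    static int cache_read(int addr)
    {
        if (addr != cached_addr) { flush(); cached_addr = addr; cached_val = memory[addr]; }
        return cached_val;
    }

    int main(void)
    {
        cache_write(3, 42);
        printf("cache sees %d, memory still holds %d\n", cache_read(3), memory[3]);
        return 0;
    }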

Why is this a problem?

In a single-core system, there is only one path for reads and writes to take: they must go through the cache on their way to main memory. This means the programs running on the CPU only see what they expect - if a program reads a value, modifies it, then reads it back, it sees the modified value.

In a multi-core system, however, there are multiple paths for data to take when going back to main memory, depending on which CPU issued the read or write. This presents a problem with write-back caching, since that "later time" introduces a gap in which one CPU might read memory that hasn't yet been updated.

Imagine a dual core system. A job starts on CPU 0 and reads a memory block. Since the memory block isn't in CPU 0's cache, it's read from main memory. Later, the job writes to that memory. Since the cache is write-back, that write will be made to CPU 0's cache and flushed back to main memory later. If CPU 1 then attempts to read that same memory, CPU 1 will attempt to read from main memory again, since it isn't in the cache of CPU 1. But the modification from CPU 0 hasn't left CPU 0's cache yet, so the data you get back is not valid - your modification hasn't gone through yet. Your program could now break in subtle, unpredictable, and potentially devastating ways.

Because of this, the caches have to be kept synchronized. Bus snooping, directory-based coherence protocols, and other hardware mechanisms exist to synchronize the caches of multiple CPUs. All of these methods have one common problem: they force the CPU to spend time on bookkeeping rather than on actual, useful computation.
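
From the software side, the practical consequence is that cross-CPU visibility and ordering of shared data have to be requested explicitly; in portable C that means atomics (or locks) rather than plain loads and stores. Here is a minimal sketch using C11 atomics and POSIX threads (compile with -pthread); the producer stands in for the job on CPU 0 above and the consumer for CPU 1, and the names are invented:

    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    /* One thread publishes a value, another consumes it.  The release/acquire
     * pair makes the write visible and ordered for the reader; plain
     * non-atomic accesses would give the compiler and hardware licence to
     * surprise you. */
    static int data;
    static atomic_int ready;

    static void *producer(void *arg)
    {
        (void)arg;
        data = 42;                                               /* the actual payload */
        atomic_store_explicit(&ready, 1, memory_order_release);  /* publish */
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;                                                    /* spin until published */
        printf("consumer sees %d\n", data);                      /* guaranteed to print 42 */
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }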

The best method of avoiding this is actually keeping processes on one processor as much as possible. If the process doesn't migrate between CPUs, you don't need to keep the caches synchronized, as the other CPUs won't be accessing that memory at the same time (unless the memory is shared between multiple processes, but we'll not go into that here).

Now we come to the issue of how to design our scheduler, and the three main problems there - avoiding process migration, maximizing CPU utilization, and scalability.

Single Queue Multiprocessor scheduling (SQMS)

Single Queue Multiprocessor schedulers are what you suggested: one queue containing the available processes, with each core accessing that queue to get the next job to run. This is fairly simple to implement, but it has a couple of major drawbacks - it can cause a whole lot of process migration, and it does not scale well to larger systems with more cores.

Imagine a system with four cores and five jobs, each of which takes about the same amount of time to run, and each of which is rescheduled when completed. On the first run through, CPU 0 takes job A, CPU 1 takes B, CPU 2 takes C, and CPU 3 takes D, while E is left on the queue. Let's then say CPU 0 finishes job A, puts it on the back of the shared queue, and looks for another job to do. E is currently at the front of the queue, so CPU 0 takes E and goes on. Now CPU 1 finishes job B, puts B on the back of the queue, and looks for the next job. It now sees A and starts running A. But since A was on CPU 0 before, CPU 1 now needs to sync its cache with CPU 0, resulting in lost time for both CPU 0 and CPU 1. In addition, if two CPUs both finish their operations at the same time, they both need to write to the shared list, which has to be done sequentially or the list will get corrupted (just like in multi-threading). This requires that one of the two CPUs wait for the other to finish its writes and sync its cache back to main memory, since the list is in shared memory. This problem gets worse and worse the more CPUs you add, resulting in major problems on large servers (where there can be 16 or even 32 CPU cores), and being completely unusable on supercomputers (some of which have upwards of 1000 cores).
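
A bare-bones version of such a shared run queue (a user-space pthread sketch with invented names, not kernel code) makes the single point of contention obvious:

    #include <pthread.h>
    #include <stddef.h>

    /* Single Queue Multiprocessor Scheduling, in miniature: every core that
     * wants work must take the same lock, so the queue itself becomes the
     * bottleneck as the core count grows. */
    struct job { struct job *next; /* ... task state ... */ };

    static struct job     *head, *tail;
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    static void enqueue(struct job *j)
    {
        pthread_mutex_lock(&queue_lock);        /* every CPU serialises here */
        j->next = NULL;
        if (tail) tail->next = j; else head = j;
        tail = j;
        pthread_mutex_unlock(&queue_lock);
    }

    static struct job *dequeue(void)            /* called by whichever core is free */
    {
        pthread_mutex_lock(&queue_lock);
        struct job *j = head;
        if (j) { head = j->next; if (!head) tail = NULL; }
        pthread_mutex_unlock(&queue_lock);
        return j;
    }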

Multi-queue Multiprocessor Scheduling (MQMS)

Multi-queue multiprocessor schedulers have a single queue per CPU core, ensuring that all local core scheduling can be done without having to take a shared lock or synchronize the cache. This allows for systems with hundreds of cores to operate without interfering with one another at every scheduling interval, which can happen hundreds of times a second.

The main issues with MQMS are CPU utilization, where one or more CPU cores end up doing the majority of the work, and scheduling fairness, where one process on the computer gets scheduled more often than other processes of the same priority.

CPU utilization is the biggest issue: no CPU should ever be idle while a job is queued. However, if all CPUs are busy and we queue a job on one of them, and a different CPU later becomes idle, that idle CPU should "steal" the queued job from the original CPU so that every CPU is doing real work. Doing so, however, requires locking both CPUs' queues and potentially syncing the caches, which may eat into whatever speedup we gain by stealing the job.
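
Sketched in the same user-space style (again a toy, not how Linux actually implements its per-CPU runqueues; NCPU and the helper names are invented), the per-CPU queues and the stealing path look roughly like this:

    #include <pthread.h>
    #include <stddef.h>

    #define NCPU 4

    struct job { struct job *next; };

    /* One queue (and one lock) per CPU: the common case -- a core scheduling
     * from its own queue -- never contends with the other cores. */
    static struct job     *queue[NCPU];
    static pthread_mutex_t lock[NCPU] = {
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
        PTHREAD_MUTEX_INITIALIZER, PTHREAD_MUTEX_INITIALIZER,
    };

    static struct job *pop(int cpu)
    {
        pthread_mutex_lock(&lock[cpu]);
        struct job *j = queue[cpu];
        if (j) queue[cpu] = j->next;
        pthread_mutex_unlock(&lock[cpu]);
        return j;
    }

    /* Only when a core runs dry does it touch anyone else's queue ("work
     * stealing"), paying the locking/cache cost described above just once. */
    static struct job *get_work(int cpu)
    {
        struct job *j = pop(cpu);
        for (int victim = 0; !j && victim < NCPU; victim++)
            if (victim != cpu)
                j = pop(victim);
        return j;
    }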

In conclusion

Both methods exist in the wild - Linux actually has three different mainstream scheduler algorithms, one of which is an SQMS. The choice of scheduler really depends on the way the scheduler is implemented, the hardware you plan to run it on, and the types of jobs you intend to run. If you know you only have two or four cores to run jobs, SQMS is likely perfectly adequate. If you're running a supercomputer where overhead is a major concern, then an MQMS might be the way to go. For a desktop user - just trust the distro, whether that's a Linux OS, Mac, or Windows. Generally, the programmers for the operating system you've got have done their homework on exactly what scheduler will be the best option for the typical use case of their system.

This whitepaper describes the differences between the two types of scheduling algorithms in place.

How does the operating system achieve process switching?

There are two methods (that I can think of off the top of my head) that can cause a context switch:

1) The process/thread yields. This tends to be the most frequent cause of context switches. The thread queues an I/O request and waits for a response; the wait causes the thread/process to yield.

2) The timer, as described above. In the case of a compute-bound process (all processing, no I/O, no page faults), the CPU timer generates interrupts that get handled in kernel mode. An operating system has a set of housekeeping tasks to perform on each timer interrupt; one of them is to check whether the current process has exceeded its quantum. If it has, and there is another ready process at a higher priority, the kernel does a context switch (a rough sketch of this check follows).
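
In generic, textbook-style C (not any particular kernel's source; current, ticks_left and need_resched are illustrative names), the timer-driven path looks something like:

    /* Generic sketch of the per-tick housekeeping described above. */
    struct task { int ticks_left; int priority; };

    static struct task *current;     /* task running on this CPU */
    static int need_resched;         /* flag checked on the way back from the interrupt */

    void timer_tick(void)
    {
        /* ...update wall-clock time, expire timers, do accounting... */
        if (current && --current->ticks_left <= 0)
            need_resched = 1;        /* quantum used up: request a context switch */

        /* On return from the interrupt the kernel checks this flag and,
         * if it is set (or a higher-priority task became ready), calls
         * the scheduler to pick the next task to run. */
    }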

Is CPU scheduling in Linux based on processes or on threads?

It is partially based on the quantum, which is the basic unit of time a thread will execute for. I also believe there is a priority level, so multiple threads compete for time on the CPU: they wait in line with other threads of the same priority level, run until they are out of quantum, and are then sent to the back of the line. This is not an exact answer, but a high-level summary.

Also, I'm more familiar with Windows, but I think the principles are the same. The process is not executable code but a unit that owns resources, so scheduling is done per thread. I've read that Linux has a more complicated scheduling algorithm than Windows (possibly more overhead as a trade-off), but I'd speculate that threads of the same process likewise compete for CPU time. The difference is that no full context switch is necessary, because threads of the same process share the same address space.

This would explain the diminished returns when using more threads than the number of physical cores (or hardware threads, on Intel). The threads of a process have only a small chance of ever running at the same time; instead, they compete. So if you have 4000 threads, the time any single one of them is running is reduced to roughly 1/4000 of the total. However, if you were to use those 4000 threads to operate on a single shared problem, using shared storage to load the current state, you could then gain performance from the larger overall percentage of CPU time, since the probability that some one of the 4000 threads is running at any moment is higher.


