Is Timer Interrupt Independent of Whether System Is in Kernel Mode or User Mode

Is timer interrupt independent of whether system is in kernel mode or user mode?

The simple answer is that neither the execution of the hardware clock interrupt service routine nor the scheduling of the dynamic timer handlers is affected by the mode the system was in before the hardware clock interrupt. The reason is that the clock timer interrupt is a hardware interrupt that is serviced immediately, regardless of whether execution is in kernel or user context (assuming the timer interrupt is enabled), and the interrupt service routine for the clock timer interrupt itself raises the software interrupt that runs the dynamic timer handlers.

Caveats: 1) I haven't actually verified this empirically. 2) This does not apply to tickless kernels or highres timers.

The Linux kernel code uses the word "timer" to mean several different things:

  1. the hardware timer or clock interrupt that gives the kernel its "ticks"
  2. dynamic timers - software timers used by the kernel and drivers
  3. interval timers - software timers for user mode processes (the setitimer and alarm system calls; a short sketch follows this list)
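
As an aside, the third kind is the easiest to try out. Here is a minimal user-space sketch using the POSIX setitimer call (my own illustration, not part of the kernel discussion below):

    #include <signal.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void on_alarm(int sig)
    {
        (void)sig;
        write(STDOUT_FILENO, "tick\n", 5);   /* async-signal-safe */
    }

    int main(void)
    {
        struct itimerval itv = {
            .it_interval = { .tv_sec = 1, .tv_usec = 0 },  /* period */
            .it_value    = { .tv_sec = 1, .tv_usec = 0 },  /* first expiry */
        };

        signal(SIGALRM, on_alarm);           /* SIGALRM fires on expiry */
        setitimer(ITIMER_REAL, &itv, NULL);  /* wall-clock interval timer */
        for (;;)
            pause();                         /* wait for signals */
    }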

The hardware clock or tick timer

On systems that use a hardware clock to provide the "tick", the clock timer interrupt is an architecture-dependent hardware interrupt. For example, look for "timer_interrupt" in arch/powerpc/kernel/head_booke.h and then see the interrupt service routine (ISR) timer_interrupt implementation in arch/powerpc/kernel/time.c. This ISR executes immediately when the timer interrupt occurs, regardless of the current execution context. This hardware interrupt differs from other hardware interrupts, though, in that when it returns, processing does not return to the prior context; instead, the scheduler is entered.

For a system that is set to produce 1000 clock interrupts per second, there is a chance that clock interrupts will sometimes be masked while other interrupts are being serviced. This is usually called the "lost ticks" problem. Without compensating for lost ticks, a loaded system could have a slowed sense of time. On some architectures the kernel compensates for lost ticks by using a finer-grained hardware increment counter, whose value is read and recorded on every clock timer interrupt. By comparing the increment counter value of the current tick against the increment counter value of the previous tick, the kernel can tell whether a tick has been lost.
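
A minimal sketch of that comparison (read_hw_counter() is a hypothetical stand-in for whatever fine-grained counter the architecture provides, e.g. the decrementer or TSC):

    static unsigned long last_count;   /* counter value at the previous tick */

    /* Returns how many ticks actually elapsed since the last call;
     * a result greater than 1 means ticks were lost. */
    unsigned int ticks_elapsed(unsigned long counts_per_tick)
    {
        unsigned long now = read_hw_counter();   /* hypothetical */
        unsigned long delta = now - last_count;  /* unsigned math handles wrap */

        last_count = now;
        return delta / counts_per_tick;
    }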

The software timers

The list of dynamic timer handlers (the type you set with linux/timer.h) whose timers have expired is set up at the end of the clock timer interrupt, before it returns. The sequence is (approximately):

[arch dependent]:timer_interrupt( )
kernel/time/tick-common.c:tick_handle_periodic( )
kernel/time/tick-common.c:tick_periodic( )
kernel/timer.c:update_process_times( )
kernel/timer.c:run_local_timers( )
kernel/softirq.c:raise_softirq(TIMER_SOFTIRQ)

I have omitted the initializations that set the handler for the timer_interrupt to tick_handle_periodic, and the handler for TIMER_SOFTIRQ.

The call to raise_softirq(TIMER_SOFTIRQ) generates a software interrupt that is serviced immediately. The ISR for the interrupt runs the dynamic timer queue. The timer handlers run in softirq context, with hardware interrupts enabled. When the ISR returns, the scheduler is called. This means that if there are a lot of timers set, whatever process happens to be next in the run queue will be delayed.
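
For context, here is what arming one of these dynamic timers looks like with the linux/timer.h API (a sketch using the timer_setup interface from recent kernels; older code used init_timer/setup_timer, and the names and interval here are illustrative):

    #include <linux/jiffies.h>
    #include <linux/timer.h>

    static struct timer_list my_timer;

    static void my_timer_fn(struct timer_list *t)
    {
        /* runs in softirq context once TIMER_SOFTIRQ is serviced;
         * must not sleep */
    }

    static void arm_timer(void)
    {
        timer_setup(&my_timer, my_timer_fn, 0);
        mod_timer(&my_timer, jiffies + msecs_to_jiffies(100));  /* ~100 ms */
    }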

If there were lost ticks, then the execution of the timer handlers could be delayed; however, the delay does not depend on the context of execution prior to the clock timer interrupt.

Note about dynamic timer accuracy

"...the kernel cannot ensure that timer functions will start right at their expiration times. It can only ensure that they are executed either at the proper time or after with a delay of up to a few hundred milliseconds." Understanding the Linux Kernel, Bovet and Cesati, 3rd edition, O'reilly.

So, if you need better timer accuracy, you need to use highres timers (hrtimers).
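
For completeness, a minimal in-kernel sketch of the hrtimer API (the callback name and the 1 ms interval are illustrative):

    #include <linux/hrtimer.h>
    #include <linux/ktime.h>

    static struct hrtimer my_hrtimer;

    static enum hrtimer_restart my_hrtimer_fn(struct hrtimer *t)
    {
        /* runs in hard-interrupt context by default; keep it short */
        return HRTIMER_NORESTART;   /* one-shot; HRTIMER_RESTART re-arms */
    }

    static void arm_hrtimer(void)
    {
        hrtimer_init(&my_hrtimer, CLOCK_MONOTONIC, HRTIMER_MODE_REL);
        my_hrtimer.function = my_hrtimer_fn;
        hrtimer_start(&my_hrtimer, ms_to_ktime(1), HRTIMER_MODE_REL);
    }

Unlike the jiffies-based dynamic timers above, these are not rounded to the tick, which is where the extra accuracy comes from.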

References: Software interrupts and realtime

Embedded system interrupts

The hardware circuitry that constitutes the timer peripheral within the microcontroller is able to perform a comparison and toggle an output in CTC (Clear Timer on Compare match) mode. This logic is performed in hardware, without relying on the CPU to execute software instructions. Therefore, the CTC-mode compare and toggle occur in parallel with whatever the CPU happens to be executing.
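
To make that concrete, here is roughly what the setup looks like on an ATmega328P (an assumption on my part; the question doesn't name a part). After these few register writes, the pin toggles forever with no further CPU involvement:

    #include <avr/io.h>

    void ctc_toggle_init(void)
    {
        DDRB   |= _BV(PB1);                  /* OC1A pin as output */
        TCCR1A  = _BV(COM1A0);               /* toggle OC1A on compare match */
        TCCR1B  = _BV(WGM12) | _BV(CS11);    /* CTC mode, clk/8 prescaler */
        OCR1A   = 999;                       /* compare value sets the rate */
    }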

I don't understand what you mean by the timer "counts more". More as in more often or faster rate? More as in greater total counts? Regardless, I think the answer is no. The timer counts at the rate of the input clock that is driving it. In CTC mode the timer counts up to the comparison value that you have configured it for.

Will moving code into kernel space give more precise timing?

In kernel mode, you have the luxury of getting a DPC triggered in multiples of 100-nanosecond intervals without dealing with interrupts. A DPC cannot be preempted (i.e., interrupted by the thread scheduler), because the thread scheduler itself runs as a DPC. An interrupt can still preempt a DPC, though. So an interval value of 10 should do the trick for you to have a callback with utmost precision.
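
For illustration, this is roughly how a periodic timer-driven DPC is set up in a driver (a sketch; the routine name and the 10 ms period are mine):

    #include <ntddk.h>

    static KTIMER g_Timer;
    static KDPC   g_Dpc;

    /* Runs at DISPATCH_LEVEL in arbitrary thread context. */
    VOID MyDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
    {
        UNREFERENCED_PARAMETER(Dpc);
        UNREFERENCED_PARAMETER(Context);
        UNREFERENCED_PARAMETER(Arg1);
        UNREFERENCED_PARAMETER(Arg2);
        /* time-critical work goes here */
    }

    VOID StartPeriodicTimer(VOID)
    {
        LARGE_INTEGER due;
        due.QuadPart = -100000LL;   /* 10 ms in 100 ns units; negative = relative */

        KeInitializeTimer(&g_Timer);
        KeInitializeDpc(&g_Dpc, MyDpcRoutine, NULL);
        KeSetTimerEx(&g_Timer, due, 10, &g_Dpc);   /* then every 10 ms */
    }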

However, at DPC level you don't have access to many features, such as paged memory or a specific thread's memory space, because DPCs run in arbitrary thread context. It can be useful to defer processing to your own user-mode process's context using an APC, which has access to more features.

Kernel threads don't get any special treatment in terms of priority. They are the same as user threads from the scheduler's perspective. There are a couple more higher-priority levels kernel threads can get, but usually no kernel thread uses any of them. I don't think your bottleneck is thread priority. It doesn't matter how big your priority number is; having just one above everyone else is enough for you to become the "god thread" which receives top priority. Having the highest priority doesn't mean that you'll get continuous attention. The OS will still pause your thread to run others, so quantum starvation does not occur.

Another note on Windows preemption behavior: the Balance Set Manager temporarily boosts a thread's priority when the thread is signaled by an asynchronous event (GUI click, timer trigger, I/O completion) to allow the completion code to finish its processing with less preemption. Using an async timer handler should give enough boost to prevent preemption at least for a quantum. I wonder why your code does not fall into that window. However, it seems like you are not the only one having problems with timer precision: http://www.virtualdub.org/blog/pivot/entry.php?id=272

I agree with Paul on complexity of driver development, but as long as you have a good justification it's not rocket science, just more effort.

Process Scheduling from the Processor's Point of View

In brief, it is an interrupt which gives control back to the kernel. The interrupt may occur for any reason. Most of the time the kernel gets control due to a timer interrupt, or a key-press interrupt might wake up the kernel. An interrupt signaling completion of I/O with peripheral devices, or virtually anything else that changes the system state, may wake up the kernel.

More about interrupts:

Interrupts as such are divided into a top half and a bottom half. Bottom halves are for deferring work from interrupt context.

Top half: runs with interrupts disabled, hence it should be super fast and relinquish the CPU as soon as possible. It usually:

1) stores the interrupt state flag and disables interrupts (resets some pin on the processor),
2) communicates with the hardware, stores state information, and delegates the remaining responsibility to the bottom half,
3) restores the interrupt state flag and enables the interrupt (sets some pin on the processor).

Bottom half: handles the deferred work (delegated by the top half) and runs with interrupts enabled, hence it may take a while before completion.

Two mechanisms are used to implement bottom-half processing.

1) Tasklets
2) Work queues (a sketch of this variant follows)
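
For example, a work queue lets the top half hand the slow part off to process context (a sketch; the handler names are mine):

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>

    static void my_bottom_half(struct work_struct *work)
    {
        /* bottom half: runs later in process context with interrupts
         * enabled, and may sleep */
    }

    static DECLARE_WORK(my_work, my_bottom_half);

    static irqreturn_t my_top_half(int irq, void *dev)
    {
        /* top half: acknowledge the device, grab volatile state... */
        schedule_work(&my_work);   /* ...and defer the rest */
        return IRQ_HANDLED;
    }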


If the timer is the interrupt used to switch back to the kernel, is that interrupt a hardware interrupt?

The timer interrupt of interest in this discussion is the hardware timer interrupt.

Inside the kernel, the phrase "timer interrupt" may mean either (architecture-dependent) hardware timer interrupts or software timer interrupts.

Read this for a brief overview.

More about timers

Remeber "Timers" are an advanced topic, difficult to comprehend.

Is the interrupt a hardware interrupt? If it is a hardware interrupt, what is the frequency of the timer?

Read Chapter 10. Timers and Time Management

If the interval of the timer is shorter than the time slice, will the kernel give the CPU back to the same process that was running earlier?

It depends upon many factors, for example: the scheduler being used, the load on the system, process priorities, and things like that. The most popular scheduler, CFS, doesn't really depend upon the notion of a time slice for preemption! The next suitable process, as picked by CFS, will get the CPU time.

The relation between timer ticks, time slices, and context switching is not so straightforward.

Each process has its own (dynamically calculated) time slice. The kernel keeps track of the time slice used by the process.

On SMP, CPU-specific activities, such as monitoring the execution time of the currently running process, are handled by the interrupts raised by the local APIC timer. The local APIC timer sends an interrupt only to its own processor.

However, the default time slice is defined in include/linux/sched/rt.h
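
For reference, that definition looks like this in recent kernel sources:

    /* include/linux/sched/rt.h */
    /*
     * default timeslice is 100 msecs (used only for SCHED_RR tasks).
     * Timeslices get refilled after they expire.
     */
    #define RR_TIMESLICE (100 * HZ / 1000)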

Read this.

What is the intended/correct way to handle interrupts and use the RISC-V WFI instruction?

There should be one task/thread/process that is for idling, and it ought to look like your first bit of code.

Since the idle thread is set up to have the lowest priority, if the idle thread is running, that means that either there are no other threads to run or all other threads are blocked.

When an interrupt happens that unblocks some other thread, the interrupt service routine should resume that blocked thread instead of the interrupted idle thread.

Note that a thread that blocks on IO is itself also interrupted — it is interrupted via its own use of ecall.  That exception is a request for IO and causes this thread to block — it cannot be resumed until the IO request is satisfied.

Thus, a thread that is blocked on IO is suspended just the same as if it was interrupted — and a clock interrupt or IO interrupt is capable of resuming a different process than the one immediately interrupted, which will happen in the case that the idle process was running and some event that a process was waiting for happens.


What I do is use the scratch CSR to point to the context block for the currently running process/thread.  On interrupt, I save the fewest registers necessary to (start to) service the interrupt.  If the interrupt results in some other process/thread becoming runnable, then when resuming from the interrupt, I check process priorities, and may choose a context switch instead of resuming whatever was interrupted.  If I resume what was interrupted, it's a quick restore.  And to switch contexts, I finish saving the interrupted thread's CPU context, then resume another process/thread, switching the scratch register.

(For nested interrupts, I don't allow context switches on resume, but on interrupts after saving current context, I do set up the scratch csr to an interrupt stack of context blocks before re-enabling higher priority interrupts.  Also, as a very minor optimization we can assume that a custom written idle thread doesn't need anything but its pc saved/restored.)
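
In C, the decision logic on resume looks roughly like this (every name here, including struct context, service_interrupt, save_remaining_registers, and write_mscratch, is hypothetical glue around my scheme, not a standard API):

    struct context {
        unsigned long regs[31];   /* x1..x31, saved lazily */
        unsigned long pc;         /* saved from the mepc CSR */
        int priority;
    };

    extern struct context *service_interrupt(void);   /* may unblock a thread */
    extern void save_remaining_registers(struct context *c);
    extern void write_mscratch(struct context *c);

    static struct context *current_ctx;   /* mirrors the scratch CSR */

    /* Entered from the assembly trap stub after a minimal register save. */
    void trap_handler(void)
    {
        struct context *woken = service_interrupt();

        if (woken && woken->priority > current_ctx->priority) {
            save_remaining_registers(current_ctx);  /* finish the full save */
            current_ctx = woken;
            write_mscratch(woken);                  /* switch the scratch CSR */
        }
        /* the stub restores from whatever the scratch CSR now points at */
    }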

C#/.NET Timers and the Win32 Sleep function are both inexact

Sleep causes the OS to not schedule the thread until the time is up. Note that schedule != run.

Scheduling only adds the thread to a queue so it'll get run eventually, but not always immediately. For instance, if there's already a thread running, you still need to wait for its time slice to finish. If there are higher-priority threads in the queue, those could also run before it.

You should never count on Sleep() lasting exactly the amount of time you give it -- only at least that amount of time.

Timers basically operate the same way, but don't block a thread while they're waiting to be scheduled.

Also, you should be using Environment.TickCount or Stopwatch to measure elapsed time, not DateTime, which is affected by changes to the system time.
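
The answer is about .NET, but the "at least that long, never exactly" behavior is easy to observe against the raw Win32 API as well; a small C sketch using QueryPerformanceCounter (the same monotonic clock Stopwatch wraps on Windows):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        LARGE_INTEGER freq, t0, t1;
        QueryPerformanceFrequency(&freq);

        for (int i = 0; i < 5; i++) {
            QueryPerformanceCounter(&t0);
            Sleep(1);                       /* ask for 1 ms */
            QueryPerformanceCounter(&t1);
            printf("asked for 1 ms, got %.3f ms\n",
                   (t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart);
        }
        return 0;
    }

On the default system timer granularity (~15.6 ms), the printed values usually come out well above 1 ms unless something has raised the timer resolution, e.g. with timeBeginPeriod.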


