How to Get the Current Windows System-Wide Timer Resolution

How to get the current Windows system-wide timer resolution

Windows timer resolution is provided by the hidden API call:

NTSTATUS NtQueryTimerResolution(OUT PULONG MinimumResolution,
                                OUT PULONG MaximumResolution,
                                OUT PULONG ActualResolution);

NtQueryTimerResolution is exported by the native Windows NT library NTDLL.DLL.
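Since NtQueryTimerResolution is not declared in the public SDK headers, a common approach is to resolve it from ntdll.dll at run time. Below is a minimal sketch (error handling trimmed); all three values come back in 100 ns units:

#include <windows.h>
#include <stdio.h>

typedef LONG NTSTATUS;
typedef NTSTATUS (NTAPI *NtQueryTimerResolution_t)(PULONG MinimumResolution,
                                                   PULONG MaximumResolution,
                                                   PULONG ActualResolution);

int main(void) {
    // ntdll.dll is mapped into every process, so GetModuleHandle suffices.
    NtQueryTimerResolution_t query = (NtQueryTimerResolution_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtQueryTimerResolution");
    if (!query) return 1;

    ULONG min = 0, max = 0, actual = 0;
    if (query(&min, &max, &actual) == 0) {        // 0 == STATUS_SUCCESS
        // 100 ns units: 156250 corresponds to 15.625 ms.
        printf("Minimum (coarsest): %.3f ms\n", min / 10000.0);
        printf("Maximum (finest):   %.3f ms\n", max / 10000.0);
        printf("Actual (current):   %.3f ms\n", actual / 10000.0);
    }
    return 0;
}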

Common hardware platforms report 156,250 or 100,144 for ActualResolution (the values are in 100 ns units, so 156,250 corresponds to 15.625 ms); older platforms may report even larger numbers; newer systems, particularly when HPET (High Precision Event Timer) or a constant/invariant TSC is supported, may return 156,001 for ActualResolution.

Calls to timeBeginPeriod(n) are reflected in ActualResolution.
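A quick way to confirm this is to query ActualResolution before and after a timeBeginPeriod(1) call. The sketch below (same run-time lookup as above) does exactly that; note that recent Windows 10 builds changed the rules around timeBeginPeriod, as covered by the randomascii article linked further down this page:

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")     // for timeBeginPeriod / timeEndPeriod

typedef LONG NTSTATUS;
typedef NTSTATUS (NTAPI *QueryRes_t)(PULONG, PULONG, PULONG);

int main(void) {
    QueryRes_t query = (QueryRes_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtQueryTimerResolution");
    if (!query) return 1;

    ULONG min, max, before, after;
    query(&min, &max, &before);
    timeBeginPeriod(1);               // request 1 ms resolution
    query(&min, &max, &after);
    printf("before: %lu, after: %lu (100 ns units)\n", before, after);
    timeEndPeriod(1);                 // always pair with timeEndPeriod
    return 0;
}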


Changing Qt's timer resolution on Windows

From the Qt documentation about QObject::startTimer(int interval, Qt::TimerType timerType = Qt::CoarseTimer):

Note that QTimer's accuracy depends on the underlying operating system
and hardware. The timerType argument allows you to customize the
accuracy of the timer. See Qt::TimerType for information on the
different timer types. Most platforms support an accuracy of 20
milliseconds; some provide more. If Qt is unable to deliver the
requested number of timer events, it will silently discard some.

I think you can set the timer type to Qt::PreciseTimer. On Windows, Qt will use the Windows multimedia timer facility (if available) for this type, which has a resolution of 1 millisecond.
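A minimal sketch of what that looks like (assuming Qt 5 or later; the 1 ms interval and the self-terminating lambda are just for illustration):

#include <QCoreApplication>
#include <QTimer>
#include <QElapsedTimer>
#include <QDebug>

int main(int argc, char *argv[]) {
    QCoreApplication app(argc, argv);

    QElapsedTimer clock;
    clock.start();

    QTimer timer;
    timer.setTimerType(Qt::PreciseTimer);   // ask Qt for millisecond accuracy
    timer.setInterval(1);                   // 1 ms between timeout signals
    QObject::connect(&timer, &QTimer::timeout, [&]() {
        qDebug() << "tick at" << clock.elapsed() << "ms";
        if (clock.elapsed() > 20)
            app.quit();                     // stop after ~20 ms of ticks
    });
    timer.start();
    return app.exec();
}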

Precise capture loop

You see a 15 ms delay when sleeping for 15 ms, and a 30 ms delay when sleeping for 16 ms, because SpinWait uses Environment.TickCount under the hood, which relies on the system clock; on your system that clock apparently has a 15 ms resolution.
You can change the system-wide timer resolution by using timeBeginPeriod.
See
See

  • https://learn.microsoft.com/en-us/windows/win32/api/timeapi/nf-timeapi-timebeginperiod

and

  • https://randomascii.wordpress.com/2020/10/04/windows-timer-resolution-the-great-rule-change/

for more in-depth information about the clock resolution. You can check your current system-wide timer resolution with clockres from Sysinternals.

  • https://learn.microsoft.com/en-us/sysinternals/downloads/clockres

See example output:

C:\>clockres

Clockres v2.1 - Clock resolution display utility
Copyright (C) 2016 Mark Russinovich
Sysinternals

Maximum timer interval: 15.625 ms
Minimum timer interval: 0.500 ms
Current timer interval: 15.625 ms

When a WPF application is running (e.g. Visual Studio)

Maximum timer interval: 15.625 ms
Minimum timer interval: 0.500 ms
Current timer interval: 1.000 ms

then you get a 1 ms resolution, because every WPF application changes the clock resolution to 1 ms. Some people also use this as a workaround to "fix" the issue.
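To observe the effect directly, here is a small sketch (names are mine) that times an actual Sleep(1) with QueryPerformanceCounter, before and after requesting a 1 ms resolution:

#include <windows.h>
#include <stdio.h>
#pragma comment(lib, "winmm.lib")     // for timeBeginPeriod / timeEndPeriod

// Measure how long Sleep(1) really takes, in milliseconds.
static double measure_sleep_ms(void) {
    LARGE_INTEGER f, t0, t1;
    QueryPerformanceFrequency(&f);    // counts per second
    QueryPerformanceCounter(&t0);
    Sleep(1);
    QueryPerformanceCounter(&t1);
    return (t1.QuadPart - t0.QuadPart) * 1000.0 / f.QuadPart;
}

int main(void) {
    printf("Sleep(1) at default resolution: %.2f ms\n", measure_sleep_ms());
    timeBeginPeriod(1);               // request 1 ms timer resolution
    printf("Sleep(1) at 1 ms resolution:    %.2f ms\n", measure_sleep_ms());
    timeEndPeriod(1);
    return 0;
}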

How can I get the Windows system time with millisecond resolution?

GetTickCount will not get it done for you.

Look into QueryPerformanceFrequency / QueryPerformanceCounter. The only gotcha here is CPU frequency scaling (on systems without an invariant TSC), so do your research.
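For interval timing, the pattern looks like this (a sketch; note that QueryPerformanceCounter measures elapsed time, not wall-clock time, so for a wall-clock stamp you would read the system time once and add QPC deltas to it):

#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq); // counts per second, fixed at boot
    QueryPerformanceCounter(&start);

    Sleep(50);                        // stand-in for the work being timed

    QueryPerformanceCounter(&now);
    double ms = (now.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
    printf("elapsed: %.3f ms\n", ms);
    return 0;
}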

Limits of Windows Queue Timers

The Windows default timer resolution is 15.625 ms. That is the granularity you observe.
However, the system timer resolution can be modified as described by MSDN: Obtaining and Setting Timer Resolution. This allows the granularity to be reduced to about 1 ms on most platforms. The first answer on this page shows how to obtain the current system timer resolution.

The hidden function NtSetTimerResolution(...) even allows the timer resolution to be set to 0.5 ms when supported by the platform. See the SO answer to the question "How to setup timer resolution to 0.5 ms?"
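NtSetTimerResolution is undocumented, but its widely reported signature takes the desired resolution in 100 ns units plus a BOOLEAN that acquires (TRUE) or releases (FALSE) the request. A hedged sketch:

#include <windows.h>
#include <stdio.h>

typedef LONG NTSTATUS;
typedef NTSTATUS (NTAPI *NtSetTimerResolution_t)(ULONG DesiredResolution,
                                                 BOOLEAN SetResolution,
                                                 PULONG CurrentResolution);

int main(void) {
    NtSetTimerResolution_t setRes = (NtSetTimerResolution_t)GetProcAddress(
        GetModuleHandleW(L"ntdll.dll"), "NtSetTimerResolution");
    if (!setRes) return 1;

    ULONG current = 0;
    setRes(5000, TRUE, &current);     // 5000 x 100 ns = 0.5 ms, TRUE = acquire
    printf("current resolution: %.3f ms\n", current / 10000.0);

    setRes(5000, FALSE, &current);    // FALSE = release the request again
    return 0;
}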

...a different config?
It depends on the underlying hardware and OS version. Check the timer resolution with the tools mentioned above.

...all as fast or faster than my machine running 64bit Win7)?
Yes, you can. However, other applications are also allowed to set the timer resolution; Google Chrome is a known example. Such an application may also change the timer resolution only temporarily. Therefore you can never rely on the timer resolution being constant across platforms or over time.
The only way to be sure that the timer resolution is controlled by your application is to set the timer granularity to the minimum of 1 ms (or 0.5 ms) yourself.

Note: Reducing the system timer granularity causes the system's interrupt frequency to increase. It reduces the thread quantum (time slice) and increases power consumption.

Windows Timer Resolution vs Application Priority vs Processor scheduling

timeBeginPeriod() is the documented API for doing this. It is documented to affect the accuracy of Sleep(). Dave Cutler probably did not enjoy implementing it, but allowing Win 3.1 code to port made it necessary. The multimedia API back then was necessary to keep anemic hardware with small buffers going without stuttering.

Very crude, but there is no other good way to do it in the kernel. The normal state for a processor core is to be stopped on a HLT instruction, consuming (almost) no power; the only way to revive it is with a hardware interrupt. Which is what this does: it cranks up the clock interrupt rate. Normally it ticks 64 times per second; you can jack that up to 1000 with timeBeginPeriod, and to 2000 with the native API.

And yes, it is pretty bad for power consumption. The clock interrupt handler also activates the thread scheduler, a fairly unsubtle chunk of code. That is the reason a Sleep() call can now wake up at (almost) the clock interrupt rate. This was tinkered with in Win8.1, by the way; the only thing I noticed about the changes is that it is not quite as responsive anymore, and a 1 msec rate can cause up to 2 msec delays.

Chrome is indeed notorious for ab/using the heck out of it. I always assumed that it provided a competitive edge for a company that does big business in mobile operating systems and battery-powered devices. The guy that started this web site noticed something was wrong. The more responsible thing for a browser to do is to bump the rate up to 10 msec, which is necessary to get accurate GIF animation. Multimedia playback does not need it anymore.

This otherwise has no effect at all on scheduling priorities. One detail I did not check is whether the thread quantum changes correspondingly (the number of ticks a thread may own a core before being evicted; 3 for a workstation). I suspect it does.


