Linux' hrtimer - microsecond precision?
You can do what you want from user space:

- Use clock_gettime() with CLOCK_REALTIME to get the time-of-day with nanosecond resolution.
- Use nanosleep() to yield the CPU until you are close to the time you need to execute your task (it has at least millisecond resolution).
- Use a spin loop with clock_gettime() until you reach the desired time.
- Execute your task.
The clock_gettime() function is implemented as a vDSO call in recent kernels on modern x86 processors - it takes 20-30 nanoseconds to get the time-of-day with nanosecond resolution, so you should be able to call clock_gettime() over 30 times per microsecond. Using this method your task should dispatch within 1/30th of a microsecond of the intended time.
Is gettimeofday() guaranteed to be of microsecond resolution?
Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 us. Consequently, it can jump forward and backward in time, based on the processes running on your system. This effectively makes the answer to your question no.
You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from fewer issues caused by things like multi-core systems and external clock adjustments. Also, look into the clock_getres() function.
How to create a high resolution timer in Linux to measure program performance?
Check out clock_gettime, which is a POSIX interface to high-resolution timers.

If, having read the manpage, you're left wondering about the difference between CLOCK_REALTIME and CLOCK_MONOTONIC, see Difference between CLOCK_REALTIME and CLOCK_MONOTONIC?
See the following page for a complete example: http://www.guyrutenberg.com/2007/09/22/profiling-code-using-clock_gettime/
#include <iostream>
#include <time.h>
using namespace std;

timespec diff(timespec start, timespec end);

int main()
{
    timespec time1, time2;
    // volatile keeps the compiler from optimizing the busy loop away;
    // unsigned so the repeated doubling wraps instead of overflowing
    // (the original used an uninitialized signed int, which is undefined behavior).
    volatile unsigned long temp = 1;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time1);
    for (int i = 0; i < 242000000; i++)
        temp += temp;
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &time2);
    timespec elapsed = diff(time1, time2);
    cout << elapsed.tv_sec << ":" << elapsed.tv_nsec << endl;
    return 0;
}

// Subtract two timespecs, borrowing a second when the
// nanosecond difference is negative.
timespec diff(timespec start, timespec end)
{
    timespec temp;
    if ((end.tv_nsec - start.tv_nsec) < 0) {
        temp.tv_sec = end.tv_sec - start.tv_sec - 1;
        temp.tv_nsec = 1000000000 + end.tv_nsec - start.tv_nsec;
    } else {
        temp.tv_sec = end.tv_sec - start.tv_sec;
        temp.tv_nsec = end.tv_nsec - start.tv_nsec;
    }
    return temp;
}