How Is the Microsecond Time of Linux Gettimeofday() Obtained and What Is Its Accuracy

How is the microsecond time of Linux gettimeofday() obtained and what is its accuracy?

How is the microsecond resolution/granularity of gettimeofday() accomplished?

Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which is driven by a multiple of a crystal oscillator frequency, for example 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it up: on a 2.93 GHz processor the multiplier is 22. The TSC was historically an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiplier changed as the processor shifted performance states or throttled down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high-resolution clock, and the Linux kernel determines an initial approximate frequency at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.
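
For illustration, here is a minimal sketch of reading the TSC directly with the __rdtsc() intrinsic (GCC/Clang on x86) and converting ticks to microseconds. The 2.93 GHz frequency is just the example figure from above, hard-coded as an assumption; a real program would use the frequency the kernel calibrated at boot.

#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>   /* __rdtsc() with GCC/Clang on x86 */

int main(void)
{
    const double tsc_hz = 2.93e9;   /* assumed constant, invariant TSC frequency */
    uint64_t start = __rdtsc();
    /* ... work being timed ... */
    uint64_t end = __rdtsc();
    printf("elapsed: %.3f us\n", (end - start) / tsc_hz * 1e6);
    return 0;
}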

How is this synchronization accomplished?

Your first question was about how the Linux clock provides high resolution; this second question is about synchronization, and the distinction is the one between precision and accuracy. Most systems have a clock that is backed up by a battery to keep the time of day when the system is powered down. As you might expect, this clock doesn't have high accuracy or precision, but it will get the time of day "in the ballpark" when the system starts. To get accuracy, most systems use an optional component to obtain the time from an external source on the network. Two common ones are

  1. Network Time Protocol
  2. Precision Time Protocol

These protocols define a master clock on the network (or a tier of clocks sourced by an atomic clock) and then measure network latencies to estimate the offset from the master clock, as sketched after the list below. Once the offset from the master is determined, the system clock is disciplined to keep it accurate. This can be done by

  1. Stepping the clock (a relatively large, abrupt, and infrequent time adjustment), or
  2. Slewing the clock (adjusting the clock frequency up or down slightly so that the error is corrected gradually over a given time period)
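
The offset estimation mentioned above boils down to the classic four-timestamp exchange described in RFC 5905. A minimal sketch of that arithmetic, assuming the four timestamps have already been collected and converted to seconds (the values below are purely illustrative):

#include <stdio.h>

int main(void)
{
    /* t1 = client transmit, t2 = server receive,
       t3 = server transmit, t4 = client receive (illustrative values only) */
    double t1 = 100.000100, t2 = 100.015000;
    double t3 = 100.015200, t4 = 100.000700;

    double offset = ((t2 - t1) + (t3 - t4)) / 2.0;  /* estimated clock offset */
    double delay  = (t4 - t1) - (t3 - t2);          /* round-trip network delay */

    printf("offset = %.6f s, delay = %.6f s\n", offset, delay);
    return 0;
}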

The kernel provides the adjtimex() system call to allow clock disciplining. For details on how modern Intel multi-core processors keep the TSC synchronized between cores, see "CPU TSC fetch operation especially in multicore-multi-processor environment".
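
As a minimal sketch, adjtimex() can be called read-only (modes = 0) to inspect the current discipline state without changing anything:

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { .modes = 0 };   /* read-only query */
    int state = adjtimex(&tx);
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }
    printf("clock state: %d\n", state);                    /* e.g. TIME_OK */
    printf("frequency adjustment: %ld (scaled ppm)\n", tx.freq);
    printf("remaining offset: %ld\n", tx.offset);          /* us, or ns with STA_NANO */
    return 0;
}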

The relevant kernel source files for clock adjustments are kernel/time.c and kernel/time/timekeeping.c.

Is gettimeofday() guaranteed to be of microsecond resolution?

Maybe. But you have bigger problems. gettimeofday() can result in incorrect timings if there are processes on your system that change the timer (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 us. Consequently, it can jump forward and backward in time, depending on the processes running on your system. This effectively makes the answer to your question no.

You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues due to things like multi-core systems and external clock settings.

Also, look into the clock_getres() function.
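
A minimal sketch of both calls: clock_getres() to ask what resolution CLOCK_MONOTONIC reports, and clock_gettime() to time an interval (on older glibc you may need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, start, end;

    clock_getres(CLOCK_MONOTONIC, &res);
    printf("reported resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... work being timed ... */
    clock_gettime(CLOCK_MONOTONIC, &end);

    double elapsed_us = (end.tv_sec - start.tv_sec) * 1e6
                      + (end.tv_nsec - start.tv_nsec) / 1e3;
    printf("elapsed: %.3f us\n", elapsed_us);
    return 0;
}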

What is the precision of the gettimeofday function?

The average number of microseconds between consecutive calls to gettimeofday() is usually less than one; on my machine it is somewhere between 0.05 and 0.15.

Modern CPUs usually run at GHz speeds, i.e. billions of cycles per second, so two consecutive instructions should take on the order of nanoseconds, not microseconds (obviously two calls to a function like gettimeofday() are more complex than two simple opcodes, but they should still take on the order of tens of nanoseconds, not more).

But you are performing integer division, dividing (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) by MAX_TIMES, which in C yields an int as well - in this case 0.


To get the real measurement, divide by (double)MAX_TIMES (and print the result as a double):

printf("the average time of a gettimeofday function call is: %f us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / (double)MAX_TIMES);

As a bonus: on Linux systems the reason gettimeofday() is so fast (you might imagine it to be a more complex function that calls into the kernel and incurs the overhead of a syscall) is a special mechanism called the vDSO, which lets the kernel map the clock data into user space so the call can complete without switching into the kernel at all.
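
If you want to see this for yourself, here is a minimal sketch that times a large number of gettimeofday() calls; on systems where the call goes through the vDSO the average is typically a few tens of nanoseconds, though the exact figure will vary by machine:

#include <stdio.h>
#include <sys/time.h>
#include <time.h>

int main(void)
{
    enum { N = 1000000 };
    struct timeval tv;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < N; i++)
        gettimeofday(&tv, NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    double total_ns = (end.tv_sec - start.tv_sec) * 1e9
                    + (end.tv_nsec - start.tv_nsec);
    printf("average gettimeofday() cost: %.1f ns\n", total_ns / N);
    return 0;
}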

What is the unit of gettimeofday()?

The timeval structure has tv_sec, which gives you the absolute time in seconds, and tv_usec, which gives you the remaining fraction in microseconds.

So you get the resolution in microseconds.

For more information, see http://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html
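
For example, a minimal sketch of combining the two fields to compute an elapsed interval in microseconds:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);
    /* ... work being timed ... */
    gettimeofday(&end, NULL);

    long elapsed_us = (end.tv_sec - start.tv_sec) * 1000000L
                    + (end.tv_usec - start.tv_usec);
    printf("elapsed: %ld us\n", elapsed_us);
    return 0;
}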

Microsecond accurate (or better) process timing in Linux

If you are looking for this level of timing resolution, you are probably trying to do some micro-optimization. If that's the case, you should look at PAPI. Not only does it provide both wall-clock and virtual (process only) timing information, it also provides access to CPU event counters, which can be indispensable when you are trying to improve performance.

http://icl.cs.utk.edu/papi/
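
A minimal sketch of PAPI's wall-clock and virtual (process-only) microsecond timers, assuming PAPI is installed and the program is linked with -lpapi:

#include <stdio.h>
#include <papi.h>

int main(void)
{
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI init failed\n");
        return 1;
    }

    long long real0 = PAPI_get_real_usec();   /* wall-clock microseconds */
    long long virt0 = PAPI_get_virt_usec();   /* process-only microseconds */
    /* ... work being timed ... */
    long long real1 = PAPI_get_real_usec();
    long long virt1 = PAPI_get_virt_usec();

    printf("wall clock: %lld us, virtual: %lld us\n",
           real1 - real0, virt1 - virt0);
    return 0;
}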

How does clock_gettime achieve nanosecond resolution?

Modern CPUs run at clock frequencies of several GHz. A frequency of 1 GHz corresponds to a clock period of 1 ns, so running a (wide) counter at 1 GHz gives a time resolution of nanoseconds. This does not mean that the time is as accurate as it is displayed; the value merely has that high a resolution.
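
To make the arithmetic concrete, here is a minimal sketch of the tick-to-nanosecond conversion; the 3 GHz counter frequency and the tick count are assumed example values, not anything read from hardware:

#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

int main(void)
{
    uint64_t freq_hz = 3000000000ULL;  /* assumed counter frequency: 3 GHz */
    uint64_t ticks   = 7500;           /* assumed raw counter delta */

    uint64_t ns = ticks * 1000000000ULL / freq_hz;  /* 1 GHz <=> 1 ns period */
    printf("%" PRIu64 " ticks at %" PRIu64 " Hz = %" PRIu64 " ns\n",
           ticks, freq_hz, ns);
    return 0;
}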

Granularity in time function

Call it in a tight loop and note the difference between the current value and the previous one whenever it changes. Something like the following:

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval timevals[2];
    int cur = 0;

    /* Prime both slots so the first comparison is not against uninitialized data. */
    gettimeofday(&timevals[0], NULL);
    timevals[1] = timevals[0];

    while (1) {
        gettimeofday(&timevals[cur], NULL);
        /* Print the step whenever the microsecond field changes. */
        int diff = timevals[cur].tv_usec - timevals[!cur].tv_usec;
        if (diff)
            printf("%d\n", diff);
        cur = !cur;   /* alternate between the two slots */
    }
}

On my system it seems the granularity is around 2 usec (about 50/50 one or two microseconds, with outliers in the hundreds or thousands that are likely due to task switching).

What are the clock sources for Linux time functions (time, gettimeofday, ...)?

It varies depending on many factors, including what hardware is available and whether time synchronization is in use. On typical modern hardware, the TSC or the HPET is read and scaled according to factors maintained by the kernel's timekeeping system.
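
On such systems you can ask the kernel which clocksource it actually chose; a minimal sketch that reads the sysfs file exposed by the timekeeping core:

#include <stdio.h>

int main(void)
{
    char buf[64];
    FILE *f = fopen(
        "/sys/devices/system/clocksource/clocksource0/current_clocksource", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }
    if (fgets(buf, sizeof buf, f))
        printf("current clocksource: %s", buf);   /* e.g. "tsc" or "hpet" */
    fclose(f);
    return 0;
}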


