Is Gettimeofday() Guaranteed to Be of Microsecond Resolution

Is gettimeofday() guaranteed to be of microsecond resolution?

Maybe. But you have bigger problems. gettimeofday() can produce incorrect timings if there are processes on your system that adjust the clock (e.g., ntpd). On a "normal" Linux, though, I believe the resolution of gettimeofday() is 10 us. The clock it reads can jump forward and backward, and so, consequently, can your measured times, depending on the processes running on your system. This effectively makes the answer to your question no.

You should look into clock_gettime(CLOCK_MONOTONIC) for timing intervals. It suffers from far fewer issues caused by things like multi-core systems and externally driven clock adjustments.

Also, look into the clock_getres() function.
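
To illustrate, here is a minimal sketch of interval timing with clock_gettime(CLOCK_MONOTONIC), plus a clock_getres() call to print the resolution the system reports. This is only a sketch of the POSIX API; on older glibc you may need to link with -lrt:

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec res, start, stop;

    /* query the resolution the system reports for the monotonic clock */
    clock_getres(CLOCK_MONOTONIC, &res);
    printf("CLOCK_MONOTONIC resolution: %ld ns\n", res.tv_nsec);

    clock_gettime(CLOCK_MONOTONIC, &start);
    /* ... code to be timed ... */
    clock_gettime(CLOCK_MONOTONIC, &stop);

    long long elapsed_ns = (stop.tv_sec - start.tv_sec) * 1000000000LL
                         + (stop.tv_nsec - start.tv_nsec);
    printf("elapsed: %lld ns\n", elapsed_ns);
    return 0;
}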

What is the precision of the gettimeofday function?

The average number of microseconds that pass between consecutive calls to gettimeofday is usually less than one - on my machine it is somewhere between 0.05 and 0.15.

Modern CPUs usually run at GHz speeds - i.e. billions of cycles per second - so two consecutive instructions should take on the order of nanoseconds, not microseconds (obviously two calls to a function like gettimeofday are more complex than two simple opcodes, but it should still take on the order of tens of nanoseconds and not more).

But you are performing a division of ints - dividing (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) by MAX_TIMES - which in C returns an int as well: in this case 0, because the true average is less than one microsecond and integer division truncates.


To get the real measurement, divide by (double)MAX_TIMES (and print the result as a double):

printf("the average time of a gettimeofday function call is: %f us\n", (current_time[MAX_TIMES - 1].tv_usec - current_time[0].tv_usec) / (double)MAX_TIMES);

As a bonus - on Linux systems the reason gettimeofday is so fast (you might imagine it to be a more expensive operation, calling into the kernel and incurring the overhead of a syscall) is a special mechanism called the vDSO, which lets the kernel expose this information to user space so the call can complete without actually entering the kernel.

How is the microsecond time of linux gettimeofday() obtained and what is its accuracy?

How is the microsecond resolution/granularity of gettimeofday() accomplished?

Linux runs on many different hardware platforms, so the specifics differ. On a modern x86 platform Linux uses the Time Stamp Counter, also known as the TSC, which is driven by a multiple of a crystal oscillator running at 133.33 MHz. The crystal oscillator provides a reference clock to the processor, and the processor multiplies it by some multiple - for example, on a 2.93 GHz processor the multiple is 22.

The TSC was historically an unreliable source of time because implementations would stop the counter when the processor went to sleep, or because the multiple wasn't constant as the processor shifted multipliers to change performance states or throttled down when it got hot. Modern x86 processors provide a TSC that is constant, invariant, and non-stop. On these processors the TSC is an excellent high-resolution clock, and the Linux kernel determines an initial approximate frequency at boot time. The TSC provides microsecond resolution for the gettimeofday() system call and nanosecond resolution for the clock_gettime() system call.
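
Purely to illustrate what the kernel is building on, here is a minimal sketch that reads the TSC directly with the __rdtsc() compiler intrinsic (GCC/Clang on x86). This assumes an invariant TSC and is not how application code should normally measure time:

#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() intrinsic (GCC/Clang, x86 only) */

int main(void)
{
    unsigned long long t0 = __rdtsc();
    /* ... work to be measured ... */
    unsigned long long t1 = __rdtsc();

    /* this is a raw cycle count; converting it to wall time requires the TSC
       frequency, which the kernel calibrates at boot */
    printf("elapsed TSC ticks: %llu\n", t1 - t0);
    return 0;
}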

How is this synchronization accomplished?

Your first question was about how the Linux clock provides high resolution; this second question is about synchronization. This is the distinction between precision and accuracy. Most systems have a clock that is backed up by battery to keep the time of day while the system is powered down. As you might expect, this clock doesn't have high accuracy or precision, but it will get the time of day "in the ballpark" when the system starts. To get accuracy, most systems use an optional component to obtain the time from an external source on the network. Two common ones are

  1. Network Time Protocol
  2. Precision Time Protocol

These protocols define a master clock on the network (or a tier of clocks sourced by an atomic clock) and then measure network latencies to estimate the offset from the master clock. Once the offset from the master is determined, the system clock is disciplined to keep it accurate. This can be done by

  1. Stepping the clock (a relatively large, abrupt, and infrequent time adjustment), or
  2. Slewing the clock (gradually correcting the offset by slightly increasing or decreasing the clock frequency over a given time period)

The kernel provides the adjtimex system call to allow clock disciplining. For details on how modern Intel multi-core processors keep the TSC synchronized between cores see CPU TSC fetch operation especially in multicore-multi-processor environment.
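
As a rough sketch of what a user-space program can see, adjtimex can be called in read-only mode (modes = 0) to inspect the current disciplining state; the field names below come from struct timex in <sys/timex.h>:

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { .modes = 0 };   /* modes = 0: read the current state, change nothing */

    int state = adjtimex(&tx);
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }

    /* state is TIME_OK, TIME_INS, ...; offset/freq/esterror describe how the
       clock is currently being disciplined */
    printf("clock state: %d\n", state);
    printf("offset: %ld, frequency adjustment: %ld, estimated error: %ld us\n",
           (long)tx.offset, (long)tx.freq, (long)tx.esterror);
    return 0;
}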

The relevant kernel source files for clock adjustments are kernel/time.c and kernel/time/timekeeping.c.

What is the unit of gettimeofday()?

The timeval structure has tv_sec, which gives you the whole seconds, and tv_usec, which gives you the remaining fraction in microseconds (0 to 999999).

So the finest unit you can get from it is microseconds.
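
A minimal sketch printing the two fields separately (the %06ld padding reflects that tv_usec is the microsecond fraction of a second):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);

    /* tv_sec: whole seconds since the Epoch; tv_usec: remaining 0..999999 microseconds */
    printf("%lld.%06ld seconds since the Epoch\n",
           (long long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}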

For more information, see http://www.gnu.org/software/libc/manual/html_node/Elapsed-Time.html

Why is the microsecond timestamp repetitive using (a private) gettimeofday(), i.e. epoch?

The change in time you observe is from 0.414719 s to 0.430344 s. The difference is 15.625 ms, which is exactly what I would have expected: it is the system time increment on standard hardware. The fact that the number is represented in microseconds does not mean that it is incremented by 1 microsecond at a time. I've given a closer look here and here.
This is called the granularity of the system time.

Windows:

However, there is a way to improve this, a way to reduce the granularity: the Multimedia Timers. Particularly, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.

The code:

#include <windows.h>
#include <mmsystem.h>           // timeGetDevCaps, timeBeginPeriod - link with winmm.lib

#define TARGET_PERIOD 1         // 1-millisecond target interrupt period


TIMECAPS tc;
UINT wTimerRes;

// query the system's timer hardware capabilities;
// wPeriodMin and wPeriodMax are returned in the TIMECAPS structure
if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
// Error; application can't continue.
}

// finding the minimum possible interrupt period:

wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD), tc.wPeriodMax);
// and setting the minimum period:

timeBeginPeriod(wTimerRes);

This will force the system to run at its maximum interrupt frequency. As a consequence, the system time will also be updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.

When you need resolution/granularity beyond this, you'd have to look into QueryPerformanceCounter. But this is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS treats this frequency as a constant and will report the same value all the time. However, the frequency is produced by hardware, and the true frequency differs from the reported value: it has an offset and it shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
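
A minimal sketch of using this pair of calls, under the caveats above (QueryPerformanceFrequency is taken as the constant the OS reports):

#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;

    QueryPerformanceFrequency(&freq);   /* counts per second, reported as a constant */
    QueryPerformanceCounter(&start);
    /* ... work to be timed ... */
    QueryPerformanceCounter(&stop);

    double elapsed_us = (double)(stop.QuadPart - start.QuadPart)
                        * 1e6 / (double)freq.QuadPart;
    printf("elapsed: %.3f us\n", elapsed_us);
    return 0;
}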

Linux:

The situation looks somewhat different for Linux. See this to get an idea. Linux mixes information from the CMOS clock, using the function getnstimeofday (for seconds since the epoch), with information from a high-frequency counter (for the microseconds), using the function timekeeping_get_ns. This is not trivial, and it is questionable in terms of accuracy since the two sources are backed by different hardware. They are not phase-locked, so it is possible to get more or less than one million microseconds per second.

Get a timestamp in C in microseconds?

You need to add in the seconds, too:

unsigned long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec;

Note that this will only last for about 2^32 / 10^6 ≈ 4295 seconds, or roughly 71 minutes, though (on a typical 32-bit system, where unsigned long is 32 bits wide).
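
If the timestamp needs to survive longer than that, a common fix is to widen the arithmetic to 64 bits; a minimal sketch:

#include <stdint.h>
#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);

    /* 64-bit arithmetic avoids the ~71-minute wrap of a 32-bit unsigned long */
    uint64_t time_in_micros = (uint64_t)tv.tv_sec * 1000000ULL + (uint64_t)tv.tv_usec;
    printf("%llu\n", (unsigned long long)time_in_micros);
    return 0;
}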

High precision timing in userspace in Linux

clock_gettime allows you to get a nanosecond-precision time measured from thread start, process start, or the epoch, depending on which clock ID you pass it.
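
A minimal sketch reading the three kinds of clocks mentioned - CLOCK_REALTIME (since the epoch), plus CLOCK_PROCESS_CPUTIME_ID and CLOCK_THREAD_CPUTIME_ID (CPU time consumed by the process and by the calling thread):

#include <stdio.h>
#include <time.h>

static void show(const char *name, clockid_t id)
{
    struct timespec ts;
    if (clock_gettime(id, &ts) == 0)
        printf("%-26s %lld.%09ld\n", name, (long long)ts.tv_sec, ts.tv_nsec);
}

int main(void)
{
    show("CLOCK_REALTIME", CLOCK_REALTIME);                     /* wall time since the epoch */
    show("CLOCK_PROCESS_CPUTIME_ID", CLOCK_PROCESS_CPUTIME_ID); /* CPU time used by this process */
    show("CLOCK_THREAD_CPUTIME_ID", CLOCK_THREAD_CPUTIME_ID);   /* CPU time used by this thread */
    return 0;
}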

How to measure time in milliseconds using ANSI C?

There is no ANSI C function that provides better than 1-second time resolution, but the POSIX function gettimeofday provides microsecond resolution. The clock function only measures the amount of CPU time that a process has spent executing and is not accurate on many systems.

You can use this function like this:

#include <stdio.h>      /* printf */
#include <sys/time.h>   /* gettimeofday, timersub */
#include <unistd.h>     /* sleep */

struct timeval tval_before, tval_after, tval_result;

gettimeofday(&tval_before, NULL);

// Some code you want to time, for example:
sleep(1);

gettimeofday(&tval_after, NULL);

timersub(&tval_after, &tval_before, &tval_result);

printf("Time elapsed: %ld.%06ld\n", (long int)tval_result.tv_sec, (long int)tval_result.tv_usec);

This returns Time elapsed: 1.000870 on my machine.


