Sub-Millisecond Precision Timing in C or C++

Sub-millisecond precision timing in C or C++

When dealing with off-the-shelf operating systems, accurate timing is an extremely difficult and involved task. If you really need guaranteed timing, the only real option is a full real-time operating system. However, if "almost always" is good enough, here are a few tricks you can use that will provide good accuracy on commodity Windows and Linux.

  1. Use a shielded CPU. Basically, this means setting the IRQ affinity so interrupts avoid a selected CPU, and setting the processor affinity mask of every other process on the machine to exclude that CPU. In your app, set the CPU affinity to run only on the shielded CPU. Effectively, this should prevent the OS from ever suspending your app, as it will always be the only runnable process for that CPU.
  2. Never let your process willingly yield control to the OS (which is inherently non-deterministic for non-real-time OSes). No memory allocation, no sockets, no mutexes, nothing. Use RDTSC to spin in a while loop waiting for your target time to arrive. It will consume 100% of a CPU, but it is the most accurate way to go.
  3. If number 2 is a bit too draconian, you can 'sleep short' and then burn the CPU up to your target time. Here, you take advantage of the fact that the OS schedules the CPU at set intervals, usually 100 or 1000 times per second depending on your OS and configuration (on Windows you can raise the default scheduling rate from 100/s to 1000/s using the multimedia timer API). This can be a little hard to get right, but essentially you need to determine when the OS scheduling periods occur and calculate the one prior to your target wake time. Sleep for that duration and then, upon waking, spin on RDTSC (if you're on a single CPU; use QueryPerformanceCounter or the Linux equivalent if not) until your target time arrives. Occasionally OS scheduling will cause you to miss, but generally speaking this mechanism works pretty well. A rough sketch of this sleep-then-spin approach follows the list.
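
A portable sketch of option 3 using std::chrono, for illustration only: std::chrono::steady_clock stands in for the raw RDTSC/QueryPerformanceCounter reads described above, and the 2 ms slack is an assumed value for scheduler wake-up jitter that would need tuning per machine.

#include <chrono>
#include <thread>

// Sleep through most of the wait, then busy-wait the final stretch.
// steady_clock is a portable stand-in for RDTSC/QueryPerformanceCounter.
void spin_wait_until(std::chrono::steady_clock::time_point target) {
    using namespace std::chrono;

    // Slack for the scheduler waking us up late; tune for your machine/OS.
    constexpr auto slack = milliseconds(2);
    if (target - steady_clock::now() > slack)
        std::this_thread::sleep_until(target - slack);

    // Burn CPU until the target so the wake-up lands as close as possible.
    while (steady_clock::now() < target) {
        // spin
    }
}

The tighter the slack, the less CPU you burn, but the higher the risk that the scheduler wakes the thread after the target has already passed.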

It seems like a simple question, but attaining 'good' timing gets exponentially more difficult the tighter your timing constraints are. Good luck!

Millisecond timing C++

In Linux, take a look at clock_gettime(). It can essentially give you the time elapsed since an arbitrary point, in nanoseconds (which should be good enough for you).

Note that it is specified by the POSIX standard, so you should be fine using it on Unix-derived systems.
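
A minimal sketch of interval timing with it (CLOCK_MONOTONIC chosen here; error checking omitted, and older systems may need -lrt, as mentioned further down):

#include <cstdio>
#include <ctime>

int main() {
    timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... work being measured ...
    clock_gettime(CLOCK_MONOTONIC, &end);

    // Combine seconds and nanoseconds into a single nanosecond count.
    long long ns = (end.tv_sec - start.tv_sec) * 1000000000LL
                 + (end.tv_nsec - start.tv_nsec);
    std::printf("elapsed: %lld ns\n", ns);
}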

C# sub millisecond timing

You could always try using QueryPerformanceCounter, or the Stopwatch class for a managed approach.
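
For reference, a minimal native C++ sketch of the QueryPerformanceCounter route (Windows only; error checking omitted). The managed Stopwatch class exposes the same high-resolution counter.

#include <windows.h>
#include <cstdio>

int main() {
    LARGE_INTEGER freq, start, end;
    QueryPerformanceFrequency(&freq);  // counter ticks per second

    QueryPerformanceCounter(&start);
    // ... work being measured ...
    QueryPerformanceCounter(&end);

    // Convert ticks to microseconds using the queried frequency.
    double us = (double)(end.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
    std::printf("elapsed: %.3f us\n", us);
}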

Getting current time with millisecond precision using put_time in C++

Here's one example using some C++11 <chrono> features. If you can use C++20, check out the new <chrono> features for more goodies, or take a look at Howard Hinnant's date library.

#include <chrono>
#include <cstdint>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <string>
#include <type_traits>

// A C++11 constexpr function template for counting decimals needed for
// selected precision.
template<std::size_t V, std::size_t C = 0,
         typename std::enable_if<(V < 10), int>::type = 0>
constexpr std::size_t log10ish() {
    return C;
}

template<std::size_t V, std::size_t C = 0,
         typename std::enable_if<(V >= 10), int>::type = 0>
constexpr std::size_t log10ish() {
    return log10ish<V / 10, C + 1>();
}

// A class to support using different precisions, chrono clocks and formats
template<class Precision = std::chrono::seconds,
         class Clock = std::chrono::system_clock>
class log_watch {
public:
    // some convenience typedefs and "decimal_width" for sub second precisions
    using precision_type = Precision;
    using ratio_type = typename precision_type::period;
    using clock_type = Clock;
    static constexpr auto decimal_width = log10ish<ratio_type{}.den>();

    static_assert(ratio_type{}.num <= ratio_type{}.den,
                  "Only second or sub second precision supported");
    static_assert(ratio_type{}.num == 1, "Unsupported precision parameter");

    // default format: "%Y-%m-%dT%H:%M:%S"
    log_watch(const std::string& format = "%FT%T") : m_format(format) {}

    template<class P, class C>
    friend std::ostream& operator<<(std::ostream&, const log_watch<P, C>&);

private:
    std::string m_format;
};

template<class Precision, class Clock>
std::ostream& operator<<(std::ostream& os, const log_watch<Precision, Clock>& lw) {
    // get current system clock
    auto time_point = Clock::now();

    // extract std::time_t from time_point
    std::time_t t = Clock::to_time_t(time_point);

    // output the part supported by std::tm
    os << std::put_time(std::localtime(&t), lw.m_format.c_str());

    // only involve chrono duration calc for displaying sub second precisions
    if(lw.decimal_width) { // if constexpr( ... in C++17
        // get duration since epoch
        auto dur = time_point.time_since_epoch();

        // extract the sub second part from the duration since epoch
        auto ss =
            std::chrono::duration_cast<Precision>(dur) % std::chrono::seconds{1};

        // output the sub second part
        os << std::setfill('0') << std::setw(lw.decimal_width) << ss.count();
    }

    return os;
}

int main() {
    // default precision, clock and format
    log_watch<> def_cp; // <= C++14
    // log_watch def;   // >= C++17

    // alt. precision using alternative formats
    log_watch<std::chrono::milliseconds> milli("%X,");
    log_watch<std::chrono::microseconds> micro("%FT%T.");
    // alt. precision and clock - only supported if the clock is an alias for
    // system_clock
    log_watch<std::chrono::nanoseconds,
              std::chrono::high_resolution_clock> nano("%FT%T.");

    std::cout << "def_cp: " << def_cp << "\n";
    std::cout << "milli : " << milli << "\n";
    std::cout << "micro : " << micro << "\n";
    std::cout << "nano : " << nano << "\n";
}

Example output:

def_cp: 2019-11-21T13:44:07
milli : 13:44:07,871
micro : 2019-11-21T13:44:07.871939
nano : 2019-11-21T13:44:07.871986585
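
For comparison, a C++20 sketch (subject to compiler/library support; unlike the put_time version above, the output is in UTC unless you go through std::chrono::zoned_time):

#include <chrono>
#include <format>
#include <iostream>

int main() {
    using namespace std::chrono;

    // Truncate to milliseconds so the printed seconds carry three decimals.
    auto now = floor<milliseconds>(system_clock::now());

    std::cout << now << '\n';                           // operator<< handles time points directly
    std::cout << std::format("{:%FT%T}", now) << '\n';  // e.g. 2019-11-21T13:44:07.871
}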

How to print time difference in accuracy of milliseconds and nanoseconds from C in Linux?

First, read the time(7) man page.

Then, you can use the clock_gettime(2) syscall (you may need to link with -lrt to get it on older systems).

So you could try

struct timespec tstart = {0, 0}, tend = {0, 0};
clock_gettime(CLOCK_MONOTONIC, &tstart);
some_long_computation();
clock_gettime(CLOCK_MONOTONIC, &tend);
printf("some_long_computation took about %.5f seconds\n",
       ((double)tend.tv_sec + 1.0e-9 * tend.tv_nsec) -
       ((double)tstart.tv_sec + 1.0e-9 * tstart.tv_nsec));

Don't expect the hardware timers to have nanosecond accuracy, even if they report results with nanosecond resolution. And don't try to measure time durations of less than several milliseconds this way: the hardware is not accurate enough at that scale. You may also want to use clock_getres to query the resolution of a given clock; a short sketch follows.
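
For example, a quick sketch for checking a clock's reported resolution:

#include <cstdio>
#include <ctime>

int main() {
    timespec res;
    // Ask the kernel how fine-grained CLOCK_MONOTONIC actually is.
    if (clock_getres(CLOCK_MONOTONIC, &res) == 0)
        std::printf("CLOCK_MONOTONIC resolution: %ld ns\n",
                    (long)(res.tv_sec * 1000000000L + res.tv_nsec));
}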

clock() precision in time.h

There are a number of more accurate timers in POSIX.

  • gettimeofday() - officially obsolescent, but very widely available; microsecond resolution.
  • clock_gettime() - the replacement for gettimeofday() (but not necessarily so widely available; on an old version of Solaris, requires -lposix4 to link), with nanosecond resolution.

There are other sub-second timers of greater or lesser antiquity, portability, and resolution, including:

  • ftime() - millisecond resolution (marked 'legacy' in POSIX 2004; not in POSIX 2008).
  • clock() - which you already know about. Note that it measures CPU time, not elapsed (wall clock) time.
  • times() - resolution of one clock tick (CLK_TCK or HZ ticks per second). Note that it measures CPU time for the parent and its child processes.

Do not use ftime() or times() unless there is nothing better. The ultimate fallback, but not meeting your immediate requirements, is

  • time() - one second resolution.

The clock() function reports in units of CLOCKS_PER_SEC, which is required to be 1,000,000 by POSIX, but the increment may happen less frequently (100 times per second was one common frequency). The return value must be divided by CLOCKS_PER_SEC to get time in seconds.
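
The CPU-time vs. wall-clock distinction is easy to see with a sketch like this (assuming a POSIX sleep()): the sleeping second shows up in time() but barely in clock().

#include <cstdio>
#include <ctime>
#include <unistd.h>

int main() {
    std::clock_t c0 = std::clock();
    std::time_t t0 = std::time(nullptr);

    sleep(1);  // consumes wall-clock time but almost no CPU time

    std::printf("clock(): %.3f s of CPU time\n",
                (double)(std::clock() - c0) / CLOCKS_PER_SEC);
    std::printf("time():  %ld s of wall-clock time\n",
                (long)(std::time(nullptr) - t0));
}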

Time in milliseconds in C

Yes, this program has likely run for less than a millisecond. Try using microsecond resolution with struct timeval.

e.g.:

#include <stdio.h>
#include <sys/time.h>

struct timeval stop, start;
gettimeofday(&start, NULL);
// do stuff
gettimeofday(&stop, NULL);
printf("took %ld us\n",
       (long)(stop.tv_sec - start.tv_sec) * 1000000 + stop.tv_usec - start.tv_usec);

The difference in microseconds alone is stop.tv_usec - start.tv_usec, but note that this only works for sub-second intervals (tv_usec wraps around every second). For the general case, combine tv_sec and tv_usec as in the printf above.

Edit 2016-08-19

A more appropriate approach on systems with clock_gettime support would be:

#include <stdint.h>
#include <time.h>

struct timespec start, end;
clock_gettime(CLOCK_MONOTONIC_RAW, &start);
// do stuff
clock_gettime(CLOCK_MONOTONIC_RAW, &end);

uint64_t delta_us = (end.tv_sec - start.tv_sec) * 1000000
                  + (end.tv_nsec - start.tv_nsec) / 1000;

Timing programs in C

You can use the function clock() from <time.h>:

The clock() function returns an approximation of processor time used by the program.

You divide the return value by the macro CLOCKS_PER_SEC to get a time amount in seconds, and if you want the time in milliseconds you can multiply the value by 1000, i.e.:

double get_time_as_ms(void) {
    /* convert to double first so the multiplication cannot overflow clock_t */
    return (double)clock() * 1000.0 / CLOCKS_PER_SEC;
}

Or more accurately, have a start and end clock_t and then calculate the difference:

double duration_as_ms(clock_t start, clock_t end) {
    return ((double)(end - start) * 1000) / CLOCKS_PER_SEC;
}

clock_t start = clock(); /* start of program */
/* ... */
clock_t end = clock(); /* end of program */
/* ... */
printf("Duration: %fms\n", duration_as_ms(start, end));

EDIT: Just thought I'd add: a common value for CLOCKS_PER_SEC is 1,000,000, meaning each unit of the value returned by clock() is one millionth of a second. The result is therefore expressed to the nearest one-thousandth of a millisecond (a microsecond), although the actual update granularity may be coarser, as noted above.


