How to Get the Precision of High_Resolution_Clock

How to get the precision of high_resolution_clock?

The minimum representable duration is high_resolution_clock::period::num / high_resolution_clock::period::den seconds. You can print it like this:

std::cout << (double) std::chrono::high_resolution_clock::period::num
          / std::chrono::high_resolution_clock::period::den;

Why is this? A clock's ::period member is defined as "The tick period of the clock in seconds." It is a specialization of std::ratio, which is a template to represent ratios at compile time. It provides two integral constants: num and den, the numerator and denominator of a fraction, respectively.
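If you want a complete, self-contained version of the snippet above, something along these lines should work on any conforming implementation (only standard headers are used):

    #include <chrono>
    #include <iostream>

    int main()
    {
        using clock = std::chrono::high_resolution_clock;
        // period is a std::ratio<num, den>; the tick period is num/den seconds.
        std::cout << "tick period: "
                  << static_cast<double>(clock::period::num) / clock::period::den
                  << " s\n";
    }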

Resolution of std::chrono::high_resolution_clock doesn't correspond to measurements

I'm going to guess you are using Visual Studio 2012. If not, disregard this answer. Visual Studio 2012 typedefs high_resolution_clock to system_clock. Sadly, this means it has crappy precision (around 1 ms). I wrote a better high-resolution clock which uses QueryPerformanceCounter for use in Visual Studio 2012...

HighResClock.h:

#include <chrono>

struct HighResClock
{
    typedef long long                             rep;
    typedef std::nano                             period;
    typedef std::chrono::duration<rep, period>    duration;
    typedef std::chrono::time_point<HighResClock> time_point;
    static const bool is_steady = true;

    static time_point now();
};

HighResClock.cpp:

#include "HighResClock.h"
#include <windows.h>

namespace
{
    // Query the performance-counter frequency (counts per second) once, at start-up.
    const long long g_Frequency = []() -> long long
    {
        LARGE_INTEGER frequency;
        QueryPerformanceFrequency(&frequency);
        return frequency.QuadPart;
    }();
}

HighResClock::time_point HighResClock::now()
{
    LARGE_INTEGER count;
    QueryPerformanceCounter(&count);
    // counts * (nanoseconds per second) / (counts per second) = nanoseconds
    return time_point(duration(count.QuadPart * static_cast<rep>(period::den) / g_Frequency));
}

(I left out of the above code an assert and the #ifs that check whether it is being compiled with Visual Studio 2012.)

You can use this clock anywhere and in the same way as standard clocks.
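As a sketch of that, timing a piece of code with HighResClock looks exactly like timing it with a standard clock (do_work is just a placeholder for whatever you want to measure):

    #include "HighResClock.h"
    #include <chrono>
    #include <iostream>

    void do_work() { /* placeholder for the code being timed */ }

    int main()
    {
        auto start = HighResClock::now();
        do_work();
        auto stop = HighResClock::now();
        std::cout << std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count()
                  << " us\n";
    }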

Does standard C++11 guarantee that high_resolution_clock measure real time (non CPU-cycles)?

Short answer: as of the C++14 standard, high_resolution_clock does NOT explicitly provide the guarantee you're looking for.

For now, steady_clock and system_clock provide better and more explicit guarantees. However, most implementations probably will ensure that HRC advances while its thread is sleeping. It may nevertheless be preferable to do your own type-aliasing. See 'EDIT' sections below and discussion in comments.

Long answer:

The draft standard does in fact implicitly acknowledge (in 30.2.4, "Timing specifications", note 5) that Clock objects are not required to advance while their associated thread is sleeping. For context, this section explains how the standard-library timer objects work; the behavior of a timer is based on the behavior of the clock used to set it.

[ Note: If the clock is not synchronized with a steady clock, e.g., a
CPU time clock, these timeouts might not provide useful functionality.
end note ]

Note that in this case, "timeouts might not provide useful functionality" means that if you use a timer to sleep_until a particular clock time using an unsynchronized (non-realtime) clock, your thread will not wake up. So the note above is a bit of an understatement.
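To make that concrete, here is roughly what such a timed wait looks like (a minimal sketch; Clock stands for whichever clock type you pick):

    #include <chrono>
    #include <thread>

    template <class Clock>
    void nap()
    {
        // sleep_until returns only once Clock::now() has reached the target time;
        // if Clock does not advance while this thread sleeps, the thread never wakes.
        std::this_thread::sleep_until(Clock::now() + std::chrono::milliseconds(100));
    }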

And, indeed, there is nothing in the Clock specification (20.13.3) that actually requires synchronization with a steady clock.

However, the standard appears to implicitly condone two potential aliases for high_resolution_clock in the definition in 20.13.7.3:

high_resolution_clock may be a synonym for system_clock or
steady_clock.

steady_clock is, of course, steady. system_clock is not, because the system time could change (e.g. as the result of an NTP update) while the program is running.

However, system_clock (20.13.7.1) is still a "realtime" clock:

Objects of class system_clock represent wall clock time from the
system-wide realtime clock.

So system_clock will not stop advancing when your thread sleeps. This confirms Nicol Bolas's point that is_steady may be false for high_resolution_clock even if the clock behaves as you expect (i.e. it advances regardless of the state of its associated thread).
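If you want to see what your own implementation reports, here is a quick sketch that prints is_steady for all three standard clocks (standard headers only):

    #include <chrono>
    #include <iostream>

    int main()
    {
        using namespace std::chrono;
        std::cout << std::boolalpha
                  << "system_clock::is_steady          = " << system_clock::is_steady << '\n'
                  << "steady_clock::is_steady          = " << steady_clock::is_steady << '\n'
                  << "high_resolution_clock::is_steady = " << high_resolution_clock::is_steady << '\n';
    }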

Based on this, it seems reasonable to expect most mainstream implementations to use a realtime (i.e. synchronized) clock of some sort for high_resolution_clock. Implementations are designed to be useful, after all, and a clock is generally less useful if it's not realtime, especially if it's used with timers as per the note on "useful functionality" above.

Since it's not guaranteed, however, you should check the behavior and/or documentation of each implementation you want to use.

EDIT: I've started a discussion on the ISO C++ Standards group on the issue, suggesting that this is a bug in the standard. The first reply, from Howard Hinnant, who takes credit for putting it in the standard, is worth quoting:

I would not be opposed to deprecating high_resolution_clock, with the intent to remove it after a suitable period of deprecation. The reality is that it is always a typedef to either steady_clock or system_clock, and the programmer is better off choosing one of those two and know what he’s getting, than choose high_resolution_clock and get some other clock by a roll of the dice.

...So the moral, according to Hinnant, is don't use high_resolution_clock.

EDIT 2:

The problem with high_resolution_clock, according to Hinnant, is not so much that you're likely to run into a problem with HRC (although that is possible even with a conforming compiler, as per the argument above), but that there's no concrete benefit: you typically can't get any better resolution from it than you could from one of the other two clocks (though you'll need to compare their resolutions manually in a type-alias or typedef to get a "maximum resolution" non-sleeping clock). So you need to weigh the risk of having threads sleep forever on conforming implementations against the semantic benefit of the name high_resolution_clock and the simplicity/brevity benefit of avoiding your own typedef or type-alias.

Here's some actual code for various approaches:

  • Use static_assert to check whether high_resolution_clock is actually aliased to a real clock. This will probably never fire, which means that you're automatically getting the highest-resolution "realtime" clock without messing with your own typedefs:

    static_assert(
        std::is_same<high_resolution_clock, steady_clock>::value
        || std::is_same<high_resolution_clock, system_clock>::value,
        "high_resolution_clock IS NOT aliased to one of the other standard clocks!");
  • Use the HRC if high_resolution_clock::is_steady is true; otherwise prefer the higher-resolution clock between system_clock and steady_clock. NOTE that if high_resolution_clock::is_steady is false, this probably just means that the HRC is aliased to system_clock, in which case the new type-alias will actually be the same type as high_resolution_clock. However, creating your own type-alias makes this explicit and guarantees that even a malicious-but-conforming implementation won't have the issue outlined above. (A usage sketch follows the code below.)

    // Pick whichever of system_clock / steady_clock has the finer tick period
    // (the larger period::den, given that num == 1 for both in practice).
    using maxres_sys_or_steady =
        std::conditional<
            system_clock::period::den >= steady_clock::period::den,
            system_clock, steady_clock
        >::type;
    // Prefer the HRC only when it is steady; otherwise fall back to the above.
    using maxres_nonsleeping_clock =
        std::conditional<
            high_resolution_clock::is_steady,
            high_resolution_clock, maxres_sys_or_steady
        >::type;
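As a usage sketch (assuming the aliases above and a using namespace std::chrono; are in scope), the resulting clock is used exactly like any standard clock:

    auto start = maxres_nonsleeping_clock::now();
    // ... code being timed ...
    auto elapsed = maxres_nonsleeping_clock::now() - start;
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();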

How accurate is std::chrono?

It's most likely hardware and OS dependent. For example, when I ask Windows what the clock frequency is using QueryPerformanceFrequency(), I get 3903987; taking the inverse of that gives a clock period (resolution) of about 256 nanoseconds. This is the value that my operating system reports.

With std::chrono, according to the docs, the minimum representable duration is high_resolution_clock::period::num / high_resolution_clock::period::den seconds.

The num and den are numerator and denominator. std::chrono::high_resolution_clock tells me the numerator is 1, and the denominator is 1 billion, supposedly corresponding to 1 nanosecond:

std::cout << (double) std::chrono::high_resolution_clock::period::num
          / std::chrono::high_resolution_clock::period::den; // Results in 1e-09, i.e. one nanosecond.

So according to std::chrono I have one-nanosecond resolution, but I don't believe it, because the native OS system call is more likely to be reporting the actual achievable frequency/period.
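Here is a small Windows-only sketch of that comparison, printing the OS-reported QueryPerformanceCounter period next to the compile-time period that std::chrono claims:

    #include <chrono>
    #include <iostream>
    #include <windows.h>

    int main()
    {
        LARGE_INTEGER f;
        QueryPerformanceFrequency(&f);
        std::cout << "QPC period:    " << 1.0 / f.QuadPart << " s\n"; // e.g. ~2.56e-07 s on the hardware above

        using hrc = std::chrono::high_resolution_clock;
        std::cout << "chrono period: "
                  << static_cast<double>(hrc::period::num) / hrc::period::den
                  << " s\n";                                          // e.g. 1e-09 s
    }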

What precision is used by std::chrono::system_clock?

No, it is not guaranteed. You can use the clock's period member alias to get the tick period in seconds:

#include <chrono>
#include <iostream>

int main() {
    std::cout << std::chrono::system_clock::period::num << " / "
              << std::chrono::system_clock::period::den;
}

Possible output:

1 / 1000000000
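If you would rather see that as a duration than as a fraction, one tick of the clock can be cast to nanoseconds (a small sketch; the printed value varies by platform):

    #include <chrono>
    #include <iostream>

    int main()
    {
        // One tick of system_clock, expressed in nanoseconds.
        auto tick = std::chrono::duration_cast<std::chrono::nanoseconds>(
            std::chrono::system_clock::duration(1));
        std::cout << tick.count() << " ns per tick\n";
    }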

platform-specific std::chrono::high_resolution_clock::period::num

There are three implementations of std::chrono::high_resolution_clock that I am aware of: Visual Studio, gcc and clang (when used with libc++).

All three of these have nanosecond precision (std::chrono::high_resolution_clock::period is std::ratio<1, 1000000000>, i.e. num == 1 and den == 1000000000). For VS and libc++, high_resolution_clock is type-aliased to steady_clock. On gcc it is type-aliased to system_clock.
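To see which alias your own toolchain uses, a quick std::is_same check works (a sketch using only standard headers):

    #include <chrono>
    #include <iostream>
    #include <type_traits>

    int main()
    {
        using namespace std::chrono;
        std::cout << std::boolalpha
                  << "HRC is steady_clock: "
                  << std::is_same<high_resolution_clock, steady_clock>::value << '\n'
                  << "HRC is system_clock: "
                  << std::is_same<high_resolution_clock, system_clock>::value << '\n';
    }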

There is nothing in the spec that prevents std::chrono::high_resolution_clock::period::num != 1, and you are correct that on such a system 1 second would not be representable in "ticks". This further translates to:

seconds would not be implicitly convertible to high_resolution_clock::duration.

To find the coarsest duration to which both seconds and high_resolution_clock::duration are convertible, you can portably use:

using CT = common_type_t<seconds, high_resolution_clock::duration>;

For all of the implementations I'm aware of, CT is a type-alias for nanoseconds.
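For example, under the same assumptions (names from std::chrono in scope, C++14 for common_type_t), one second converts to CT implicitly and exactly, whatever tick the clock actually uses:

    #include <chrono>
    #include <iostream>
    #include <type_traits>

    int main()
    {
        using namespace std::chrono;
        using CT = std::common_type_t<seconds, high_resolution_clock::duration>;

        CT one_second = seconds(1); // implicit conversion: CT is fine enough to represent seconds exactly
        std::cout << one_second.count() << " CT ticks per second\n";
    }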


