Resolution of std::chrono::high_resolution_clock doesn't correspond to measurements
I'm going to guess you are using Visual Studio 2012. If not, disregard this answer. Visual Studio 2012 typedefs high_resolution_clock to system_clock. Sadly, this means it has crappy precision (around 1 ms). I wrote a better high-resolution clock which uses QueryPerformanceCounter for use in Visual Studio 2012...
HighResClock.h:
#include <chrono>
#include <ratio>

// A drop-in clock with the same interface as the standard clocks,
// backed by QueryPerformanceCounter and reporting nanosecond ticks.
struct HighResClock
{
    typedef long long rep;
    typedef std::nano period;
    typedef std::chrono::duration<rep, period> duration;
    typedef std::chrono::time_point<HighResClock> time_point;
    static const bool is_steady = true;

    static time_point now();
};
HighResClock.cpp:
#include "HighResClock.h"
#include <Windows.h>

namespace
{
    // Counts per second of the performance counter. The frequency is
    // fixed at boot, so it's safe to query once and cache it.
    const long long g_Frequency = []() -> long long
    {
        LARGE_INTEGER frequency;
        QueryPerformanceFrequency(&frequency);
        return frequency.QuadPart;
    }();
}

HighResClock::time_point HighResClock::now()
{
    LARGE_INTEGER count;
    QueryPerformanceCounter(&count);
    // Convert counts to nanoseconds. Splitting into whole seconds and a
    // remainder avoids overflowing the intermediate multiplication.
    const rep whole = (count.QuadPart / g_Frequency) * period::den;
    const rep part  = (count.QuadPart % g_Frequency) * period::den / g_Frequency;
    return time_point(duration(whole + part));
}
(I left out of the above code an assert and some #ifs that check whether it's being compiled on Visual Studio 2012.)
You can use this clock anywhere and in the same way as standard clocks.
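For instance, here's a minimal sketch of timing a workload with it, just as you would with steady_clock (do_work is a stand-in for whatever you're measuring):

#include <iostream>
#include "HighResClock.h"

// Placeholder workload; substitute the code you want to measure.
static void do_work()
{
    volatile long long sink = 0;
    for (int i = 0; i < 1000000; ++i)
        sink += i;
}

int main()
{
    HighResClock::time_point start = HighResClock::now();
    do_work();
    HighResClock::time_point end = HighResClock::now();

    // The clock ticks in nanoseconds, so convert for friendlier output.
    std::chrono::microseconds us =
        std::chrono::duration_cast<std::chrono::microseconds>(end - start);
    std::cout << "elapsed: " << us.count() << " us\n";
}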
Does standard C++11 guarantee that high_resolution_clock measure real time (non CPU-cycles)?
Short answer: as of the C++14 standard, high_resolution_clock does NOT explicitly provide the guarantee you're looking for.
For now, steady_clock and system_clock provide better and more explicit guarantees. However, most implementations probably will ensure that HRC advances while its thread is sleeping. It may nevertheless be preferable to do your own type-aliasing. See the 'EDIT' sections below and the discussion in the comments.
Long answer:
The draft standard does in fact implicitly acknowledge (in section 30.2.4, "Timing specifications", note 5) that Clock objects are not required to advance while their associated thread is sleeping. For context, this section explains how the standard-library timer objects work; the behavior of a timer is based on the behavior of the clock used to set it.
[ Note: If the clock is not synchronized with a steady clock, e.g., a
CPU time clock, these timeouts might not provide useful functionality.
— end note ]
Note that in this case, "timeouts might not provide useful functionality" means that if you use a timer to sleep_until a particular clock time using an unsynchronized (non-realtime) clock, your thread will not wake up. So the note above is a bit of an understatement.
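To make that concrete, here's a hedged sketch of the failure mode. With a hypothetical clock whose now() only advances while its thread is running (e.g. a CPU-time clock), a call like this can block forever, because sleeping prevents the clock from ever reaching the target time_point:

#include <chrono>
#include <thread>

// Clock is any type meeting the Clock requirements. If its now() stops
// advancing while this thread sleeps (a hypothetical CPU-time clock,
// say), the target below is never reached and sleep_until never returns.
template <class Clock>
void nap_for_a_second()
{
    std::this_thread::sleep_until(Clock::now() + std::chrono::seconds(1));
}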
And, indeed, there is nothing in the Clock specification (20.13.3) that actually requires synchronization with a steady clock.
However, the standard appears to implicitly condone two potential aliases for high_resolution_clock in its definition in 20.13.7.3:
high_resolution_clock may be a synonym for system_clock or steady_clock.
steady_clock is, of course, steady. system_clock is not, because the system time could change (e.g. as the result of an NTP update) while the program is running.
However, system_clock (20.13.7.1) is still a "realtime" clock:
Objects of class system_clock represent wall clock time from the system-wide realtime clock.
So system_clock will not stop advancing when your thread sleeps.
This confirms Nicol Bolas's point that is_steady may be false for high_resolution_clock even if the clock behaves as you expect (i.e. it advances regardless of the state of its associated thread).
Based on this, it seems reasonable to expect most mainstream implementations to use a realtime (i.e. synchronized) clock of some sort for high_resolution_clock. Implementations are designed to be useful, after all, and a clock is generally less useful if it's not realtime, especially if it's used with timers as per the note on "useful functionality" above. Since it's not guaranteed, however, you should check the behavior and/or documentation of each implementation you want to use.
EDIT: I've started a discussion on the ISO C++ Standards group about this issue, suggesting that it's a bug in the standard. The first reply, from Howard Hinnant, who takes credit for putting high_resolution_clock in the standard, is worth quoting:
I would not be opposed to deprecating high_resolution_clock, with the intent to remove it after a suitable period of deprecation. The reality is that it is always a typedef to either steady_clock or system_clock, and the programmer is better off choosing one of those two and know what he's getting, than choose high_resolution_clock and get some other clock by a roll of the dice.
...So the moral, according to Hinnant, is don't use high_resolution_clock.
EDIT 2: The problem with high_resolution_clock, according to Hinnant, is not so much that you're likely to run into trouble with HRC (although that is possible even with a conforming compiler, as argued above), but that you're typically not getting any better resolution than one of the other two clocks would give you (though you'll need to compare their resolutions manually in a type-alias or typedef to get a "maximum resolution" non-sleeping clock), so there's no concrete benefit. You therefore need to weigh the risk of having threads sleep forever on conforming implementations against the semantic benefit of the name high_resolution_clock and the simplicity/brevity benefit of avoiding your own typedef or type-alias.
Here's some actual code for various approaches (both snippets assume #include <chrono>, #include <type_traits>, and using namespace std::chrono; are in effect):

- Use static_assert to check whether high_resolution_clock is actually aliased to a real clock. This will probably never fire, which means that you're automatically getting the highest-resolution "realtime" clock without messing with your own typedefs:

static_assert(
    std::is_same<high_resolution_clock, steady_clock>::value
        || std::is_same<high_resolution_clock, system_clock>::value,
    "high_resolution_clock IS NOT aliased to one of the other standard clocks!");

- Use the HRC if high_resolution_clock::is_steady is true; otherwise prefer the higher-resolution clock between system_clock and steady_clock. NOTE that if high_resolution_clock::is_steady is false, this probably just means that the HRC is aliased to system_clock, in which case you'll ultimately end up with a new type-alias that is actually the same type as high_resolution_clock. However, creating your own type-alias makes this explicit and guarantees that even a malicious-but-conforming implementation won't have the issue outlined above:

// Pick whichever of system_clock and steady_clock ticks faster:
// period is num/den seconds per tick, so (assuming num == 1) the
// larger den is the finer resolution. Note the >= below; comparing
// with <= would select the COARSER clock.
using maxres_sys_or_steady =
    std::conditional<
        (system_clock::period::den >= steady_clock::period::den),
        system_clock, steady_clock
    >::type;
using maxres_nonsleeping_clock =
    std::conditional<
        high_resolution_clock::is_steady,
        high_resolution_clock, maxres_sys_or_steady
    >::type;
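As a usage sketch, the derived alias drops in wherever you'd otherwise name a standard clock. The program below repeats the aliases from above so it stands alone; the workload comment marks where your measured code would go:

#include <chrono>
#include <iostream>
#include <ratio>
#include <type_traits>

using namespace std::chrono;

// Aliases from the answer above.
using maxres_sys_or_steady =
    std::conditional<
        (system_clock::period::den >= steady_clock::period::den),
        system_clock, steady_clock
    >::type;
using maxres_nonsleeping_clock =
    std::conditional<
        high_resolution_clock::is_steady,
        high_resolution_clock, maxres_sys_or_steady
    >::type;

int main()
{
    auto start = maxres_nonsleeping_clock::now();
    // ... workload to measure goes here ...
    auto end = maxres_nonsleeping_clock::now();
    std::cout << duration<double, std::milli>(end - start).count() << " ms\n";
}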
What are the uses of std::chrono::high_resolution_clock?
There are none.
Sorry, my bad.
If you are tempted to use high_resolution_clock, choose steady_clock instead. On libc++ and VS, high_resolution_clock is a type alias of steady_clock anyway.
On gcc, high_resolution_clock is a type alias of system_clock, and I've seen more than one use of high_resolution_clock::to_time_t on this platform (which is wrong).
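To illustrate what's wrong there: to_time_t is a member of system_clock, not part of the general Clock requirements, so spelling it through high_resolution_clock only compiles where the alias happens to point at system_clock (as on gcc). A portable version names system_clock directly:

#include <chrono>
#include <ctime>

int main()
{
    using namespace std::chrono;

    // Non-portable: compiles on gcc only because high_resolution_clock
    // is aliased to system_clock there; fails on libc++ and VS.
    // std::time_t t1 = high_resolution_clock::to_time_t(high_resolution_clock::now());

    // Portable: to_time_t belongs to system_clock, so say so.
    std::time_t t2 = system_clock::to_time_t(system_clock::now());
    (void)t2;
}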
Do use <chrono>. But there are parts of <chrono> that you should avoid.
- Don't use high_resolution_clock.
- Avoid uses of .count() and .time_since_epoch() unless there is no other way to get the job done.
- Avoid duration_cast unless the code won't compile without it, and you desire truncation-towards-zero behavior.
- Avoid explicit conversion syntax if an implicit conversion compiles.
A short sketch applying these guidelines follows.
How to get the precision of high_resolution_clock?
The minimum representable duration is high_resolution_clock::period::num / high_resolution_clock::period::den seconds. You can print it like this:
std::cout << (double) std::chrono::high_resolution_clock::period::num
             / std::chrono::high_resolution_clock::period::den;
Why is this? A clock's ::period member is defined as "The tick period of the clock in seconds." It is a specialization of std::ratio, which is a template to represent ratios at compile time. It provides two integral constants: num and den, the numerator and denominator of a fraction, respectively.
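A complete, compilable version of that snippet might look like this (the printed value is implementation-dependent, e.g. 1e-09 where the period is std::nano):

#include <chrono>
#include <iostream>

int main()
{
    using period = std::chrono::high_resolution_clock::period;
    // Tick period in seconds: num/den, e.g. 1/1000000000 for std::nano.
    std::cout << static_cast<double>(period::num) / period::den << " s\n";
}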
high_resolution_clock's highest resolution is 1000ns
The precision of the clock is hardware- and operating-system-dependent; on an x86 platform running Linux, microsecond precision is quite normal. On my Red Hat 6 machine with a 2.6.30 kernel, I can only get about 10 µs.
To get better resolution, you'll need a real-time operating system.
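If you want to estimate the resolution you actually observe (as opposed to the nominal period from the previous answer), one rough approach is to poll the clock until its reported value changes; the smallest nonzero difference approximates the effective step:

#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    using clock = std::chrono::high_resolution_clock;

    // Spin until now() reports a new value; the difference is roughly
    // the smallest step the clock exposes on this machine.
    auto t0 = clock::now();
    auto t1 = t0;
    while (t1 == t0)
        t1 = clock::now();

    std::chrono::duration<double, std::nano> step = t1 - t0;
    std::cout << "observed step: " << step.count() << " ns\n";
}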