Difference between std::system_clock and std::steady_clock?
From N3376:

20.11.7.1 [time.clock.system]/1:

Objects of class system_clock represent wall clock time from the system-wide realtime clock.

20.11.7.2 [time.clock.steady]/1:

Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.

20.11.7.3 [time.clock.hires]/1:

Objects of class high_resolution_clock represent clocks with the shortest tick period. high_resolution_clock may be a synonym for system_clock or steady_clock.
For instance, the system-wide clock might be affected by something like daylight saving time, at which point the actual time listed at some point in the future can actually be a time in the past. (E.g. in the US, in the fall, time moves back one hour, so the same hour is experienced "twice".) However, steady_clock is not allowed to be affected by such things.
Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:

In Table 59, C1 and C2 denote clock types. t1 and t2 are values returned by C1::now() where the call returning t1 happens before the call returning t2 and both of these calls occur before C1::time_point::max(). [ Note: this means C1 did not wrap around between t1 and t2. — end note ]

Expression: C1::is_steady
Returns: const bool
Operational Semantics: true if t1 <= t2 is always true and the time between clock ticks is constant, otherwise false.
That's all the standard has on their differences.
If you want to do benchmarking, your best bet is probably going to be std::high_resolution_clock, because it is likely that your platform uses a high-resolution timer (e.g. QueryPerformanceCounter on Windows) for this clock. However, if you're benchmarking, you should really consider using platform-specific timers for your benchmark, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.
Difference between steady_clock vs system_clock?
Answering questions in reverse order:
What is the difference between steady_clock vs system_clock in layman's terms?
If you're holding a system_clock in your hand, you would call it a watch, and it would tell you what time it is.

If you're holding a steady_clock in your hand, you would call it a stopwatch, and it would tell you how fast someone ran a lap, but it would not tell you what time it is.
If you had to, you could time someone running a lap with your watch. But if your watch (like mine) periodically talked to another machine (such as the atomic clock in Boulder CO) to correct itself to the current time, it might make minor mistakes in timing that lap. The stopwatch won't make that mistake, but it also can't tell you what the correct current time is.
Does my above code look right?
No. And even if it gave you reasonable answers, I would not say it is right. Don't feel bad, this is a beginner mistake that lots of people make with the <chrono> library.
There is a simple rule I follow with the <chrono> library. The rule is actually not completely correct (thus it is a guideline). But it is close enough to correct to be a guideline that is nearly always followed:
Don't use count().
And a corollary:
Don't use time_since_epoch().
The <chrono> library is designed around a type-safe system meant to protect you from unit-conversion mistakes. If you accidentally attempt an unsafe conversion, the error is caught at compile time (as opposed to it being a run time error).
The member functions count() and time_since_epoch() are "escape hatches" out of this type-safe system ... to be used only in cases of emergency. Such emergencies arise when (for example) the committee neglects to give you all the tools you need to get the job done (such as I/O) for the <chrono> types, or when you need to interface with some other timing API via integers.
Review your code and others' for uses of count() and time_since_epoch(), and scrutinize each one: Is there any way the code could be rewritten to eliminate it?
Reviewing the first line of your code:
uint64_t now = duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();
now is a time_point (from steady_clock). Its units are milliseconds, but at this time I'm not convinced that the units are important. What is important is that now is a time_point retrieved from steady_clock:
auto now = steady_clock::now();
Your second line is more complicated:
bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));
Let's start with data_holder->getTimestamp(): If you can modify getTimestamp(), you should modify it to return a time_point instead of a uint64_t. To do so, you will have to know the correct units (which you do -- milliseconds), and you will have to know the correct epoch. The epoch is the time point against which your milliseconds are measured.
In this case 1437520382241ms is about 45.6 years. Assuming this is a recent time stamp, 45.6 years ago was very close to 1970-01-01. As it turns out, every implementation of system_clock uses 1970-01-01 as its epoch (though each implementation counts different units from this epoch).
So either modify getTimestamp() to return a time_point<system_clock, milliseconds>, or wrap the return of getTimestamp() with time_point<system_clock, milliseconds>:
auto dh_ts = system_clock::time_point{milliseconds{data_holder->getTimestamp()}};
Now your second line is down to:
bool is_old = (120 * 1000 < (now - dh_ts));
Another good guideline:

If you see conversion factors in your <chrono> code, you're doing it wrong. <chrono> lives for doing the conversions for you.
bool is_old = (minutes{2} < (now - dh_ts));
This next step is stylistic, but now your code is simple enough to get rid of your excess parentheses if that is something that appeals to you:
bool is_old = minutes{2} < now - dh_ts;
If you were able to modify getTimestamp() to return a type-safe value, this code could also look like:
bool is_old = minutes{2} < now - data_holder->getTimestamp();
Alas, either way, this still does not compile! The error message should state something along the lines of: there is no valid operator-() between now and dh_ts.
This is the type-safety system coming in to save you from run time errors!
The problem is that time_points from system_clock can't be subtracted from time_points from steady_clock (because the two have different epochs). So you have to switch to:
auto now = system_clock::now();
Putting it all together:
#include <chrono>
#include <cstdint>
#include <memory>

struct DataHolder
{
    std::chrono::system_clock::time_point
    getTimestamp()
    {
        using namespace std::chrono;
        return system_clock::time_point{milliseconds{1437520382241}};
    }
};

int
main()
{
    using namespace std;
    using namespace std::chrono;
    auto data_holder = std::unique_ptr<DataHolder>(new DataHolder);
    auto now = system_clock::now();
    bool is_old = minutes{2} < now - data_holder->getTimestamp();
}
And in C++14 that last line can be made a little more concise:
bool is_old = 2min < now - data_holder->getTimestamp();
In summary:
- Refuse to use count() (except for I/O).
- Refuse to use time_since_epoch() (except for I/O).
- Refuse to use conversion factors (such as 1000).
- Argue with it until it compiles.
If you succeed in the above four points, you will most likely not experience any run time errors (but you will get your fair share of compile time errors).
What are the pros & cons of the different C++ clocks for logging time stamps?
system_clock is a clock that keeps time with UTC (excluding leap seconds). Every once in a while (maybe several times a day), it gets adjusted by small amounts to keep it aligned with the correct time. This is often done with a network service such as NTP. These adjustments are typically on the order of microseconds, but can be either forward or backward in time. It is actually possible (though neither likely nor common) for timestamps from this clock to go backwards by tiny amounts. Unless abused by an administrator, system_clock does not jump by gross amounts, say due to daylight saving time or a change of the computer's local time zone, since it always tracks UTC.
steady_clock is like a stopwatch. It has no relationship to any time standard. It just keeps ticking. It may not keep perfect time (no clock does, really). But it will never be adjusted, especially not backwards. It is great for timing short bits of code. But since it never gets adjusted, it may drift over time with respect to system_clock, which is adjusted to keep in sync with UTC.
This boils down to the fact that steady_clock is best for timing short durations. It also typically has nanosecond resolution, though that is not required. And system_clock is best for timing "long" times where "long" is very fuzzy. But certainly hours or days qualify as "long", and durations under a second don't. And if you need to relate a timestamp to a human-readable time such as a date/time on the civil calendar, system_clock is the only choice.
high_resolution_clock is allowed to be a type alias for either steady_clock or system_clock, and in practice always is. But some platforms alias it to steady_clock and some to system_clock. So imho, it is best to just directly choose steady_clock or system_clock so that you know what you're getting.
Though not specified, std::time is typically restricted to a resolution of a second. So it is completely unusable for situations that require subsecond precision. Otherwise std::time tracks UTC (excluding leap seconds), just like system_clock.
std::clock tracks processor time, as opposed to physical time. That is, when your thread is not busy doing something, and the OS has parked it, measurements of std::clock will not reflect time increasing during that down time. This can be really useful if that is what you need to measure. And it can be very surprising if you use it without realizing that processor time is what you're measuring.
And new for C++20
C++20 adds four more clocks to the <chrono> library:
utc_clock is just like system_clock, except that it counts leap seconds. This is mainly useful when you need to subtract two time_points across a leap second insertion point, and you absolutely need to count that inserted leap second (or a fraction thereof).
tai_clock measures seconds since 1958-01-01 00:00:00 and is offset 10s ahead of UTC at this date. It doesn't have leap seconds, but every time a leap second is inserted into UTC, the calendrical representation of TAI and UTC diverge by another second.
gps_clock models the GPS time system. It measures seconds since the first Sunday of January, 1980 00:00:00 UTC. Like TAI, every time a leap second is inserted into UTC, the calendrical representation of GPS and UTC diverge by another second. Because of the similarity in the way that GPS and TAI handle UTC leap seconds, the calendrical representation of GPS is always behind that of TAI by 19 seconds.
file_clock is the clock used by the filesystem library, and is what produces the chrono::time_point aliased by std::filesystem::file_time_type.
One can use a new named cast in C++20 called clock_cast to convert among the time_points of system_clock, utc_clock, tai_clock, gps_clock and file_clock. For example:
auto tp = clock_cast<system_clock>(last_write_time("some_path/some_file.xxx"));
The type of tp is a system_clock-based time_point with the same duration type (precision) as file_time_type.
std::chrono::system_clock vs std::chrono::high_resolution_clock behavior
The reason it asserts for you is because high_resolution_clock is allowed to (and often does) have a different epoch than system_clock.
It is the de facto standard (not specified, but portable) that system_clock's epoch measures time since 1970-01-01 00:00:00 UTC, neglecting leap seconds.1
There is no de facto standard for high_resolution_clock. On gcc, high_resolution_clock is a typedef for system_clock, and so on gcc platforms you'll notice perfect synchronization. On VS and libc++, high_resolution_clock is a typedef for steady_clock.
For me, the epoch of steady_clock is whenever the computer booted up.
Here is a video tutorial for <chrono>. It covers lots of issues, including this one, and is about an hour long.
1 The de facto standard for system_clock has been made official for C++20.
Objects of type system_clock represent wall clock time from the system-wide realtime clock. Objects of type sys_time<Duration> measure time since 1970-01-01 00:00:00 UTC excluding leap seconds. This measure is commonly referred to as Unix time. This measure facilitates an efficient mapping between sys_time and calendar types ([time.cal]). [ Example: sys_seconds{sys_days{1970y/January/1}}.time_since_epoch() is 0s. sys_seconds{sys_days{2000y/January/1}}.time_since_epoch() is 946'684'800s, which is 10'957 * 86'400s. — end example ]
Significant performance difference of std clock between different machines
I've been able to reproduce the respective measurements on the two machines, thanks to @Imran's comment above. (Posting this answer to close the question; if Imran should post one, I'm happy to accept his instead.)
It indeed is related to the available clocksource. The XEON, unfortunately, had the notsc flag in its kernel parameters, which is why the tsc clocksource wasn't available and therefore wasn't selected.
Thus for anyone running into this problem:

1. Check your current clocksource in /sys/devices/system/clocksource/clocksource0/current_clocksource
2. Check the available clocksources in /sys/devices/system/clocksource/clocksource0/available_clocksource
3. If you can't find tsc, run dmesg | grep tsc to check your kernel parameters for notsc
Is it correct to use std::chrono::steady_clock time across the system?
Q: Whether the comparison provides me the latest file or not?
Answer: With reference to the link shared above:

"This clock is not related to wall clock time (for example, it can be time since last reboot), and is most suitable for measuring intervals."

It does not seem appropriate for time stamping in a file. For example, it could be time since the process started, and different processes would not agree on the current std::chrono::steady_clock time.

(Taken from @François Andrieux's comment; it might help someone here.)
Q: Should I use the system_clock instead of steady_clock?
Answer: When we are working on different servers or systems, comparing std::chrono::steady_clock values across them will be wrong. Use std::chrono::system_clock instead.
mismatched types 'std::chrono::_V2::steady_clock' and 'std::chrono::_V2::system_clock'
Thanks to the information provided in the comments, I came up with the following solution:
In the header file:
struct Timer
{
    std::chrono::time_point< std::chrono::steady_clock > start;
    std::chrono::time_point< std::chrono::steady_clock > end;

    Timer( );
    ~Timer( );
};
In the source file:
util::Timer::Timer( )
    : start( std::chrono::steady_clock::now( ) )
{
}

util::Timer::~Timer( )
{
    end = std::chrono::steady_clock::now( );
    std::clog << "\nTimer took "
              << std::chrono::duration< double, std::milli >( end - start ).count( )
              << " ms\n";
}
So in short, I switched from std::chrono::high_resolution_clock::now( ) to std::chrono::steady_clock::now( ), because high_resolution_clock has different implementations on different compilers (see the cppreference notes on high_resolution_clock). On some of them it returns std::chrono::time_point<std::chrono::steady_clock> and on others it returns std::chrono::time_point<std::chrono::system_clock>. And that caused problems for me.
A note from cppreference:
Notes
The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided. It is often just an alias for std::chrono::steady_clock or std::chrono::system_clock, but which one it is depends on the library or configuration. When it is a system_clock, it is not monotonic (e.g., the time can go backwards). For example, for gcc's libstdc++ it is system_clock, for MSVC it is steady_clock, and for clang's libc++ it depends on configuration.
Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time.
Get steady_clock and system_clock at the same time
There is no way to do this perfectly, and there is not even a best way. And the way you have shown is one of the good ways. If you are willing to trade some run time, you can gain some better accuracy in a statistical sense by calling now() more than once on each clock and averaging the results.
For example:
std::pair<std::chrono::steady_clock::time_point,
          std::chrono::system_clock::time_point>
combined_now()
{
    using namespace std::chrono;
    auto u0 = system_clock::now();
    auto t0 = steady_clock::now();
    auto t1 = steady_clock::now();
    auto u1 = system_clock::now();
    return {t0 + (t1-t0)/2, u0 + (u1-u0)/2};
}
This is not necessarily better than what you have. But it is another tool in the toolbox to be aware of. As always, use tests with your specific use case to decide what is best for you.