Difference Between steady_clock and system_clock

Difference between steady_clock vs system_clock?

Answering questions in reverse order:

What is the difference between steady_clock and system_clock, in layman's terms?

If you're holding a system_clock in your hand, you would call it a watch, and it would tell you what time it is.

If you're holding a steady_clock in your hand, you would call it a stopwatch, and it would tell you how fast someone ran a lap, but it would not tell you what time it is.

If you had to, you could time someone running a lap with your watch. But if your watch (like mine) periodically talked to another machine (such as the atomic clock in Boulder CO) to correct itself to the current time, it might make minor mistakes in timing that lap. The stopwatch won't make that mistake, but it also can't tell you what the correct current time is.

Does my above code look right?

No. And even if it gave you reasonable answers, I would not say it is right. Don't feel bad, this is a beginner mistake that lots of people make with the <chrono> library.

There is a simple rule I follow with the <chrono> library. The rule is actually not completely correct (thus it is a guideline). But it is close enough to correct to be a guideline that is nearly always followed:

Don't use count().

And a corollary:

Don't use time_since_epoch().

The <chrono> library is designed around a type-safe system meant to protect you from units conversions mistakes. If you accidentally attempt an unsafe conversion, the error is caught at compile time (as opposed to it being a run time error).

The member functions count() and time_since_epoch() are "escape hatches" out of this type-safe system ... to be used only in cases of emergency. Such emergencies arise when (for example) the committee neglects to give you all the tools you need to get the job done (such as I/O) for the <chrono> types, or such as the need to interface with some other timing API via integers.

Review your code and others' for uses of count() and time_since_epoch(), and scrutinize each use of these functions: Is there any way the code could be rewritten to eliminate their use?

Reviewing the first line of your code:

uint64_t now = duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count();

now is a time_point (from steady_clock). Its units are milliseconds, but at this time I'm not convinced that the units are important. What is important is that now is a time_point retrieved from steady_clock:

auto now = steady_clock::now();

Your second line is more complicated:

bool is_old = (120 * 1000 < (now - data_holder->getTimestamp()));

Let's start with data_holder->getTimestamp(): If you can modify getTimestamp(), you should modify it to return a time_point instead of a uint64_t. To do so, you will have to know the correct units (which you do -- milliseconds), and you will have to know the correct epoch. The epoch is the time point from which your milliseconds are measured.

In this case 1437520382241ms is about 45.6 years. Assuming this is a recent time stamp, 45.6 years ago was very close to 1970-01-01. As it turns out, every implementation of system_clock uses 1970-01-01 as its epoch (though each implementation counts different units from this epoch).

So either modify getTimestamp() to return a time_point<system_clock, milliseconds>, or wrap the return of getTimestamp() with time_point<system_clock, milliseconds>:

auto dh_ts = system_clock::time_point{milliseconds{data_holder->getTimestamp()}};

Now your second line is down to:

bool is_old = (120 * 1000 < (now - dh_ts));

Another good guideline:

If you see conversion factors in your <chrono> code, you're doing it wrong. <chrono> lives for doing the conversions for you.

bool is_old = (minutes{2} < (now - dh_ts));

This next step is stylistic, but now your code is simple enough to get rid of your excess parentheses if that is something that appeals to you:

bool is_old = minutes{2} < now - dh_ts;

If you were able to modify getTimestamp() to return a type-safe value, this code could also look like:

bool is_old = minutes{2} < now - data_holder->getTimestamp();

Alas, either way, this still does not compile! The error message should state something along the lines that there is no valid operator-() between now and dh_ts.

This is the type-safety system coming in to save you from run time errors!

The problem is that time_points from system_clock can't be subtracted from time_points from steady_clock (because the two have different epochs). So you have to switch to:

auto now = system_clock::now();

Putting it all together:

#include <chrono>
#include <cstdint>
#include <memory>

struct DataHolder
{
    std::chrono::system_clock::time_point
    getTimestamp()
    {
        using namespace std::chrono;
        return system_clock::time_point{milliseconds{1437520382241}};
    }
};

int
main()
{
    using namespace std;
    using namespace std::chrono;
    auto data_holder = std::unique_ptr<DataHolder>(new DataHolder);

    auto now = system_clock::now();
    bool is_old = minutes{2} < now - data_holder->getTimestamp();
}

And in C++14 that last line can be made a little more concise:

    bool is_old = 2min < now - data_holder->getTimestamp();

In summary:

  • Refuse to use count() (except for I/O).
  • Refuse to use time_since_epoch() (except for I/O).
  • Refuse to use conversion factors (such as 1000).
  • Argue with it until it compiles.

If you succeed in the above four points, you will most likely not experience any run time errors (but you will get your fair share of compile time errors).

Difference between std::system_clock and std::steady_clock?

From N3376:

20.11.7.1 [time.clock.system]/1:

Objects of class system_clock represent wall clock time from the system-wide realtime clock.

20.11.7.2 [time.clock.steady]/1:

Objects of class steady_clock represent clocks for which values of time_point never decrease as physical time advances and for which values of time_point advance at a steady rate relative to real time. That is, the clock may not be adjusted.

20.11.7.3 [time.clock.hires]/1:

Objects of class high_resolution_clock represent clocks with the shortest tick period. high_resolution_clock may be a synonym for system_clock or steady_clock.

For instance, the system-wide clock might be adjusted, for example by NTP or by someone setting the machine's clock, at which point the actual time listed at some point in the future can actually be a time in the past. (Note that system_clock tracks UTC, so daylight-saving shifts in local time do not move it; it is adjustments to the clock itself that can make it jump.) However, steady_clock is not allowed to be affected by such things.

Another way of thinking about "steady" in this case is in the requirements defined in the table of 20.11.3 [time.clock.req]/2:

In Table 59, C1 and C2 denote clock types. t1 and t2 are values returned by C1::now() where the call returning t1 happens before the call returning t2, and both of these calls occur before C1::time_point::max(). [ Note: this means C1 did not wrap around between t1 and t2. — end note ]

Expression: C1::is_steady
Returns: const bool
Operational Semantics: true if t1 <= t2 is always true and the time between clock ticks is constant, otherwise false.

That's all the standard has on their differences.

If you want to do benchmarking, your best bet is probably going to be std::high_resolution_clock, because it is likely that your platform uses a high resolution timer (e.g. QueryPerformanceCounter on Windows) for this clock. However, if you're benchmarking, you should really consider using platform specific timers for your benchmark, because different platforms handle this differently. For instance, some platforms might give you some means of determining the actual number of clock ticks the program required (independent of other processes running on the same CPU). Better yet, get your hands on a real profiler and use that.

What are the pros & cons of the different C++ clocks for logging time stamps?

system_clock is a clock that keeps time with UTC (excluding leap seconds). Every once in a while (maybe several times a day), it gets adjusted by small amounts, to keep it aligned with the correct time. This is often done with a network service such as NTP. These adjustments are typically on the order of microseconds, but can be either forward or backwards in time. It is actually possible (though not likely nor common) for timestamps from this clock to go backwards by tiny amounts. Unless abused by an administrator, system_clock does not jump by gross amounts, say due to daylight saving, or changing the computer's local time zone, since it always tracks UTC.

steady_clock is like a stopwatch. It has no relationship to any time standard. It just keeps ticking. It may not keep perfect time (no clock does really). But it will never be adjusted, especially not backwards. It is great for timing short bits of code. But since it never gets adjusted, it may drift over time with respect to system_clock which is adjusted to keep in sync with UTC.

This boils down to the fact that steady_clock is best for timing short durations. It also typically has nanosecond resolution, though that is not required. And system_clock is best for timing "long" times where "long" is very fuzzy. But certainly hours or days qualifies as "long", and durations under a second don't. And if you need to relate a timestamp to a human readable time such as a date/time on the civil calendar, system_clock is the only choice.

high_resolution_clock is allowed to be a type alias for either steady_clock or system_clock, and in practice always is. But some platforms alias to steady_clock and some to system_clock. So imho, it is best to just directly choose steady_clock or system_clock so that you know what you're getting.

Though not specified, std::time is typically restricted to a resolution of a second. So it is completely unusable for situations that require subsecond precision. Otherwise std::time tracks UTC (excluding leap seconds), just like system_clock.

std::clock tracks processor time, as opposed to physical time. That is, when your thread is not busy doing something, and the OS has parked it, measurements of std::clock will not reflect time increasing during that down time. This can be really useful if that is what you need to measure. And it can be very surprising if you use it without realizing that processor time is what you're measuring.

And new for C++20

C++20 adds four more clocks to the <chrono> library:

utc_clock is just like system_clock, except that it counts leap seconds. This is mainly useful when you need to subtract two time_points across a leap second insertion point, and you absolutely need to count that inserted leap second (or a fraction thereof).

tai_clock measures seconds since 1958-01-01 00:00:00 and is offset 10s ahead of UTC at this date. It doesn't have leap seconds, but every time a leap second is inserted into UTC, the calendrical representation of TAI and UTC diverge by another second.

gps_clock models the GPS time system. It measures seconds since the first Sunday of January, 1980 00:00:00 UTC. Like TAI, every time a leap second is inserted into UTC, the calendrical representation of GPS and UTC diverge by another second. Because of the similarity in the way that GPS and TAI handle UTC leap seconds, the calendrical representation of GPS is always behind that of TAI by 19 seconds.

file_clock is the clock used by the filesystem library, and is what produces the chrono::time_point aliased by std::filesystem::file_time_type.

One can use a new named cast in C++20 called clock_cast to convert among the time_points of system_clock, utc_clock, tai_clock, gps_clock and file_clock. For example:

auto tp = clock_cast<system_clock>(last_write_time("some_path/some_file.xxx"));

The type of tp is a system_clock-based time_point with the same duration type (precision) as file_time_type.

std::chrono::system_clock vs std::chrono::high_resolution_clock behavior

The reason it asserts for you is because high_resolution_clock is allowed to (and often does) have a different epoch than system_clock.

It is the de facto standard (not specified but portable) that system_clock's epoch is measuring time since 1970-01-01 00:00:00 UTC, neglecting leap seconds.¹

There is no de facto standard for high_resolution_clock. On gcc high_resolution_clock is a typedef for system_clock, and so on gcc platforms, you'll notice perfect synchronization. On VS and libc++ high_resolution_clock is a typedef for steady_clock.

For me, the epoch of steady_clock is whenever the computer booted up.

Here is a video tutorial for <chrono>. It covers lots of issues, including this one, and is about an hour long.


¹ The de facto standard for system_clock has been made official for C++20.

Objects of type system_clock represent wall clock time from the
system-wide realtime clock. Objects of type sys_time<Duration>
measure time since 1970-01-01 00:00:00 UTC excluding leap seconds.
This measure is commonly referred to as Unix time. This measure
facilitates an efficient mapping between sys_time and calendar
types ([time.cal]). [ Example:
sys_seconds{sys_days{1970y/January/1}}.time_since_epoch() is
0s.

sys_seconds{sys_days{2000y/January/1}}.time_since_epoch()
is 946'684'800s, which is 10'957 * 86'400s.
— end example ]

Comparison of `std::chrono` clocks with `boost::xtime`

How do the C++11 std::chrono clocks steady_clock and high_resolution_clock compare with boost::xtime::xtime_get() in terms of quirks and general properties on various platforms?

Any timing library can only deliver what the underlying OS/hardware combination can deliver -- full stop.

Even if the library API promises nanosecond resolution, that doesn't mean that the underlying OS/hardware can deliver that precision. So ultimately the timing API can not improve upon the quality of the platform.

boost::xtime is basically what was standardized by C (and subsequently C++) as timespec. This is a {second, nanosecond} pair which is used both as a point in time and as a time duration, depending on the function it is used with in the standard C headers. Though a quick survey of the boost headers suggests that boost uses xtime only as a time point (I could've missed something).

timespec has a long history of existing use, especially in POSIX systems. It has been in existence on POSIX systems much longer than std::chrono, which was designed in 2008, and standardized in C++11 (2011).

The range of timespec (xtime) is typically larger than the age of the universe. Though on systems that can not supply a 64 bit integral type, the range of timespec will be significantly smaller: +/-68 years, typically centered around 1970 when it is used as a time point.

As mentioned above, timespec advertises nanosecond precision on all platforms, but will only deliver the precision that the underlying platform can supply.

chrono supplies separate types for time points and durations. This helps catch errors at compile time. For example if you add two time points together, it doesn't compile. 9am today + 7am today is nonsensical. However if you subtract two time points, that makes perfect sense and returns a separate type: a duration. 9am today - 7am today is 2 hours.

chrono supplies multiple types for both durations and time points which can differ in both precision and representation. The "built-in" durations are nanoseconds, microseconds, milliseconds, seconds, minutes and hours, each represented with a signed integral type (that list is expanded in the C++20 spec). But you can create your own duration types with your own precision and representation (e.g. floating point, or a safe-int library).

The implementor of chrono for any given platform can advertise the precision of the platform's now() function. I.e. it doesn't have to always be nanoseconds; it might be microseconds, or some other unit. The vendor isn't required to be honest, but they usually are. Clients can query the return type of now() for its precision, programmatically, at compile time (this is C++ after all).

The chrono data structures are {count of units}, as opposed to the xtime {seconds, nanoseconds} data structure. For chrono this is true of both durations and time points, even though these are distinct types.

The {count of units} layout has several advantages over the {seconds, nanoseconds} layout:

  • There is an opportunity to have a smaller sizeof. system_clock::time_point is typically 64 bits while xtime is typically 128 bits. This does give xtime a superior range. However the chrono library can also be used with 128 bit integral types which will subsequently have a larger range than xtime's.

  • Clients can make the size/range tradeoff with chrono. xtime clients get what they get.

  • Arithmetic is faster/more-efficient and easier to program with the {count} data structure than with {seconds, nanoseconds}. This leads to code that is smaller, faster, and generally more bug free (negative values represented with {seconds, nanoseconds} are an ongoing horror story).

  • For a given sizeof and precision, one can always get a larger range with a {count} data structure than a multi-field data structure such as {seconds, nanoseconds}.

The standard does not guarantee that high_resolution_clock be steady (it explicitly mentions it may be an alias to system_clock), so that's one pitfall to look out for.

In practice high_resolution_clock is always a type alias for either steady_clock or system_clock. Which one depends on the platform. My advice is to just use steady_clock or system_clock so you know what you're dealing with.

Resolution: The C++11 standard does not seem to guarantee any resolution; what are the "real-life" resolutions of these clocks?

The advertised resolutions are:

libc++/llvm:

system_clock
rep is long long : 64 bits
period is 1/1,000,000
is_steady is 0

high_resolution_clock
rep is long long : 64 bits
period is 1/1,000,000,000
is_steady is 1

steady_clock
rep is long long : 64 bits
period is 1/1,000,000,000
is_steady is 1

high_resolution_clock is the same type as steady_clock

libstdc++/gcc:

system_clock
rep is long : 64 bits
period is 1/1,000,000,000
is_steady is 0

high_resolution_clock
rep is long : 64 bits
period is 1/1,000,000,000
is_steady is 0

steady_clock
rep is long : 64 bits
period is 1/1,000,000,000
is_steady is 1

high_resolution_clock is the same type as system_clock

VS-2013:

system_clock
rep is __int64 : 64 bits
period is 1/10,000,000
is_steady is 0

high_resolution_clock
rep is __int64 : 64 bits
period is 1/1,000,000,000
is_steady is 1

steady_clock
rep is __int64 : 64 bits
period is 1/1,000,000,000
is_steady is 1

high_resolution_clock is the same type as steady_clock

Because of my opening remarks, the "real-life" resolutions are very likely to be identical to xtime's for any given platform.

Can the C++11 standard clocks cope with durations in the order of days, maybe even weeks on all known platforms?

Yes. Even months and years.

The first duration limit you will hit is when dealing with nanosecond resolution. chrono guarantees this will have at least a 64 bit signed integral representation, giving you ±292 years of range. When talking about system_clock, this range will be centered on 1970.

Any other known issues or surprising quirks

When operating at or near range limits, the chrono library can easily and silently overflow. For example if you compare microseconds::max() with nanoseconds::max(), you will experience an overflow and get an indeterminate result. This happens because the comparison operator will first convert the microseconds to nanoseconds before doing the comparison, and that conversion overflows.

Steer well clear of the duration and time_point range limits. If you have to deal with them and are not sure how, look on Stack Overflow for answers. Ask a question specific to your concern if that search is not satisfactory.

Get steady_clock and system_clock at the same time

There is no way to do this perfectly, and there is not even a best way. And the way you have shown is one of the good ways. If you are willing to trade some run time you can gain some better accuracy in a statistical sense by calling now() more than once on each clock and averaging the results.

For example:

std::pair<std::chrono::steady_clock::time_point,
          std::chrono::system_clock::time_point>
combined_now()
{
    using namespace std::chrono;
    auto u0 = system_clock::now();
    auto t0 = steady_clock::now();
    auto t1 = steady_clock::now();
    auto u1 = system_clock::now();
    return {t0 + (t1-t0)/2, u0 + (u1-u0)/2};
}

This is not necessarily better than what you have. But it is another tool in the toolbox to be aware of. As always, use tests with your specific use case to decide what is best for you.

mismatched types 'std::chrono::_V2::steady_clock' and 'std::chrono::_V2::system_clock'

Thanks to the information provided in the comments, I came up with the following solution:

In the header file:

namespace util
{
    struct Timer
    {
        std::chrono::time_point< std::chrono::steady_clock > start;
        std::chrono::time_point< std::chrono::steady_clock > end;

        Timer( );
        ~Timer( );
    };
}

In the source file:

util::Timer::Timer( )
    : start( std::chrono::steady_clock::now( ) )
{
}

util::Timer::~Timer( )
{
    end = std::chrono::steady_clock::now( );
    std::clog << "\nTimer took "
              << std::chrono::duration< double, std::milli >( end - start ).count( )
              << " ms\n";
}

So in short, I switched from std::chrono::high_resolution_clock::now( ) to std::chrono::steady_clock::now( ), because high_resolution_clock has different implementations on different standard libraries. On some of them it returns std::chrono::time_point<std::chrono::steady_clock> and on others it returns std::chrono::time_point<std::chrono::system_clock>. And that caused problems for me.

A note from cppreference:

Notes

The high_resolution_clock is not implemented consistently across different standard library implementations, and its use should be avoided. It is often just an alias for std::chrono::steady_clock or std::chrono::system_clock, but which one it is depends on the library or configuration. When it is a system_clock, it is not monotonic (e.g., the time can go backwards). For example, for gcc's libstdc++ it is system_clock, for MSVC it is steady_clock, and for clang's libc++ it depends on configuration.

Generally one should just use std::chrono::steady_clock or std::chrono::system_clock directly instead of std::chrono::high_resolution_clock: use steady_clock for duration measurements, and system_clock for wall-clock time.

Converting steady_clock::time_point to time_t

Assuming you need the steady behavior for internal calculations, and not for display, here's a function you can use to convert to time_t for display.

using std::chrono::duration_cast;
using std::chrono::steady_clock;
using std::chrono::system_clock;

time_t steady_clock_to_time_t( steady_clock::time_point t )
{
    return system_clock::to_time_t(system_clock::now()
         + duration_cast<system_clock::duration>(t - steady_clock::now()));
}

If you need steady behavior for logging, you'd want to get one ( system_clock::now(), steady_clock::now() ) pair at startup and use that forever after.


