How to Get a Duration as Int Milliseconds and Float Seconds from <chrono>

How to get a duration as int milliseconds and float seconds from <chrono>?

Is this what you're looking for?

#include <chrono>
#include <iostream>

int main()
{
    typedef std::chrono::high_resolution_clock Time;
    typedef std::chrono::milliseconds ms;
    typedef std::chrono::duration<float> fsec;

    auto t0 = Time::now();
    auto t1 = Time::now();

    fsec fs = t1 - t0;                           // elapsed time as float seconds
    ms d = std::chrono::duration_cast<ms>(fs);   // the same interval truncated to whole milliseconds
    std::cout << fs.count() << "s\n";
    std::cout << d.count() << "ms\n";
}

which for me prints out:

6.5e-08s
0ms
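
The two now() calls above are back to back, so the measured interval is tiny. A minimal variant of the same program (adding a std::this_thread::sleep_for call as a stand-in for real work) produces less degenerate numbers:

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    typedef std::chrono::high_resolution_clock Time;
    typedef std::chrono::milliseconds ms;
    typedef std::chrono::duration<float> fsec;

    auto t0 = Time::now();
    std::this_thread::sleep_for(ms(150));        // stand-in for real work
    auto t1 = Time::now();

    fsec fs = t1 - t0;
    ms d = std::chrono::duration_cast<ms>(fs);
    std::cout << fs.count() << "s\n";            // roughly 0.15s
    std::cout << d.count() << "ms\n";            // roughly 150ms
}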

Get chrono seconds in float

According to http://en.cppreference.com/w/cpp/chrono/duration, the default ratio for the second template parameter is 1:1, meaning seconds.

Other values are ratios relative to that. For example, the ratio for std::chrono::milliseconds is 1:1000 (http://en.cppreference.com/w/cpp/numeric/ratio/ratio).

So this statement:

std::chrono::duration<double> elapsed_seconds = end-start;

is equivalent to:

std::chrono::duration<double, std::ratio<1>> elapsed_seconds = end-start;

Which is defined to be seconds from which all other ratios are derived.

Whatever units end - start is expressed in get converted to std::ratio<1>, i.e. seconds.

If you wanted the time in milliseconds you could do:

std::chrono::duration<double, std::ratio<1, 1000>> elapsed_milliseconds = end-start;

And end - start should be converted according to the new ratio.
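
A minimal sketch of the two declarations side by side (start and end here are just two arbitrary steady_clock time points, standing in for whatever the original variables are):

#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    auto start = std::chrono::steady_clock::now();
    auto end   = std::chrono::steady_clock::now();

    // ratio<1> (the default): seconds
    std::chrono::duration<double> elapsed_seconds = end - start;
    // ratio<1, 1000>: milliseconds
    std::chrono::duration<double, std::ratio<1, 1000>> elapsed_milliseconds = end - start;

    std::cout << elapsed_seconds.count() << "s\n";
    std::cout << elapsed_milliseconds.count() << "ms\n";
}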

C++ chrono - get duration as float or long long

From the documentation

template<
    class Rep,
    class Period = std::ratio<1>
> class duration;

Class template std::chrono::duration represents a time interval. It
consists of a count of ticks of type Rep and a tick period, where the
tick period is a compile-time rational constant representing the
number of seconds from one tick to the next.

And:

count returns the count of ticks

So a duration stores a number of ticks of a specified period of time, and count will return that number using the underlying representation type. So if the duration's representation is long long, and the period is std::milli, then .count() will return a long long equal to the number of milliseconds represented by the duration.
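
For example (a minimal sketch of the Rep/Period relationship just described; the 1500 value is arbitrary):

#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    std::chrono::milliseconds d{1500};   // Rep is a signed integer type, Period is std::milli
    static_assert(std::ratio_equal<std::chrono::milliseconds::period, std::milli>::value,
                  "milliseconds ticks once per 1/1000 of a second");
    std::cout << d.count() << " ticks of 1/1000s\n";   // prints 1500
}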


In general you should avoid using weak types like float or long long to represent a duration. Instead you should stick with 'rich' types, such as std::chrono::milliseconds or an appropriate specialization of std::chrono::duration. These types aid correct usage and readability, and help prevent mistakes via type checking.

  • Underspecified / overly general:

    – void increase_speed(double);

    – Object obj; … obj.draw();

    – Rectangle(int,int,int,int);

  • Better:

    – void increase_speed(Speed);

    – Shape& s; … s.draw();

    – Rectangle(Point top_left, Point bottom_right);

    – Rectangle(Point top_left, Box_hw b);


— slide 18 from Bjarne's talk


std::chrono is "a consistent subset of a physical quantities library that handles only units of time and only those units of time with exponents equal to 0 and 1."

If you need to work with quantities of time you should take advantage of this library, or one that provides more complete unit systems, such as boost::units.

There are rare occasions where quantities must be degraded to weakly typed values, for example when one must use an API that requires such types. Otherwise this should be avoided.
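
As a minimal sketch of the type-checking point (the function names here are illustrative, not from any real API):

#include <chrono>
#include <iostream>

// Weak: the caller must remember whether the argument means seconds or milliseconds.
void pause_weak(long long duration)
{
    std::cout << "pausing for " << duration << " ...of something\n";
}

// Rich: the signature states the unit, and other chrono durations convert to it
// implicitly only when the conversion is exact.
void pause_rich(std::chrono::milliseconds duration)
{
    std::cout << "pausing for " << duration.count() << "ms\n";
}

int main()
{
    using namespace std::chrono;
    pause_weak(2);             // 2 of what? The type system cannot help.
    pause_rich(seconds{2});    // OK: converted exactly to 2000ms
    // pause_rich(duration<double>{2.5});   // error: lossy conversion requires an explicit duration_cast
}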

Convert float seconds to chrono::duration

The best way is also the easiest and safest. Safety is a key aspect of using chrono, and here it translates to: least likely to contain programming errors.

There are two steps:

  1. Convert the float to a chrono::duration that is represented by a float and has the period of seconds.
  2. Convert the resultant duration of step 1 to nanoseconds (which is the same thing as duration<int64_t, std::nano>).

This might look like this:

constexpr
auto
durationToDuration(const float time_s)
{
    using namespace std::chrono;
    using fsec = duration<float>;              // step 1: float seconds, no computation at all
    return round<nanoseconds>(fsec{time_s});   // step 2: let chrono convert (and round) to integral nanoseconds
}

fsec is the resultant type of step 1. It does absolutely no computation, and just changes the type from float to a chrono::duration. Then the chrono engine is used to do the actual computation, changing one duration into another duration.

The round utility is used because floating point types are vulnerable to round-off error. So if a floating point value is close to an integral number of nanoseconds, but not exact, one usually desires that close value.
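
A usage sketch of the helper (assuming <chrono> and <iostream> are included, and a C++17 compiler so that std::chrono::round is available):

int main()
{
    auto ns = durationToDuration(1.5f);   // std::chrono::nanoseconds
    std::cout << ns.count() << "ns\n";    // prints 1500000000
}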

But std::chrono::round is a C++17 facility. For C++14, just use one of the free, open-source versions available on the web (http://howardhinnant.github.io/duration_io/chrono_util.html or https://github.com/HowardHinnant/date/blob/master/include/date/date.h).

How to convert std::chrono::duration to double (seconds)?

Simply do:

std::chrono::duration<double>(d).count()

Or, as a function:

template <class Rep, class Period>
constexpr auto F(const std::chrono::duration<Rep, Period>& d)
{
    return std::chrono::duration<double>(d).count();
}

If you need more complex casts that cannot be fulfilled by the std::chrono::duration constructors, use std::chrono::duration_cast.
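
For example, widening to a floating-point representation is implicit, but going back down to a coarser integral duration truncates and therefore requires the cast (a minimal sketch; the 2.75s value is arbitrary):

#include <chrono>
#include <iostream>

int main()
{
    std::chrono::nanoseconds d{2750000000LL};                           // 2.75s
    double s = std::chrono::duration<double>(d).count();                // 2.75, no cast needed
    auto whole = std::chrono::duration_cast<std::chrono::seconds>(d);   // 2s, truncated, cast required
    std::cout << s << "s vs " << whole.count() << "s\n";
}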

Chrono - The difference between two points in time in milliseconds?

std::chrono::duration has two template parameters, the second being exactly the unit of measure. You can invoke std::chrono::duration_cast to cast from one duration type to another. Also, there is a predefined duration type for milliseconds: std::chrono::milliseconds. Composing this together:

auto milliseconds = std::chrono::duration_cast<std::chrono::milliseconds>(foo - now);

To get the actual number of milliseconds, use duration::count:

auto ms = milliseconds.count();

Its return type is duration::rep, which for the standard duration types like std::chrono::milliseconds is a signed integer type of at least 45 bits (the exact type is implementation-defined).
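
A minimal self-contained sketch (steady_clock and the sleep_for call are just stand-ins for the question's two time points and the work between them):

#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    auto now = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(20));   // stand-in for real work
    auto foo = std::chrono::steady_clock::now();

    auto milliseconds = std::chrono::duration_cast<std::chrono::milliseconds>(foo - now);
    auto ms = milliseconds.count();
    std::cout << ms << "ms elapsed\n";   // roughly 20
}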

C++ Print days, hours, minutes, etc. of a chrono::duration

When you cannot use fmtlib or wait for C++20 <format>, you at least want to delay the invocation of count() on durations as much as possible. Also, let <chrono> handle the computation for you. Both measures keep the snippet terse (the code below assumes using namespace std::chrono):

const auto hrs = duration_cast<hours>(sysInactive);
const auto mins = duration_cast<minutes>(sysInactive - hrs);
const auto secs = duration_cast<seconds>(sysInactive - hrs - mins);
const auto ms = duration_cast<milliseconds>(sysInactive - hrs - mins - secs);

And the output:

cout << "System inactive for " << hrs.count() <<
":" << mins.count() <<
":" << secs.count() <<
"." << ms.count() << endl;

Note that you could also define a utility template,

template <class Rep, std::intmax_t num, std::intmax_t denom>
auto chronoBurst(std::chrono::duration<Rep, std::ratio<num, denom>> d)
{
    const auto hrs  = duration_cast<hours>(d);
    const auto mins = duration_cast<minutes>(d - hrs);
    const auto secs = duration_cast<seconds>(d - hrs - mins);
    const auto ms   = duration_cast<milliseconds>(d - hrs - mins - secs);

    return std::make_tuple(hrs, mins, secs, ms);
}

that has a nice use case in conjunction with structured bindings:

const auto [hrs, mins, secs, ms] = chronoBurst(sysInactive);
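
For instance, with a made-up value of 12307042ms (which is 3h 25m 7s 42ms), the pieces come back out ready for printing:

const std::chrono::milliseconds sysInactive{12307042};   // arbitrary example: 3h 25m 7s 42ms
const auto [hrs, mins, secs, ms] = chronoBurst(sysInactive);
std::cout << "System inactive for " << hrs.count() << ":" << mins.count()
          << ":" << secs.count() << "." << ms.count() << "\n";   // System inactive for 3:25:7.42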

Get an unsigned int milliseconds out of chrono::duration

The name of the type is std::chrono::milliseconds, and it has a member function count() that returns the number of those milliseconds:

bool setTimer(std::chrono::milliseconds duration)
{
    // count() returns the tick count as std::chrono::milliseconds::rep (a signed integer type)
    unsigned int dwDuration = static_cast<unsigned int>(duration.count());
    return static_cast<bool>(std::cout << "dwDuration = " << dwDuration << '\n');
}

online demo: http://coliru.stacked-crooked.com/a/03f29d41e9bd260c

If you want to be ultra-pedantic, the return type of count() is std::chrono::milliseconds::rep.

If you want to deal with fractional milliseconds, then the type would be std::chrono::duration<double, std::milli> (and the return type of count() is then double).
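
For example (a minimal sketch; the 1.5ms value is arbitrary):

#include <chrono>
#include <iostream>
#include <ratio>

int main()
{
    std::chrono::duration<double, std::milli> d{1.5};   // 1.5ms
    std::cout << d.count() << "ms\n";                    // count() returns double; prints 1.5
}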

How do I convert a std::chrono::time_point to long and back?

std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();

This is a great place for auto:

auto now = std::chrono::system_clock::now();

Since you want to traffic at millisecond precision, it would be good to go ahead and convert to it in the time_point:

auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);

now_ms is a time_point, based on system_clock, but with the precision of milliseconds instead of whatever precision your system_clock has.

auto epoch = now_ms.time_since_epoch();

epoch now has type std::chrono::milliseconds. And this next statement becomes essentially a no-op (simply makes a copy and does not make a conversion):

auto value = std::chrono::duration_cast<std::chrono::milliseconds>(epoch);

Here:

long duration = value.count();

In both your and my code, duration holds the number of milliseconds since the epoch of system_clock.

This:

std::chrono::duration<long> dur(duration);

Creates a duration represented with a long, and a precision of seconds. This effectively reinterpret_casts the milliseconds held in value to seconds. It is a logic error. The correct code would look like:

std::chrono::milliseconds dur(duration);

This line:

std::chrono::time_point<std::chrono::system_clock> dt(dur);

creates a time_point based on system_clock, with the capability of holding a precision to the system_clock's native precision (typically finer than milliseconds). However the run-time value will correctly reflect that an integral number of milliseconds are held (assuming my correction on the type of dur).

Even with the correction, this test will (nearly always) fail though:

if (dt != now)

Because dt holds an integral number of milliseconds, but now holds an integral number of ticks finer than a millisecond (e.g. microseconds or nanoseconds). Thus only on the rare chance that system_clock::now() returned an integral number of milliseconds would the test pass.

But you can instead:

if (dt != now_ms)

And you will now get your expected result reliably.

Putting it all together:

int main ()
{
    auto now = std::chrono::system_clock::now();
    auto now_ms = std::chrono::time_point_cast<std::chrono::milliseconds>(now);

    auto value = now_ms.time_since_epoch();
    long duration = value.count();

    std::chrono::milliseconds dur(duration);

    std::chrono::time_point<std::chrono::system_clock> dt(dur);

    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}

Personally I find all of the std::chrono:: qualification overly verbose, and so I would code it as:

int main ()
{
    using namespace std::chrono;
    auto now = system_clock::now();
    auto now_ms = time_point_cast<milliseconds>(now);

    auto value = now_ms.time_since_epoch();
    long duration = value.count();

    milliseconds dur(duration);

    time_point<system_clock> dt(dur);

    if (dt != now_ms)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}

Which will reliably output:

Success.

Finally, I recommend eliminating temporaries to reduce the code converting between time_point and integral type to a minimum. These conversions are dangerous, and so the less code you write manipulating the bare integral type the better:

int main ()
{
    using namespace std::chrono;
    // Get current time with precision of milliseconds
    auto now = time_point_cast<milliseconds>(system_clock::now());
    // sys_milliseconds is type time_point<system_clock, milliseconds>
    using sys_milliseconds = decltype(now);
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    sys_milliseconds dt{milliseconds{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}

The main danger above is not interpreting integral_duration as milliseconds on the way back to a time_point. One possible way to mitigate that risk is to write:

    sys_milliseconds dt{sys_milliseconds::duration{integral_duration}};

This reduces risk down to just making sure you use sys_milliseconds on the way out, and in the two places on the way back in.

And one more example: Let's say you want to convert to and from an integral type which represents whatever duration system_clock supports (microseconds, tenths of microseconds, or nanoseconds). Then you don't have to worry about specifying milliseconds as above. The code simplifies to:

int main ()
{
    using namespace std::chrono;
    // Get current time with native precision
    auto now = system_clock::now();
    // Convert time_point to signed integral type
    auto integral_duration = now.time_since_epoch().count();
    // Convert signed integral type to time_point
    system_clock::time_point dt{system_clock::duration{integral_duration}};
    // test
    if (dt != now)
        std::cout << "Failure." << std::endl;
    else
        std::cout << "Success." << std::endl;
}

This works, but if you run half the conversion (out to integral) on one platform and the other half (in from integral) on another platform, you run the risk that system_clock::duration will have different precisions for the two conversions.


