How to Calculate Execution Time of a Code Snippet in C++

You can use this function I wrote. You call GetTimeMs64(), and it returns the number of milliseconds elapsed since the Unix epoch using the system clock, just like time(NULL), except in milliseconds.

It works on both Windows and Linux; it is thread safe.

Note that the granularity is 15 ms on Windows; on Linux it is implementation dependent, but is usually 15 ms as well.

#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif

/* Remove if already defined */
typedef long long int64; typedef unsigned long long uint64;

/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
* windows and linux. */

uint64 GetTimeMs64()
{
#ifdef _WIN32
    /* Windows */
    FILETIME ft;
    LARGE_INTEGER li;

    /* Get the number of 100-nanosecond intervals elapsed since January 1, 1601 (UTC)
     * and copy it to a LARGE_INTEGER structure. */
    GetSystemTimeAsFileTime(&ft);
    li.LowPart = ft.dwLowDateTime;
    li.HighPart = ft.dwHighDateTime;

    uint64 ret = li.QuadPart;
    ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
    ret /= 10000;                /* From 100-nanosecond (10^-7) to 1 millisecond (10^-3) intervals */

    return ret;
#else
    /* Linux */
    struct timeval tv;

    gettimeofday(&tv, NULL);

    uint64 ret = tv.tv_usec;
    /* Convert from microseconds (10^-6) to milliseconds (10^-3) */
    ret /= 1000;

    /* Add the seconds (10^0) after converting them to milliseconds (10^-3).
     * Cast first so the multiplication cannot overflow a 32-bit time_t. */
    ret += (uint64)tv.tv_sec * 1000;

    return ret;
#endif
}

Execution time of C program

CLOCKS_PER_SEC is a constant which is declared in <time.h>. To get the CPU time used by a task within a C application, use:

clock_t begin = clock();

/* here, do your time-consuming job */

clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;

Note that this returns the time as a floating point type. This can be more precise than a second (e.g. you measure 4.52 seconds). Precision depends on the architecture; on modern systems you easily get 10ms or lower, but on older Windows machines (from the Win98 era) it was closer to 60ms.

clock() is standard C; it works "everywhere". There are system-specific functions, such as getrusage() on Unix-like systems.
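For instance, here is a minimal sketch of reading per-process CPU time with getrusage() (POSIX-only; the measured work and the output format are purely illustrative):

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    /* ... do the time-consuming job here ... */

    struct rusage usage;
    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        /* ru_utime is user CPU time, ru_stime is system CPU time. */
        double user_sec = usage.ru_utime.tv_sec + usage.ru_utime.tv_usec / 1e6;
        double sys_sec  = usage.ru_stime.tv_sec + usage.ru_stime.tv_usec / 1e6;
        printf("user CPU: %f s, system CPU: %f s\n", user_sec, sys_sec);
    }
    return 0;
}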

Java's System.currentTimeMillis() does not measure the same thing. It is a "wall clock": it can help you measure how much time it took for the program to execute, but it does not tell you how much CPU time was used. On a multitasking system (i.e. all of them), these can be widely different.
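To see the difference concretely, here is a small sketch (the two-second sleep is purely illustrative) that measures a mostly-sleeping task with both clock() and a wall clock; the CPU time reported will be close to zero while the wall time is around two seconds:

#include <chrono>
#include <ctime>
#include <iostream>
#include <thread>

int main()
{
    clock_t cpu_begin = clock();
    auto wall_begin = std::chrono::steady_clock::now();

    std::this_thread::sleep_for(std::chrono::seconds(2)); /* uses almost no CPU */

    clock_t cpu_end = clock();
    auto wall_end = std::chrono::steady_clock::now();

    std::cout << "CPU time:  "
              << (double)(cpu_end - cpu_begin) / CLOCKS_PER_SEC << " s\n";
    std::cout << "Wall time: "
              << std::chrono::duration<double>(wall_end - wall_begin).count() << " s\n";
    return 0;
}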

How to get an objective evaluation of the execution time of a C++ code snippet?

Measuring a single call's execution time is pretty useless for judging any performance improvements. There are too many factors that influence the actual execution time of a function. If you measure timing, you should make many calls to the function, measure the total time, and build a statistical average of the measured execution times:

#include <iostream>

int main() {
    uint64 begin = GetTimeMs64();
    for (int i = 0; i < 10000; ++i) {
        execute_my_codes_method1();
    }
    uint64 end = GetTimeMs64();
    std::cout << "Average execution time is " << (end - begin) / 10000.0 << " ms" << std::endl;
}

Additionally, having unit tests for your functions up front (using a decent testing framework such as Google Test) will make such quick judgments a lot quicker and easier.

Not only can you determine how often the test cases should be run (to gather the statistical data for the average-time calculation), the unit tests can also prove that the desired/existing functionality and input/output consistency weren't broken by an alternate implementation.

As an extra benefit (you mentioned difficulties running the two functions in question sequentially), most of those unit-test frameworks provide SetUp() and TearDown() methods that are executed before/after each test case. Thus you can easily provide a consistent state of preconditions or invariants for each single test-case run, as in the sketch below.
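For illustration only, here is a minimal sketch of such a fixture using Google Test; execute_my_codes_method1() is the hypothetical function from the earlier snippet, the fixture and test names are made up, and timing in SetUp()/TearDown() is just one possible arrangement (link against gtest_main to get a main() for free):

#include <gtest/gtest.h>
#include <chrono>
#include <iostream>

void execute_my_codes_method1(); /* hypothetical function under test, assumed defined elsewhere */

class Method1Timing : public ::testing::Test {
protected:
    void SetUp() override {
        /* Runs before every test case: establish a consistent starting state. */
        start_ = std::chrono::steady_clock::now();
    }

    void TearDown() override {
        /* Runs after every test case: report the elapsed wall-clock time. */
        auto elapsed = std::chrono::steady_clock::now() - start_;
        std::cout << "Elapsed: "
                  << std::chrono::duration_cast<std::chrono::milliseconds>(elapsed).count()
                  << " ms\n";
    }

    std::chrono::steady_clock::time_point start_;
};

TEST_F(Method1Timing, AverageOver10000Calls) {
    for (int i = 0; i < 10000; ++i) {
        execute_my_codes_method1();
    }
}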


As a further option, instead of measuring and gathering the statistical data yourself, you can use profiling tools that work via code instrumentation. A good example is GCC's gprof: it gathers information about how often every function was called and how long those calls took. This data can be analyzed later with the tool to find potential bottlenecks in your implementations.

Additionally, if you decide to provide unit tests in the future, you may want to ensure that all of your code paths for the various input-data situations are well covered by your test cases. A very good tool for this is GCC's gcov instrumentation. To analyze the gathered code-coverage information you can use lcov, which visualizes the results quite nicely and comprehensively.

C Program measure execution time for an instruction

#include <stdio.h>
#include <time.h>

int main()
{
    clock_t t1 = clock();
    printf("Dummy Statement\n");
    clock_t t2 = clock();
    printf("The time taken is.. %g clock ticks\n", (double)(t2 - t1));
    return 0;
}

Please take a look at the links below too.
What’s the correct way to use printf to print a clock_t?

http://www.velocityreviews.com/forums/t454464-c-get-time-in-milliseconds.html

Measuring execution time of a function in C++

There is a very easy-to-use method in C++11: std::chrono::high_resolution_clock from the <chrono> header.

Use it like so:

#include <chrono>

/* Only needed for the sake of this example. */
#include <iostream>
#include <thread>

void long_operation()
{
    /* Simulating a long, heavy operation. */

    using namespace std::chrono_literals;
    std::this_thread::sleep_for(150ms);
}

int main()
{
    using std::chrono::high_resolution_clock;
    using std::chrono::duration_cast;
    using std::chrono::duration;
    using std::chrono::milliseconds;

    auto t1 = high_resolution_clock::now();
    long_operation();
    auto t2 = high_resolution_clock::now();

    /* Getting number of milliseconds as an integer. */
    auto ms_int = duration_cast<milliseconds>(t2 - t1);

    /* Getting number of milliseconds as a double. */
    duration<double, std::milli> ms_double = t2 - t1;

    std::cout << ms_int.count() << "ms\n";
    std::cout << ms_double.count() << "ms\n";
    return 0;
}

This will measure the duration of the function long_operation.

Possible output:

150ms
150.068ms

Working example: https://godbolt.org/z/oe5cMd

How to calculate the execution time of an algorithm in C++?

First of all, take a look at my reply to this question; it contains a portable (Windows/Linux) function to get the time in milliseconds.

Next, do something like this:

uint64 start_time = GetTimeMs64();
const int NUM_TIMES = 100000; /* Choose this so it takes at the very least half a minute to run */

for (int i = 0; i < NUM_TIMES; ++i) {
    /* Code you want to time.. */
}

double milliseconds = (GetTimeMs64() - start_time) / (double)NUM_TIMES;

All done! (Note that I haven't tried to compile it)

How can I find the execution time of a section of my program in C?

You referred to clock() and time() - were you looking for gettimeofday()?
That will fill in a struct timeval, which contains seconds and microseconds.

Of course the actual resolution is up to the hardware.
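A minimal sketch of that approach (POSIX-only; the measured section is left as a placeholder):

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    gettimeofday(&start, NULL);

    /* ... the section of the program you want to measure ... */

    gettimeofday(&end, NULL);

    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_usec - start.tv_usec) / 1e6;
    printf("Elapsed: %f seconds\n", elapsed);
    return 0;
}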


