Execution time of C program
CLOCKS_PER_SEC is a constant declared in <time.h>. To get the CPU time used by a task within a C application, use:
clock_t begin = clock();
/* here, do your time-consuming job */
clock_t end = clock();
double time_spent = (double)(end - begin) / CLOCKS_PER_SEC;
Note that time_spent is a floating-point value, so it can be more precise than a whole second (e.g. you might measure 4.52 seconds). Precision depends on the architecture: on modern systems you easily get 10 ms or better, but on older Windows machines (from the Win98 era) it was closer to 60 ms.
clock() is standard C; it works "everywhere". There are also system-specific functions, such as getrusage() on Unix-like systems.
Java's System.currentTimeMillis() does not measure the same thing. It is a "wall clock": it can help you measure how much time the program took to execute, but it does not tell you how much CPU time was used. On a multitasking system (i.e. all of them), these can be widely different.
How to get the execution time of a C program?
Contrary to popular belief, the clock() function retrieves CPU time, not elapsed wall-clock time, as its name may mislead people into believing.
Here is the language from the C Standard:
7.27.2.1 The clock function

Synopsis

#include <time.h>
clock_t clock(void);

Description

The clock function determines the processor time used.

Returns

The clock function returns the implementation's best approximation to the processor time used by the program since the beginning of an implementation-defined era related only to the program invocation. To determine the time in seconds, the value returned by the clock function should be divided by the value of the macro CLOCKS_PER_SEC. If the processor time used is not available, the function returns the value (clock_t)(-1). If the value cannot be represented, the function returns an unspecified value.
To retrieve the elapsed time, you should use one of the following:
- the time() function, with a resolution of 1 second
- the timespec_get() function, which may be more precise but might not be available on all systems
- the gettimeofday() system call, available on Linux systems
- the clock_gettime() function.
See What specifically are wall-clock-time, user-cpu-time, and system-cpu-time in UNIX? for more information on this subject.
Here is a modified version using gettimeofday():
#include <stdio.h>
#include <unistd.h>
#include <sys/time.h>
int main() {
struct timeval start, end;
gettimeofday(&start, NULL);
sleep(3);
gettimeofday(&end, NULL);
double time_taken = end.tv_sec + end.tv_usec / 1e6 -
start.tv_sec - start.tv_usec / 1e6; // in seconds
printf("time program took %f seconds to execute\n", time_taken);
return 0;
}
Output:
time program took 3.005133 seconds to execute
How to find the execution time of a while-loop, not the total program
The header file time.h provides that function; you can use it. Notice the clock_t variable called start, which stores the result of a clock() call.
try this:
clock_t start = clock();
for ( int i = 0; i < 100; i++ )
rand();
printf ( "%f\n", ( (double)clock() - start ) / CLOCKS_PER_SEC );
This will give you the execution time. Also, check the condition in your while loop: it should probably be "< 100". Try this code; it should work for you.
How to Calculate Execution Time of a Code Snippet in C++
You can use this function I wrote. You call GetTimeMs64(), and it returns the number of milliseconds elapsed since the Unix epoch using the system clock, just like time(NULL), except in milliseconds.
It works on both Windows and Linux, and it is thread safe. Note that the granularity is 15 ms on Windows; on Linux it is implementation dependent, but it is usually 15 ms as well.
#ifdef _WIN32
#include <Windows.h>
#else
#include <sys/time.h>
#include <ctime>
#endif
/* Remove if already defined */
typedef long long int64;
typedef unsigned long long uint64;
/* Returns the amount of milliseconds elapsed since the UNIX epoch. Works on both
* windows and linux. */
uint64 GetTimeMs64()
{
#ifdef _WIN32
/* Windows */
FILETIME ft;
LARGE_INTEGER li;
/* Get the amount of 100 nano seconds intervals elapsed since January 1, 1601 (UTC) and copy it
* to a LARGE_INTEGER structure. */
GetSystemTimeAsFileTime(&ft);
li.LowPart = ft.dwLowDateTime;
li.HighPart = ft.dwHighDateTime;
uint64 ret = li.QuadPart;
ret -= 116444736000000000LL; /* Convert from file time to UNIX epoch time. */
ret /= 10000; /* From 100 nano seconds (10^-7) to 1 millisecond (10^-3) intervals */
return ret;
#else
/* Linux */
struct timeval tv;
gettimeofday(&tv, NULL);
uint64 ret = tv.tv_usec;
/* Convert from micro seconds (10^-6) to milliseconds (10^-3) */
ret /= 1000;
/* Adds the seconds (10^0) after converting them to milliseconds (10^-3) */
ret += (uint64)tv.tv_sec * 1000; /* cast first so a 32-bit time_t cannot overflow */
return ret;
#endif
}
Measuring execution time of a function in C++
This is a very easy-to-use method in C++11. Use std::chrono::high_resolution_clock from the <chrono> header.
Use it like so:
#include <chrono>
/* Only needed for the sake of this example. */
#include <iostream>
#include <thread>
void long_operation()
{
/* Simulating a long, heavy operation. */
using namespace std::chrono_literals;
std::this_thread::sleep_for(150ms);
}
int main()
{
using std::chrono::high_resolution_clock;
using std::chrono::duration_cast;
using std::chrono::duration;
using std::chrono::milliseconds;
auto t1 = high_resolution_clock::now();
long_operation();
auto t2 = high_resolution_clock::now();
/* Getting number of milliseconds as an integer. */
auto ms_int = duration_cast<milliseconds>(t2 - t1);
/* Getting number of milliseconds as a double. */
duration<double, std::milli> ms_double = t2 - t1;
std::cout << ms_int.count() << "ms\n";
std::cout << ms_double.count() << "ms\n";
return 0;
}
This will measure the duration of the function long_operation
.
Possible output:
150ms
150.068ms
Working example: https://godbolt.org/z/oe5cMd
Code didn't calculate the right execution time of a function in C
Actually, you are iterating the loop only a small number of times. Try iterating it a large number of times, e.g. 1e9 times; then you will get a noticeable time period. Modern processors are very fast, with clock frequencies of 2.9 GHz or more, which means they can execute on the order of 2 billion instructions per second when the work is available.
How to calculate the time taken to execute a C++ program, excluding time taken by user input?
A straightforward approach is to "freeze time" while user input is pending: instead of creating the end variable after the input lines, create it before them and restart the time measurement after the input:
double total = 0;
auto begin = chrono::high_resolution_clock::now();
// code that needs time calculation
auto end = chrono::high_resolution_clock::now();
total += chrono::duration_cast<chrono::duration<double>>(end - begin).count();
// your code here that has cin for input
begin = chrono::high_resolution_clock::now();
// code that needs time calculation
end = chrono::high_resolution_clock::now();
total += chrono::duration_cast<chrono::duration<double>>(end - begin).count();
cout << total << " seconds";