Why Is CLOCKS_PER_SEC Not the Actual Number of Clocks Per Second

Why is CLOCKS_PER_SEC not the actual number of clocks per second?

clock() returns the amount of CPU time spent in your program. There are (nominally) 1,000,000 clock ticks per second*. It appears that your program consumed about 60% of them.

Something else used the other 40%.

*More precisely, there are virtually 1,000,000 clock ticks per second: POSIX fixes CLOCKS_PER_SEC at 1,000,000 regardless of the real timer resolution, and the actual tick count is scaled so that your program perceives 1,000,000 ticks per second.
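As an added sketch (not part of the original answer), the difference between CPU time and wall time can be seen by contrasting a busy loop with a sleep: the busy second shows up in clock(), the sleeping second does not, so the program appears to have used roughly half of the available ticks.

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    clock_t c0 = clock();
    time_t  w0 = time(NULL);

    /* Burn CPU for roughly one second of wall time. */
    while (difftime(time(NULL), w0) < 1.0)
        ;

    /* Sleep for one second: wall time passes, CPU time barely moves. */
    sleep(1);

    double cpu  = (double)(clock() - c0) / CLOCKS_PER_SEC;
    double wall = difftime(time(NULL), w0);
    printf("CPU: %.2f s, wall: %.2f s, share: about %.0f%%\n",
           cpu, wall, wall > 0 ? 100.0 * cpu / wall : 0.0);
    return 0;
}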

std::clock() and CLOCKS_PER_SEC

Actually it is a combination of what has been posted. Because your program is running in a tight loop, CPU time should increase just as fast as wall-clock time.

But since your program is writing to stdout and the terminal has a limited buffer space, your program will block whenever that buffer is full, until the terminal has had enough time to print more of the generated output.

Printing, of course, is much more expensive in CPU terms than generating the strings from the clock values, so most of the CPU time is spent in the terminal and the graphics driver. It seems your system needs roughly 14 times as much CPU time to display the timestamps as it does to generate the strings in the first place.
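To see this effect, here is a rough sketch (mine, not from the answer) that formats the same number of timestamp strings twice, once without printing and once with printing to stdout. Keep in mind that time spent blocked waiting for the terminal is not counted by clock() at all, so the slowdown measured on the wall clock can be even larger.

#include <stdio.h>
#include <time.h>

int main(void)
{
    char buf[64];
    enum { N = 100000 };

    clock_t t0 = clock();
    for (int i = 0; i < N; i++)                      /* format only */
        snprintf(buf, sizeof buf, "tick %d: %ld\n", i, (long)clock());
    clock_t t1 = clock();

    for (int i = 0; i < N; i++)                      /* format and print */
        printf("tick %d: %ld\n", i, (long)clock());
    clock_t t2 = clock();

    fprintf(stderr, "format only: %.3f s CPU, format + print: %.3f s CPU\n",
            (double)(t1 - t0) / CLOCKS_PER_SEC,
            (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}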

C: Why does CLOCKS_PER_SEC print 1000 when my processor speed is 3.10 GHz?

Clock ticks are units of time of a constant but system-specific length, such as those returned by the clock() function.

It has nothing to do with the processor speed.
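A quick check (an added illustration, not from the answer): the length of one tick follows from CLOCKS_PER_SEC alone, so it is the same whether the CPU runs at 1 GHz or 3.10 GHz. On Windows this typically prints 1000 ticks per second (1 ms per tick); on POSIX systems it prints 1,000,000.

#include <stdio.h>
#include <time.h>

int main(void)
{
    printf("CLOCKS_PER_SEC   = %ld ticks per second\n", (long)CLOCKS_PER_SEC);
    printf("length of a tick = %g microseconds\n", 1e6 / CLOCKS_PER_SEC);
    return 0;
}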

Redefining CLOCKS_PER_SEC to a higher number in Windows 10

Short answer: no.

Long answer: no, but you can use the QueryPerformanceCounter function. Here's an example from MSDN:

LARGE_INTEGER StartingTime, EndingTime, ElapsedMicroseconds;
LARGE_INTEGER Frequency;

QueryPerformanceFrequency(&Frequency);
QueryPerformanceCounter(&StartingTime);

// Activity to be timed

QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;

//
// We now have the elapsed number of ticks, along with the
// number of ticks-per-second. We use these values
// to convert to the number of elapsed microseconds.
// To guard against loss-of-precision, we convert
// to microseconds *before* dividing by ticks-per-second.
//

ElapsedMicroseconds.QuadPart *= 1000000;
ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;

That way you can measure at sub-microsecond resolution, but beware: at that level of precision even the tick count can drift and jitter, so you may never get a perfectly accurate result. If you need truly deterministic timing, you will likely have to use an RTOS on appropriate, specialized hardware that is shielded against soft errors.
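For completeness, here is one way the snippet above can be wrapped into a self-contained program (the helper name elapsed_us is mine, not part of the Windows API, and the Sleep(100) call simply stands in for the activity to be timed):

#include <stdio.h>
#include <windows.h>

/* Illustrative helper (not a Windows API): elapsed microseconds between two
 * QueryPerformanceCounter readings. Multiply before dividing to avoid
 * losing precision. */
static long long elapsed_us(LARGE_INTEGER start, LARGE_INTEGER end,
                            LARGE_INTEGER freq)
{
    return (end.QuadPart - start.QuadPart) * 1000000LL / freq.QuadPart;
}

int main(void)
{
    LARGE_INTEGER freq, t0, t1;
    QueryPerformanceFrequency(&freq);

    QueryPerformanceCounter(&t0);
    Sleep(100);                          /* activity to be timed */
    QueryPerformanceCounter(&t1);

    printf("elapsed: %lld microseconds\n", elapsed_us(t0, t1, freq));
    return 0;
}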

Is the CLOCKS_PER_SEC value wrong inside a virtual machine

This is expected.

The clock() function does not return the wall time (the time that real clocks on the wall display). It returns the amount of CPU time used by your program. If your program is not consuming every possible scheduler slice, it will increase more slowly than wall time; if your program consumes slices on multiple cores at the same time, it can increase faster.

So if you call clock(), and then sleep(5), and then call clock() again, you'll find that clock() has barely increased at all. Even though sleep(5) waits for 5 real seconds, it doesn't consume any CPU, and CPU usage is what clock() measures.

If you want to measure wall clock time you will want clock_gettime() (or the older version gettimeofday()). You can use CLOCK_REALTIME if you want to know the civil time (e.g. "it's 3:36 PM") or CLOCK_MONOTONIC if you want to measure time intervals. In this case, you probably want CLOCK_MONOTONIC.

#include <stdio.h>
#include <time.h>

int main() {
    struct timespec start, now;
    clock_gettime(CLOCK_MONOTONIC, &start);
    while (1) {
        clock_gettime(CLOCK_MONOTONIC, &now);
        printf("Elapsed: %f\n",
               (now.tv_sec - start.tv_sec) +
               1e-9 * (now.tv_nsec - start.tv_nsec));
    }
}

The usual proscriptions against using busy-loops apply here.

CLOCKS_PER_SEC in the C language, found in the time.h library

Does CLOCKS_PER_SEC vary from system to system, is it constant for an operating system, or does it depend on the processor of that particular system?

CLOCKS_PER_SEC is ultimately determined by the compiler and its standard library implementation, not the OS, although the machine, the OS, and other factors influence what a given implementation provides.

Help me explain the output of my code... is it right?

No. printf("\nTotal time:%u",(double)(end - start)/CLOCKS_PER_SEC); uses "%u" to print a double, which is undefined behavior.

CLOCKS_PER_SEC is not necessarily an unsigned type.

clock_t is not necessarily an int.

Using the wrong printf() specifiers renders the output uninformative.

Tip: enable all compiler warnings.

Cast to a wide type and use a matching print specifier.

clock_t start;
// printf("Starting Time:%u\n",start);
printf("Starting Time:%g\n", (double) start);

// printf("CLOCKS_PER_SEC:%u",CLOCKS_PER_SEC);
printf("CLOCKS_PER_SEC:%g\n", (double) CLOCKS_PER_SEC);

// printf("\nTotal time:%u",(double)(end - start)/CLOCKS_PER_SEC);
printf("Total time:%g\n",(double)(end - start)/CLOCKS_PER_SEC);

Or even consider long double.

long double t = (long double)(end - start)/CLOCKS_PER_SEC;
printf("Total time:%Lg\n", t);

Can someone give me an explanation of this function?

It's just a busy-wait loop, which is a very nasty way of implementing a delay, because it pegs the CPU at 100% while doing nothing. Use sleep() instead:

#include <unistd.h>

void wait(int seconds)
{
    sleep(seconds);
}

Also note that the code given in the question is buggy:

while (((end-start) / CLOCKS_PER_SEC) = !seconds)

should be:

while (((end-start) / CLOCKS_PER_SEC) != seconds)

or better still:

while (((end-start) / CLOCKS_PER_SEC) < seconds)

(but as mentioned above, you shouldn't even be using this code anyway).
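If you need sub-second delays, a sketch along these lines (assuming a POSIX system; the name wait_ms is mine) also avoids busy-waiting:

#include <time.h>

/* Added sketch, assumes POSIX nanosleep(): delay without busy-waiting,
 * with millisecond resolution. Interruption by signals is ignored here
 * for brevity. */
void wait_ms(long milliseconds)
{
    struct timespec ts;
    ts.tv_sec  = milliseconds / 1000;
    ts.tv_nsec = (milliseconds % 1000) * 1000000L;
    nanosleep(&ts, NULL);
}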


