Converting jiffies to milliseconds
As a previous answer said, the rate at which jiffies increments is fixed.
The standard way of specifying a jiffies count for a function is using the constant HZ. That's the abbreviation for Hertz, the number of ticks per second. On a system with a timer tick set to 1 ms, HZ=1000. Some distributions or architectures use another number (100 used to be common). For example:
schedule_timeout(HZ / 10); /* Timeout after 1/10 second */
In most simple cases, this works fine.
2*HZ /* 2 seconds in jiffies */
HZ /* 1 second in jiffies */
foo * HZ /* foo seconds in jiffies */
HZ/10 /* 100 milliseconds in jiffies */
HZ/100 /* 10 milliseconds in jiffies */
bar*HZ/1000 /* bar milliseconds in jiffies */
Those last two have a bit of a problem, however: on a system with a 10 ms timer tick, HZ/100 is 1, and the precision starts to suffer. You may get a delay anywhere between 0.0001 and 1.999 timer ticks (0-2 ms, essentially). If you tried to use HZ/200 on a 10 ms tick system, the integer division gives you 0 jiffies!
So the rule of thumb is: be very careful using HZ for tiny values, i.e. those approaching one jiffy.
To convert the other way, you would use:
jiffies / HZ /* jiffies to seconds */
jiffies * 1000 / HZ /* jiffies to milliseconds */
You shouldn't expect anything better than millisecond precision.
Converting jiffies to seconds
You divide it by the number you get from sysconf(_SC_CLK_TCK).
However, this is probably always 100 under Linux: regardless of the kernel's actual tick rate, it is always presented to userspace as 100.
See proc(5).
Converting clock time (real world time) to jiffies and vice versa
[This answer is for the Linux kernel since linux-kernel was tagged in the question.]
mktime64 has nothing to do with jiffies. It converts the date specified by its parameters to the number of seconds (ignoring leap seconds) since 1970-01-01 00:00:00 (Unix time since the epoch, if the parameters are given in GMT).
The returned time64_t value can be converted back to year, month, day, hours, minutes, and seconds using the kernel's time64_to_tm function, which has this prototype:
void time64_to_tm(time64_t totalsecs, int offset, struct tm *result);
The offset parameter is a local timezone offset in seconds (number of seconds east of GMT). It should be set to 0 to undo the conversion done by mktime64.
Note that the tm_year member is set to the calculated year minus 1900 and the tm_mon member is set to the calculated month minus 1, so you could implement an unmktime64 function as follows:
void unmktime64(time64_t totalsecs,
                int *year, unsigned int *month, unsigned int *day,
                unsigned int *hour, unsigned int *minute, unsigned int *second)
{
    struct tm tm;

    time64_to_tm(totalsecs, 0, &tm);
    *year   = tm.tm_year + 1900;
    *month  = tm.tm_mon + 1;
    *day    = tm.tm_mday;
    *hour   = tm.tm_hour;
    *minute = tm.tm_min;
    *second = tm.tm_sec;
}
Conversion of msec to jiffies
It seems your system HZ value is set to 100.
If you wish to suspend execution for a period at a resolution finer than the system tick, you need to use high-resolution timers (which work in nanoseconds, not jiffies), supported by your board and enabled in the kernel. See here for their interface: http://lwn.net/Articles/167897/
So either change the system HZ to 1000 to get a jiffy resolution of 1 msec, or use a high-resolution timer.
In what context is the jiffies counter updated?
jiffies is incremented from the timer interrupt, which is raised by the system timer. It is not updated by a softirq kthread.
On x86, the system timer is implemented via the programmable interval timer (PIT). PowerPC implements it via the decrementer.
From the description of your thread, it seems the thread is hogging the CPU, so the watchdog firing after its timeout is expected. On most systems a jiffy is 10 ms, but you can check by looking at HZ: HZ gives the number of timer interrupts per second, hence there are HZ jiffies in a second.
In your case, whenever you release the CPU, the watchdog thread gets a chance to run; it reads the current jiffies value and compares it against the value stored the last time it ran. If it finds the difference greater than or equal to the watchdog timeout, it fires and, if so configured, resets the system.