Why Does "While(True)" Without "Thread.Sleep" Cause 100% CPU Usage on Linux But Not on Windows

By default, top on Linux runs in so-called IRIX mode, while the Windows Task Manager does not. Let's say you have 4 cores:

  • With IRIX mode on, 1 fully utilized core is 100% and 4 cores are 400%.

  • With IRIX mode off, 1 fully utilized core is 25% and 4 cores are 100%.

This means that, by default, top on Linux will show the infinite loop as ~100% while the Windows Task Manager shows it as ~25% on a 4-core machine, even though both numbers describe exactly the same thing.

You can toggle IRIX mode while top is running with Shift+i. This will make the numbers match up.
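To see the two displays side by side, you can run a minimal busy loop; the sketch below is illustrative (the class name is not from any of the questions). Start it, watch it in top while toggling IRIX mode, and compare with Task Manager on Windows:

    public class BusyLoop {
        public static void main(String[] args) {
            // Spin forever without sleeping: one core stays fully busy.
            // top with IRIX mode on reports ~100%; on a 4-core machine,
            // IRIX mode off (and Task Manager) reports ~25%.
            while (true) {
            }
        }
    }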

Why does this_thread::sleep_for not reduce the CPU usage of a while loop?

I figured out the reason: during the logic of the code ("// do stuff") there was a continue statement. The continue statement caused the thread to skip over the sleep call, so the loop ran continuously. I moved the sleep to the top of the while loop, and the CPU usage went from 99% to 0.1%.
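The question itself concerned C++'s this_thread::sleep_for, but the pitfall is language-agnostic. Here is a hypothetical Java sketch of the same bug (none of these names come from the original code):

    import java.util.ArrayDeque;
    import java.util.Queue;

    public class SkippedSleep {
        public static void main(String[] args) throws InterruptedException {
            Queue<String> work = new ArrayDeque<>();
            while (true) {
                // FIX: moving Thread.sleep(100) up here stops the spinning.
                if (work.isEmpty()) {
                    continue; // BUG: skips the sleep below, so the idle
                              // path loops flat out and pins a core
                }
                process(work.poll());
                Thread.sleep(100);
            }
        }

        static void process(String item) {
            // do stuff
        }
    }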

Why does Thread.Sleep( ... ) keep a heavy multithreaded Windows service from using 100% CPU?

When looking at CPU usage in Task Manager, what you're actually looking at is the percentage of non-idle instructions executed over a given sampling interval. A given CPU core can't be 34% on: either it's doing something or it isn't, but it can switch back and forth literally billions of times per second. By inserting Sleep calls, the CPU ends up spending a lot more time doing nothing, even though it has work it could be doing. When the time spent doing nothing is averaged together with the time spent doing something, the result is less than 100% usage. If you slept for longer periods but did so less often, you'd see the usage jumping between 0% and 100% (which is what's really happening; you just can't visualize it at the current granularity).
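To make the averaging effect concrete, here is a small Java sketch (the timings are arbitrary) that busy-works for about 50 ms and then sleeps for about 50 ms. Sampled over a longer interval it reads as roughly 50% of one core, even though at any instant the core is either fully busy or fully idle:

    public class DutyCycle {
        public static void main(String[] args) throws InterruptedException {
            while (true) {
                long end = System.nanoTime() + 50_000_000L; // ~50 ms of work
                while (System.nanoTime() < end) {
                    // busy-spin: the core is 100% "on" in this window
                }
                Thread.sleep(50); // ~50 ms of nothing: the core idles
            }
        }
    }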

Why will while true use 100% of CPU resources?

If your CPU is not at 100% usage, a process can use as much of it as it wants (up to 100%) until other processes request the resource. The process scheduler tries to maximize CPU usage and will never leave a process starved of CPU time while no other process needs it.

So your while loop will use 100% of the available idle CPU, and will only begin to use less once other CPU-intensive processes start up. (On Linux/Unix you can observe this with top: start your while loop, then start another CPU-intensive process and watch the %CPU drop for the process running the loop.)
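One way to watch the scheduler split the resource (a sketch assuming a Linux machine with taskset installed, reusing the hypothetical BusyLoop class from the first answer): pin two copies to the same core by running taskset -c 0 java BusyLoop in two terminals, and top will show each process dropping to roughly 50% of that core.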

Why did switching from an infinite loop to TimerTask cause this drop in CPU usage?

Three possibilities I can think of:

  • You have a huge number of threads doing this, and they're context switching all the time. Using a timer will mean there's only one thread instead. On the other hand, that means you will only get one task executing at a time.
  • You have a continue; statement somewhere in your loop before the sleep, so even if the main body of work of the loop isn't executing very frequently, something is. It's hard to say without seeing some more concrete code though.
  • You have a broken JVM/OS combination. This seems pretty unlikely, admittedly.

A simple loop just executing Thread.sleep(1000) repeatedly should be very cheap - and that should be easy for you to verify, too.
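For reference, here is a minimal sketch of the TimerTask approach (the task body and the one-second period are placeholders). java.util.Timer runs all of its scheduled tasks on a single background thread, which blocks between runs and consumes essentially no CPU:

    import java.util.Timer;
    import java.util.TimerTask;

    public class TimerDemo {
        public static void main(String[] args) {
            Timer timer = new Timer("worker");
            timer.scheduleAtFixedRate(new TimerTask() {
                @Override
                public void run() {
                    doStuff(); // placeholder for the real work
                }
            }, 0, 1000); // start now, repeat every 1000 ms
        }

        static void doStuff() {
        }
    }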

Java Memory Leak (Loop Traversing)

You can use VisualVM to monitor your application while it is running.

Create a heap dump and use the Eclipse Memory Analyzer (MAT) to study it.

You should be able to get more details about the leak with these tools.
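If you prefer to trigger the dump from inside the application rather than from VisualVM, HotSpot JVMs expose a diagnostic MXBean for it. A minimal sketch (the output file name is just an example, and the API is HotSpot-specific):

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDump {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean diag = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // true = dump only live (reachable) objects, which is what
            // you want when hunting a leak in Eclipse MAT
            diag.dumpHeap("heap.hprof", true);
        }
    }

The same dump can also be taken externally with jmap -dump:live,format=b,file=heap.hprof <pid>.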


