How to Obtain the Number of Cpus/Cores in Linux from the Command Line

How to obtain the number of CPUs/cores in Linux from the command line?

grep -c ^processor /proc/cpuinfo

will count the number of lines starting with "processor" in /proc/cpuinfo

For systems with hyper-threading, you can use

grep '^cpu cores' /proc/cpuinfo | uniq | awk '{print $4}'

which should return, for example, 8 physical cores (whereas the command above would return 16 logical processors on a hyper-threaded system).
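The same numbers can be cross-checked with getconf (POSIX) and lscpu (util-linux), which prints the topology as sockets x cores-per-socket x threads-per-core:

```sh
# Logical CPUs currently online (counts hyper-thread siblings)
getconf _NPROCESSORS_ONLN

# Topology summary: total CPUs, threads per core, cores per socket, sockets
lscpu | grep -E '^CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'
```
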

Number of processors/cores in command line

nproc is what you are looking for.

More here: http://www.cyberciti.biz/faq/linux-get-number-of-cpus-core-command/
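Note that nproc reports the CPUs usable by the calling process, which can be fewer than the machine total; coreutils also provides an --all flag:

```sh
nproc        # CPUs available to the current process (honours taskset/cgroup limits)
nproc --all  # all installed CPUs, ignoring affinity restrictions
```
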

How to get the number of CPUs in Linux using C?

#include <stdio.h>
#include <sys/sysinfo.h>

int main(int argc, char *argv[])
{
    printf("This system has %d processors configured and "
           "%d processors available.\n",
           get_nprocs_conf(), get_nprocs());
    return 0;
}

https://linux.die.net/man/3/get_nprocs

Find out the number of CPU cores used by a linux job

You can use ps -aF, which shows the processor each process last ran on in its PSR column; or perhaps you are thinking of htop, which is an interactive terminal tool.
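If you only care about one process, you can ask ps for just those columns; here the shell's own PID ($$) stands in for the job you are inspecting:

```sh
# PSR = the processor the task last ran on
ps -o pid,psr,comm -p $$
```
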

How to find out the number of CPUs using python

If you're interested in the number of processors available to your current process, you have to check cpuset first. Otherwise (or if cpuset is not in use), multiprocessing.cpu_count() is the way to go in Python 2.6 and newer. The following method falls back to a couple of alternative methods in older versions of Python:

import os
import re
import subprocess


def available_cpu_count():
    """ Number of available virtual or physical CPUs on this system, i.e.
    user/real as output by time(1) when called with an optimally scaling
    userspace-only program"""

    # cpuset
    # cpuset may restrict the number of *available* processors
    try:
        m = re.search(r'(?m)^Cpus_allowed:\s*(.*)$',
                      open('/proc/self/status').read())
        if m:
            res = bin(int(m.group(1).replace(',', ''), 16)).count('1')
            if res > 0:
                return res
    except IOError:
        pass

    # Python 2.6+
    try:
        import multiprocessing
        return multiprocessing.cpu_count()
    except (ImportError, NotImplementedError):
        pass

    # https://github.com/giampaolo/psutil
    try:
        import psutil
        return psutil.cpu_count()   # psutil.NUM_CPUS on old versions
    except (ImportError, AttributeError):
        pass

    # POSIX
    try:
        res = int(os.sysconf('SC_NPROCESSORS_ONLN'))

        if res > 0:
            return res
    except (AttributeError, ValueError):
        pass

    # Windows
    try:
        res = int(os.environ['NUMBER_OF_PROCESSORS'])

        if res > 0:
            return res
    except (KeyError, ValueError):
        pass

    # jython
    try:
        from java.lang import Runtime
        runtime = Runtime.getRuntime()
        res = runtime.availableProcessors()
        if res > 0:
            return res
    except ImportError:
        pass

    # BSD
    try:
        sysctl = subprocess.Popen(['sysctl', '-n', 'hw.ncpu'],
                                  stdout=subprocess.PIPE)
        scStdout = sysctl.communicate()[0]
        res = int(scStdout)

        if res > 0:
            return res
    except (OSError, ValueError):
        pass

    # Linux
    try:
        res = open('/proc/cpuinfo').read().count('processor\t:')

        if res > 0:
            return res
    except IOError:
        pass

    # Solaris
    try:
        pseudoDevices = os.listdir('/devices/pseudo/')
        res = 0
        for pd in pseudoDevices:
            if re.match(r'^cpuid@[0-9]+$', pd):
                res += 1

        if res > 0:
            return res
    except OSError:
        pass

    # Other UNIXes (heuristic)
    try:
        try:
            dmesg = open('/var/run/dmesg.boot').read()
        except IOError:
            dmesgProcess = subprocess.Popen(['dmesg'], stdout=subprocess.PIPE)
            dmesg = dmesgProcess.communicate()[0]

        res = 0
        while '\ncpu' + str(res) + ':' in dmesg:
            res += 1

        if res > 0:
            return res
    except OSError:
        pass

    raise Exception('Can not determine number of CPUs on this system')

How to know the number of CPUs and the cores available for each of them?

If you do not need to find this programmatically, you can do:

grep processor /proc/cpuinfo | wc -l

That will give you the number of cores available to the kernel, including virtual ones created by hyper-threading. To exclude such virtual cores:

grep 'core id' /proc/cpuinfo | sort | uniq | wc -l
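Note that on multi-socket machines the core id values repeat in every socket, so counting unique core ids alone undercounts. A sketch that pairs each core id with its physical id (assuming the x86 /proc/cpuinfo layout, where the two fields appear on consecutive lines per processor) is:

```sh
# Physical cores across all sockets = unique (physical id, core id) pairs
grep -E '^(physical id|core id)' /proc/cpuinfo | paste - - | sort -u | wc -l
```
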

In programs, you will find that the standard libraries, if they have threading support at all, provide means to get the number of hardware threads (cores). In C++, you would use std::thread::hardware_concurrency(), in python there is multiprocessing.cpu_count().
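From the command line, the Python calls can be invoked directly as one-liners (the C++ call needs a compiled program):

```sh
# Total logical CPUs, as reported by the standard library
python3 -c 'import multiprocessing; print(multiprocessing.cpu_count())'

# Python 3.3+ on Linux: only the CPUs the current process may actually run on
python3 -c 'import os; print(len(os.sched_getaffinity(0)))'
```
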

However, depending on the "virtualisation" in place at your provider, the number of cores available to the kernel might be higher than the number of cores available to your virtual private system. You have to check your contract to see how many cores they guarantee you. If they make no statement on that, you have to benchmark, although the result probably depends on the time of day (and on the load produced by others with whom you share the same hardware).

Find Number of CPUs and Cores per CPU using Command Prompt

Based upon your comments, your PATH statement has been changed or is incorrect, or the PATH variable is being misused for another purpose.

Force Linux to schedule processes on CPU cores that share CPU cache

Newer Linux may do this for you: Cluster-Aware Scheduling Lands In Linux 5.16 - there's support for scheduling decisions to be influenced by the fact that some cores share resources.

If you manually pick a CCX, you could give them each the same affinity mask that allows them to schedule on any of the cores in that CCX.

An affinity mask can have multiple bits set.
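From the shell, such a mask can be set with taskset (util-linux). A minimal sketch, where cores 0-7 are a hypothetical CCX; the real grouping must be read from your topology (e.g. lscpu --extended):

```sh
# Pin a stand-in workload to cores 0-7; child processes inherit the mask
taskset -c 0-7 sleep 1 &

# Inspect (or, with a range argument, change) the mask of a running process
taskset -pc $!
```
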


I don't know of a way to let the kernel decide which CCX, but then schedule both tasks to cores within it. If the parent checks which core it's currently running on, it could set a mask to include all cores in the CCX containing it, assuming you have a way to detect how core #s are grouped, and a function to apply that.

You'd want to be careful not to leave some CCXs totally unused if you start multiple processes that each do this, though. Maybe every second, do whatever top or htop do to check per-core utilization, and rebalance if the load is skewed? (I.e. change the affinity mask of both processes to the cores of a different CCX.) Or maybe put this functionality outside the processes being scheduled, so there's one "master control program" that looks at (and possibly modifies) affinity masks for a set of tasks that it should control. (Not all tasks on the system; that would be a waste of work.)

Or if it's looking at everything, it doesn't need to do so much checking of current load average, just count what's scheduled where. (And assume that tasks it doesn't know about can pick any free cores on any CCX, like daemons or the occasional compile job. Or at least compete fairly if all cores are busy with jobs it's managing.)


Obviously this is not helpful for most parent/child processes, only ones that do a lot of communication via shared memory (or maybe pipes, since kernel pipe buffers are effectively shared memory).

It is true that Zen CPUs have varying inter-core latency within / across CCXs, as well as just cache hit effects from sharing L3. https://www.anandtech.com/show/16529/amd-epyc-milan-review/4 did some microbenchmarking on Zen 3 vs. 2-socket Xeon Platinum vs. 2-socket ARM Ampere.


