How to Get Current CPU and RAM Usage in Python

How to get current CPU and RAM usage in Python?

The psutil library gives you information about CPU, RAM, etc., on a variety of platforms:

psutil is a module providing an interface for retrieving information on running processes and system utilization (CPU, memory) in a portable way by using Python, implementing many functionalities offered by tools like ps, top and Windows task manager.

It currently supports Linux, Windows, macOS (OS X), Sun Solaris, FreeBSD, OpenBSD and NetBSD, both 32-bit and 64-bit architectures. (The Python-version range quoted here, 2.6 to 3.5, is dated: users of Python 2.4 and 2.5 can use the 2.1.3 release, and current psutil releases support modern Python 3.)


Some examples:

#!/usr/bin/env python
import psutil

# gives a single float value (system-wide CPU utilization percentage)
print(psutil.cpu_percent())

# gives a named tuple with many fields
print(psutil.virtual_memory())

# you can convert that object to a dictionary
print(dict(psutil.virtual_memory()._asdict()))

# the percentage of used RAM, e.g. 79.2
print(psutil.virtual_memory().percent)

# the percentage of available memory, e.g. 20.8
print(psutil.virtual_memory().available * 100 / psutil.virtual_memory().total)
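Two details worth knowing about cpu_percent(): it needs either a blocking sampling interval or a prior call to compare against, and it can report per-core figures. A short sketch, assuming only that psutil is installed:

```python
import psutil

# interval=1 blocks for one second so the sample is meaningful;
# with interval=None the first call returns 0.0 and later calls
# compare against the previous one.
cpu = psutil.cpu_percent(interval=1)

# percpu=True returns one float per logical core.
per_core = psutil.cpu_percent(interval=1, percpu=True)

mem = psutil.virtual_memory()
print(f"CPU: {cpu:.1f}% (per core: {per_core})")
print(f"RAM: {mem.percent:.1f}% of {mem.total / 1024**3:.1f} GiB used")
```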

The psutil documentation covers these and many more functions:

  • https://psutil.readthedocs.io/en/latest/

Getting total memory and CPU usage for one Python instance

psutil is a good recommendation for collecting that type of information. If you incorporate this code into your existing Keras code, you can collect information about the CPU usage of your process at the time the cpu_times() method is called:

import psutil

process = psutil.Process()
print(process.cpu_times())

The meaning of the values returned by cpu_times() is explained in the psutil documentation. They are cumulative, so if you want to know how much CPU time your Keras code used altogether, just call it again before you exit the Python script.
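Because the values are cumulative, you can subtract a snapshot taken before the workload from one taken after it. A sketch, with a stand-in loop in place of the actual training code:

```python
import psutil

process = psutil.Process()

# cpu_times() fields are cumulative seconds spent in user and
# system (kernel) mode since the process started.
before = process.cpu_times()

# Stand-in for the training code you actually want to measure.
total = sum(i * i for i in range(1_000_000))

after = process.cpu_times()
print(f"user CPU seconds:   {after.user - before.user:.3f}")
print(f"system CPU seconds: {after.system - before.system:.3f}")
```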

To get the memory usage information for your process at the particular time you make the call to memory_info(), you can run this on the same process object we declared before:

print(process.memory_info())

The exact meaning of the CPU and memory results depends on the platform you're using. The memory info structure is explained in the psutil documentation.
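In practice, the rss field (resident set size: the physical RAM the process currently occupies) is usually the number you want. A quick sketch converting it to MiB:

```python
import psutil

process = psutil.Process()
info = process.memory_info()

# rss (resident set size): physical RAM the process occupies.
# vms: total virtual address space reserved. Both are in bytes.
print(f"RSS: {info.rss / 1024**2:.1f} MiB")
print(f"VMS: {info.vms / 1024**2:.1f} MiB")
```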

A more comprehensive example shows how you could use the Advanced Python Scheduler (APScheduler) to take CPU and memory measurements in the background as you run your Keras training:

import time

import psutil
from apscheduler.schedulers.background import BackgroundScheduler

process = psutil.Process()

def get_info():
    print(process.cpu_times(), process.memory_info())

if __name__ == '__main__':
    scheduler = BackgroundScheduler()
    scheduler.add_job(get_info, 'interval', seconds=3)
    scheduler.start()

    # run the code you want to measure here
    # replace this nonsense loop
    now = time.time()
    finish = now + 60

    while time.time() < finish:
        print("Some progress message: {}".format(time.time()))
        time.sleep(10)

    scheduler.shutdown()
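If you'd rather not add a scheduler dependency, a plain daemon thread can do the same sampling. The sketch below records the peak RSS seen while a stand-in workload runs (the sample_peak_rss name and the list-allocation workload are illustrative):

```python
import threading
import time

import psutil


def sample_peak_rss(stop_event, result, interval=0.1):
    """Poll this process's RSS until stop_event is set, recording the peak."""
    process = psutil.Process()
    peak = process.memory_info().rss  # sample once up front
    while not stop_event.is_set():
        peak = max(peak, process.memory_info().rss)
        time.sleep(interval)
    result["peak_rss"] = peak


stop = threading.Event()
result = {}
sampler = threading.Thread(target=sample_peak_rss, args=(stop, result))
sampler.start()

# Stand-in workload; replace with your training loop.
data = [bytes(1024) for _ in range(10_000)]

stop.set()
sampler.join()
print(f"peak RSS: {result['peak_rss'] / 1024**2:.1f} MiB")
```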

How to measure execution stats of a Python script (CPU usage, RAM usage, disk usage, etc.)?

Using psutil, I made a helper metrics class, which you can see in this gist.
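The gist itself isn't reproduced here, but a minimal class along those lines might look like this (the Metrics name and its fields are illustrative, not the gist's actual API):

```python
import psutil


class Metrics:
    """Illustrative helper that snapshots CPU, RAM and disk usage."""

    def __init__(self, path="/"):
        self.path = path  # mount point to report disk usage for

    def snapshot(self):
        return {
            "cpu_percent": psutil.cpu_percent(interval=0.5),
            "ram_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage(self.path).percent,
        }


snap = Metrics().snapshot()
print(snap)
```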

Proper way to show a server's RAM, CPU and GPU usage in a React app

Explanation & Tips:

I know exactly what you're describing. I also made a mobile app using Flutter and Python, and I have been trying to get multiple servers to host the API instead of one. I personally think Node.js is worth checking out, since it allows clustering, which is extremely powerful.

If you want to stick with Python, the best way to get memory usage is psutil, like this: memory = psutil.virtual_memory().percent. For the CPU usage, though, you would have to do some sort of caching or multithreading, because you cannot get a meaningful CPU reading without a delay: cpu = psutil.cpu_percent(interval=1).

If you want your API to be fast, then a periodic polling approach is bad: it will slow down your server, and if you do anything wrong on the client side, you could end up DDoSing your own API, which is an embarrassing thing I did when I first published my app. The best approach is to call the API only when it is needed. Flutter, for example, has cached widgets, which were very useful because I only had to fetch that piece of data once every few hours.

Key Points:

  • Only call the API when it is crucial to do so.

  • psutil cannot return an instantaneous CPU reading: cpu_percent() either blocks for a sampling interval or compares against the previous call.

  • Node performed better than my Flask API (not FastAPI).

  • Use client-side caching if possible.
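The "caching or multithreading" idea from the answer can be sketched as a background sampler thread that keeps the latest reading in memory, so a request handler never blocks on the one-second interval (the refresh_forever name and the bare dict are illustrative, not a framework API):

```python
import threading
import time

import psutil

# Latest readings, refreshed in the background so request handlers
# never have to wait out the sampling interval themselves.
latest = {"cpu": 0.0, "ram": 0.0}


def refresh_forever(interval=1.0):
    while True:
        # cpu_percent(interval=...) blocks while it samples.
        latest["cpu"] = psutil.cpu_percent(interval=interval)
        latest["ram"] = psutil.virtual_memory().percent


threading.Thread(target=refresh_forever, daemon=True).start()

# A request handler (Flask, FastAPI, ...) would just read the dict.
time.sleep(1.5)  # let the sampler complete at least one cycle
print(latest)
```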


