Total Memory Used by Python Process

How do I find the total memory used by a Python process?

Here is a solution that works across operating systems, including Linux and Windows:

import os, psutil
process = psutil.Process(os.getpid())
print(process.memory_info().rss)  # resident set size (RSS), in bytes

Notes:

  • run pip install psutil if it is not installed yet

  • a handy one-liner if you quickly want to know how many MiB your process takes (a slightly more reusable helper sketch follows these notes):

    import os, psutil; print(psutil.Process(os.getpid()).memory_info().rss / 1024 ** 2)

  • with Python 2.7 and psutil 5.6.3 it was process.memory_info()[0] instead (the API changed in a later release)
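
For convenience, here is a small helper sketch that wraps the call above and reports both resident (rss) and virtual (vms) sizes; the name memory_usage_mib is purely illustrative:

import os
import psutil

def memory_usage_mib():
    """Return (rss, vms) of the current process in MiB."""
    info = psutil.Process(os.getpid()).memory_info()
    return info.rss / 1024 ** 2, info.vms / 1024 ** 2

rss, vms = memory_usage_mib()
print('RSS: %.1f MiB, VMS: %.1f MiB' % (rss, vms))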

How to measure total memory usage of all processes of a user in Python

It's pretty simple using psutil: iterate over all processes, select the ones owned by you, and sum the memory returned by memory_info().

import psutil
import getpass

user = getpass.getuser()

total = sum(p.memory_info().rss for p in psutil.process_iter()
            if p.username() == user)

print('Total memory usage in bytes:', total)
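
Note that iterating over every process can hit ones you are not allowed to inspect. A more defensive sketch (assuming psutil >= 5.3, where process_iter() accepts an attrs list and substitutes None for values it cannot read; note also that on Windows usernames may be domain-qualified):

import getpass
import psutil

user = getpass.getuser()

total = 0
for p in psutil.process_iter(attrs=['username', 'memory_info']):
    # p.info holds the pre-fetched attributes; a value is None if access was denied
    if p.info['username'] == user and p.info['memory_info'] is not None:
        total += p.info['memory_info'].rss

print('Total memory usage in bytes:', total)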

Find total memory used by Python process and all its children

You can use the result from psutil.Process.children() (or psutil.Process.get_children() for older psutil versions) to get all child processes and iterate over them.

It could then look like this:

import os
import psutil

current_process = psutil.Process(os.getpid())
mem = current_process.memory_percent()
for child in current_process.children(recursive=True):
    mem += child.memory_percent()

This sums the percentages of memory used by the main process, its children (forks), and, with recursive=True, any children's children. The children() function is documented in both the current psutil docs and the old docs.

If you use a psutil version older than 2.0, you have to use get_children() instead of children().
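
If you prefer absolute numbers over percentages, the same traversal can sum memory_info().rss instead; a sketch (guarding against children that exit mid-iteration):

import os
import psutil

parent = psutil.Process(os.getpid())
total_rss = parent.memory_info().rss
for child in parent.children(recursive=True):
    try:
        total_rss += child.memory_info().rss
    except psutil.NoSuchProcess:
        pass  # the child exited between listing and inspection

print('Parent + children RSS in bytes:', total_rss)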

How to see how much memory Python is using?

There is no built-in way to do this short of making an external system call to get back information about the current process's memory usage, such as reading the current process's entry under /proc (e.g. /proc/self/status) directly on Linux.
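
On Linux, for instance, you can parse the VmRSS field of /proc/self/status by hand; a minimal sketch with no third-party dependencies:

def rss_kib():
    """Return the current process's resident set size in kB (Linux only)."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])  # the value is reported in kB
    return None

print(rss_kib())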

If you are content with a Unix-only solution in the standard library that can return only the peak resident memory used, you are looking for resource.getrusage(resource.RUSAGE_SELF).ru_maxrss.

This function returns an object that describes the resources consumed by either the current process or its children...

>>> resource.getrusage(resource.RUSAGE_SELF)
resource.struct_rusage(ru_utime=0.058433,
ru_stime=0.021911999999999997, ru_maxrss=7600, ru_ixrss=0,
ru_idrss=0, ru_isrss=0, ru_minflt=2445, ru_majflt=1, ru_nswap=0,
ru_inblock=256, ru_oublock=0, ru_msgsnd=0, ru_msgrcv=0, ru_nsignals=0,
ru_nvcsw=148, ru_nivcsw=176)

This will not tell you how much memory is allocated between invocations, but it may be useful for tracking the growth in peak memory used over the lifetime of an application.
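
One caveat: the unit of ru_maxrss differs by platform (kilobytes on Linux, bytes on macOS), so a portable reading needs a platform check. A sketch:

import resource
import sys

peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
if sys.platform == 'darwin':
    peak_mib = peak / 1024 ** 2  # macOS reports bytes
else:
    peak_mib = peak / 1024       # Linux reports kilobytes
print('Peak RSS: %.1f MiB' % peak_mib)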

Some Python profilers, written in C to interface directly with CPython, are capable of retrieving information about the total memory used. One example is Heapy (part of the Guppy package), which also offers graphical plotting.
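
As a rough illustration of Heapy (installed via the guppy package, or guppy3 on Python 3), a session might look like this sketch:

from guppy import hpy

hp = hpy()
hp.setrelheap()  # only measure objects allocated from this point on
data = [list(range(100)) for _ in range(1000)]
print(hp.heap())  # prints a breakdown of live objects by type and size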

If you only want to track the memory consumed by new objects as they are created, you can use sys.getsizeof() on each new object to keep a running total of the space allocated.
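
For example, a crude running total might look like this sketch; keep in mind that sys.getsizeof() counts only the object itself, not anything it references:

import sys

allocated = 0
for n in range(1000):
    obj = 'x' * n
    allocated += sys.getsizeof(obj)  # shallow size of this object only

print('Approximate bytes allocated:', allocated)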

Tracking *maximum* memory usage by a Python function

This question seemed rather interesting, and it gave me a reason to look into Guppy / Heapy; for that, I thank you.

I tried for about two hours to get Heapy to monitor a function call / process without modifying its source, with zero luck.

I did find a way to accomplish your task using the built-in Python library resource. Note that the documentation does not indicate what unit the ru_maxrss value is returned in. Another SO user noted that it was in kB. Running Mac OS X 10.7.3 and watching my system resources climb during the test code below, I believe the returned values to be in bytes, not kilobytes.

At a 10,000-foot view, I used the resource library to monitor the library call by launching the function in a separate (monitorable) thread and tracking the system resources for the process from the main thread. Below are the two files you need to run to test it out.

Library Resource Monitor - whatever_you_want.py

import resource
import time

from stoppable_thread import StoppableThread


class MyLibrarySniffingClass(StoppableThread):
    def __init__(self, target_lib_call, arg1, arg2):
        super(MyLibrarySniffingClass, self).__init__()
        self.target_function = target_lib_call
        self.arg1 = arg1
        self.arg2 = arg2
        self.results = None

    def startup(self):
        # Overload the startup function
        print("Calling the Target Library Function...")

    def cleanup(self):
        # Overload the cleanup function
        print("Library Call Complete")

    def mainloop(self):
        # Start the library call
        self.results = self.target_function(self.arg1, self.arg2)

        # Kill the thread when complete
        self.stop()


def SomeLongRunningLibraryCall(arg1, arg2):
    max_dict_entries = 2500
    delay_per_entry = .005

    some_large_dictionary = {}
    dict_entry_count = 0

    while True:
        time.sleep(delay_per_entry)
        dict_entry_count += 1
        # list() forces real allocations; a bare range object stays tiny
        some_large_dictionary[dict_entry_count] = list(range(10000))

        if len(some_large_dictionary) > max_dict_entries:
            break

    print(arg1 + " " + arg2)
    return "Good Bye World"


if __name__ == "__main__":
    # Lib Testing Code
    mythread = MyLibrarySniffingClass(SomeLongRunningLibraryCall, "Hello", "World")
    mythread.start()

    start_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    delta_mem = 0
    max_memory = 0
    memory_usage_refresh = .005  # seconds

    while True:
        time.sleep(memory_usage_refresh)
        delta_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss - start_mem
        if delta_mem > max_memory:
            max_memory = delta_mem

        # Uncomment this line to see the memory usage during run-time
        # print("Memory Usage During Call: %d MB" % (delta_mem / 1000000.0))

        # Check to see if the library call is complete
        if mythread.isShutdown():
            print(mythread.results)
            break

    # ru_maxrss is in bytes on macOS (kilobytes on Linux), hence the 1e6 divisor
    print("\nMAX Memory Usage in MB: " + str(round(max_memory / 1000000.0, 3)))

Stoppable Thread - stoppable_thread.py

import threading


class StoppableThread(threading.Thread):
    def __init__(self):
        super(StoppableThread, self).__init__()
        self.daemon = True
        self.__monitor = threading.Event()
        self.__monitor.set()
        self.__has_shutdown = False

    def run(self):
        '''Overloads threading.Thread.run'''
        # Call the user's startup function
        self.startup()

        # Loop until the thread is stopped
        while self.isRunning():
            self.mainloop()

        # Clean up
        self.cleanup()

        # Flag to the outside world that the thread has exited
        # AND that the cleanup is complete
        self.__has_shutdown = True

    def stop(self):
        self.__monitor.clear()

    def isRunning(self):
        return self.__monitor.is_set()

    def isShutdown(self):
        return self.__has_shutdown

    ###############################
    ### User Defined Functions ####
    ###############################

    def mainloop(self):
        '''
        Expected to be overridden in a subclass!!
        Note that the stoppable while-loop is handled in the built-in "run".
        '''
        pass

    def startup(self):
        '''Expected to be overridden in a subclass!!'''
        pass

    def cleanup(self):
        '''Expected to be overridden in a subclass!!'''
        pass

