How to Increase Jupyter Notebook Memory Limit

How to increase the Jupyter Notebook memory limit?

Jupyter Notebook has a default memory (buffer size) limit. You can increase it by following these steps:

1) Generate Config file using command:

jupyter notebook --generate-config
2) Open the jupyter_notebook_config.py file located in the ~/.jupyter folder and edit the following property (a sketch of the edited line follows these steps):

c.NotebookApp.max_buffer_size = your_desired_value
Remember to uncomment the line by removing the '#' in front of it.

3) Save the file and run Jupyter Notebook again.
It should now use the configured memory value.
Also, don't forget to run the notebook from inside the jupyter folder.
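For illustration, the edited part of jupyter_notebook_config.py might look like the sketch below; the value is in bytes, and 10000000000 (roughly 10 GB) is only an example, not a recommendation:

# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.max_buffer_size = 10000000000  # desired limit in bytes (example value)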


Alternatively, you can simply run the Notebook using the command below:

 jupyter notebook --NotebookApp.max_buffer_size=your_value

Is there any way to increase the memory assigned to Jupyter Notebook?

Yes, you can use the following command after activating your environment:

jupyter notebook --NotebookApp.iopub_data_rate_limit=1e10

If you need a larger or smaller limit, change 1e10; the default is 1e6. Note that this setting raises the IOPub data rate limit (how much output the kernel may stream to the browser) rather than the memory available to the kernel process itself.
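If you prefer, the same setting can also go into the config file instead of the command line; a minimal sketch, assuming the config file generated in step 1 above:

# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.iopub_data_rate_limit = 1e10  # bytes/sec allowed on the IOPub (output) channel; example value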

Jupyter Notebook Memory Management

There is one basic drawback that you should be aware of: the CPython interpreter can barely free memory and return it to the OS. For most workloads, you can assume that memory is not returned to the operating system during the lifetime of the interpreter's process, although the interpreter can re-use that memory internally. So looking at the memory consumption of the CPython process from the operating system's perspective does not really help. A rather common work-around is to run memory-intensive jobs in a sub-process / worker process (via multiprocessing, for instance) and "only" return the result to the main process. Once the worker exits, the memory is actually freed.
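Here is a minimal sketch of that work-around; heavy_computation is a hypothetical stand-in for whatever memory-hungry job you actually run:

import multiprocessing as mp
import numpy as np

def heavy_computation(n):
    # hypothetical memory-hungry job: the big temporary array lives only in the worker
    data = np.random.rand(n, n)
    return float(data.sum())  # only the small result is sent back to the parent

if __name__ == "__main__":
    with mp.Pool(processes=1) as pool:
        result = pool.apply(heavy_computation, (2000,))
    # the worker has exited by now, so its memory has been returned to the OS
    print(result)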

Second, using sys.getsizeof on ndarrays can be impressively misleading. Use the ndarray.nbytes property instead and be aware that this may also be misleading when dealing with views.
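A quick sketch of the difference (the exact size of the object header varies by platform and NumPy version):

import sys
import numpy as np

a = np.zeros((1000, 1000))   # 8,000,000 bytes of float64 data
view = a[::2, ::2]           # a view: shares a's buffer, allocates no new data

print(a.nbytes)              # 8000000
print(view.nbytes)           # 2000000, although no extra memory was allocated
print(sys.getsizeof(view))   # only the small ndarray header, not the shared buffer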

Besides, I am not entirely sure why you "pickle" numpy arrays. There are better tools for this job. Just to name two: h5py (a classic, built on HDF5) and zarr. Both libraries allow you to work with ndarray-like objects directly on disk (with compression), essentially eliminating the pickling step. In addition, zarr also allows you to create compressed, ndarray-compatible data structures in memory. Most ufuncs from numpy, scipy & friends will happily accept them as input parameters.
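A minimal h5py sketch, assuming an example file name ("data.h5") and dataset name ("my_array"):

import h5py
import numpy as np

arr = np.random.rand(1000, 1000)

# write the array to disk with compression instead of pickling it
with h5py.File("data.h5", "w") as f:
    f.create_dataset("my_array", data=arr, compression="gzip")

# later: read only the slice you need, without loading the whole array
with h5py.File("data.h5", "r") as f:
    chunk = f["my_array"][:100, :100]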

Memory limit in jupyter notebook

Under Linux you can use "cgroups" to limit resources for any software running on your computer.

Install cgroup-tools with apt-get install cgroup-tools

Edit its configuration /etc/cgconfig.conf to make a profile for the particular type of work (e.g. numerical scientific computations):

group app/numwork {
    memory {
        memory.limit_in_bytes = 500000000;
    }
}

Apply that configuration to the process names you care about by listing them in /etc/cgrules.conf (in my case it is all julia executables, which I run through jupyter, but you can use it for any other software too):

*:julia  memory  app/numwork/

Finally, parse the config and set it as the current active config with the following commands:

~# cgconfigparser -l /etc/cgconfig.conf
~# cgrulesengd

I use this to set limits on processes running on a server that is used by my whole class of students.

This page has some more details and other ways to use cgroups: https://wiki.archlinux.org/index.php/cgroups


