How to Allocate Array With Shape and Data Type

Unable to allocate array with shape and data type

This is likely due to your system's overcommit handling mode.

In the default mode, 0, the kernel documentation describes the behaviour as:

Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.

The exact heuristic is not well explained by that snippet, but it is discussed in more detail in the kernel's overcommit-accounting documentation.

You can check your current overcommit mode by running

$ cat /proc/sys/vm/overcommit_memory
0

In this case, you're allocating

>>> 156816 * 36 * 53806 / 1024.0**3
282.8939827680588

~283 GiB, and the kernel is saying, in effect, "there's obviously no way I'll be able to commit that many physical pages to this", so it refuses the allocation.

If (as root) you run:

$ echo 1 > /proc/sys/vm/overcommit_memory

This will enable the "always overcommit" mode, and you'll find that indeed the system will allow you to make the allocation no matter how large it is (within 64-bit memory addressing at least).

I tested this myself on a machine with 32 GB of RAM. With overcommit mode 0 I also got a MemoryError, but after changing the mode to 1 it works:

>>> import numpy as np
>>> a = np.zeros((156816, 36, 53806), dtype='uint8')
>>> a.nbytes
303755101056

You can then go ahead and write to any location within the array, and the system will only allocate physical pages when you explicitly write to that page. So you can use this, with care, for sparse arrays.
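As a quick illustration, here is a minimal sketch (assuming overcommit mode 1 is enabled and a 64-bit Python/NumPy) that allocates the same nominally huge array but writes to only a couple of elements, so only a handful of physical pages are ever faulted in:

import numpy as np

# ~283 GiB of virtual address space; no physical pages are committed yet
# (requires vm.overcommit_memory=1 on Linux).
a = np.zeros((156816, 36, 53806), dtype='uint8')

# Physical pages are allocated only for the pages these writes touch.
a[0, 0, 0] = 1
a[100000, 20, 40000] = 255

print(a.nbytes)               # 303755101056 -- the virtual size, not resident memory
print(a[100000, 20, 40000])   # 255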

MemoryError: Unable to allocate 30.4 GiB for an array with shape (725000, 277, 76) and data type float64

Taking into account the dimensions of the dataset you provided (725000 x 277 x 76) and its data type (float64, 8 bytes per element), you need (at minimum) around 114 GiB (roughly 122 GB) to hold the dataset in RAM.
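For reference, the arithmetic behind that figure:

# shape (725000, 277, 76), float64 = 8 bytes per element
n_bytes = 725000 * 277 * 76 * 8
print(n_bytes)               # 122101600000 bytes
print(n_bytes / 1024.0**3)   # ~113.7 GiB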

A solution to overcome this limitation is to: 1) read a portion of the dataset (e.g. a chunk of about 1 GB at a time) through a hyperslab selection and load it into memory, 2) process it, and 3) repeat (i.e. go back to step 1) until the dataset has been completely processed. This way, you will not run out of RAM.
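A minimal sketch of that chunked pattern using h5py, assuming (purely for illustration) that the data lives in an HDF5 file named data.h5 under a dataset called 'X':

import h5py

chunk_rows = 6000  # ~1 GB per chunk for a (725000, 277, 76) float64 dataset

with h5py.File('data.h5', 'r') as f:
    dset = f['X']                    # the on-disk dataset; nothing is loaded yet
    n_rows = dset.shape[0]
    for start in range(0, n_rows, chunk_rows):
        stop = min(start + chunk_rows, n_rows)
        chunk = dset[start:stop]     # hyperslab selection: only this slice is read into RAM
        # ... process `chunk` here, then move on to the next slice ...

Only one chunk is resident in memory at any time, so peak memory use is set by chunk_rows rather than by the size of the whole dataset.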

MemoryError: Unable to allocate MiB for an array with shape and data type, when using any model.fit() in sklearn

Upgrading to 64-bit Python seems to have solved the MemoryError problem. A 32-bit Python process can only address a few GB of memory no matter how much RAM is installed, so large array allocations can fail even on otherwise capable machines.
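If you are not sure which interpreter you are running, a quick check:

import struct, sys

print(struct.calcsize('P') * 8)   # 64 on a 64-bit Python, 32 on a 32-bit one
print(sys.maxsize > 2**32)        # True on a 64-bit Python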

MemoryError: unable to allocate array with shape and data type float32 while using word2vec in python

Ideally, you should paste the text of your error into your question, rather than a screenshot. However, I see the two key lines:

<TIMESTAMP> : INFO : estimated required memory for 2372206 words and 400 dimensions: 8777162200 bytes
...
MemoryError: unable to allocate array with shape (2372206, 400) and data type float32

After making one pass over your corpus, the model has learned how many unique words will survive, and it reports how large a model must be allocated: one taking about 8777162200 bytes (about 8.8 GB). But when trying to allocate the required vector array, you get a MemoryError, which indicates that not enough addressable memory (RAM) is available.
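For a rough sense of where those numbers come from (the exact gensim estimate also includes vocabulary overhead, so treat this as a back-of-the-envelope check):

words, dims = 2372206, 400
one_matrix = words * dims * 4        # float32 = 4 bytes; the array that failed to allocate
print(one_matrix)                    # 3795529600 bytes, ~3.5 GiB
print(8777162200 / 1024.0**3)        # the reported total estimate, ~8.2 GiB (~8.8 GB)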

You can either:

  1. run where there's more memory, perhaps by adding RAM to your existing system; or
  2. reduce the amount of memory required, chiefly by reducing either the number of unique word-vectors you'd like to train, or their dimensional size.

You could reduce the number of words by increasing the default min_count=5 parameter to something like min_count=10 or min_count=20 or min_count=50. (You probably don't need over 2 million word-vectors – many interesting results are possible with just a vocabulary of a few tens-of-thousands of words.)

You could also set a max_final_vocab value, to specify an exact number of unique words to keep. For example, max_final_vocab=500000 would keep just the 500000 most-frequent words, ignoring the rest.

Reducing the vector size will also save memory. A setting of size=300 is popular for word-vectors and, compared to your current 400 dimensions, would reduce the memory requirements by a quarter.

Together, size=300 and max_final_vocab=500000 should trim the required memory to under 2 GB.
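A sketch of how those settings fit together (parameter names as in gensim 3.x; in gensim 4.x the dimensionality parameter is called vector_size rather than size, and sentences here is a placeholder for your own iterable of tokenized texts):

from gensim.models import Word2Vec

model = Word2Vec(
    sentences,               # placeholder: your iterable of tokenized sentences
    size=300,                # vector dimensionality ('vector_size' in gensim 4.x)
    min_count=10,            # ignore words that appear fewer than 10 times
    max_final_vocab=500000,  # keep at most the 500000 most frequent words
)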

Unable to allocate memory with array shape to create reinforcement learning model

It looks like you simply don't have enough RAM to allocate 229 GiB for an array of that size, which is extremely large; very few computers could.

Have you tried batching your data into batches of 64, 128, 256, etc.? That is a very common way to decrease the memory load, and you can experiment with different values to see what your hardware can handle. TensorFlow has many built-in methods that can help here; one place to look would be the tf.data.Dataset.batch method.
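A minimal sketch using the tf.data API (the arrays here are random placeholders standing in for your own observations and targets):

import numpy as np
import tensorflow as tf

observations = np.random.rand(10000, 8).astype('float32')   # placeholder data
targets = np.random.rand(10000, 1).astype('float32')

dataset = (tf.data.Dataset.from_tensor_slices((observations, targets))
           .shuffle(buffer_size=1024)
           .batch(64))                # only 64 samples are materialised per step

for obs_batch, tgt_batch in dataset:
    pass  # feed each batch to your model or training step here

For data too large to hold in a NumPy array at all, tf.data.Dataset.from_generator lets you stream batches without ever materialising the full dataset.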


