How to Prevent TensorFlow from Allocating the Totality of a GPU Memory

How to prevent tensorflow from allocating the totality of a GPU memory?

You can set the fraction of GPU memory to be allocated when you construct a tf.Session by passing a tf.GPUOptions as part of the optional config argument:

import tensorflow as tf

# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)

sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

The per_process_gpu_memory_fraction acts as a hard upper bound on the amount of GPU memory that will be used by the process on each GPU on the same machine. Currently, this fraction is applied uniformly to all of the GPUs on the same machine; there is no way to set this on a per-GPU basis.

Why are TensorFlow modules taking up all the GPU memory?

Further discussion can be found in the official guide at https://www.tensorflow.org/guide/gpu; it is worth reading.

How can I solve 'ran out of gpu memory' in TensorFlow

It's not about that. First of all, you can see how much memory TensorFlow actually uses at runtime by monitoring your GPU; for example, with an NVIDIA GPU you can run the watch -n 1 nvidia-smi command.
But in most cases, if you haven't set a maximum fraction of GPU memory, TensorFlow allocates almost all of the free memory, so the real problem is that your GPU does not have enough memory. CNNs are quite heavy. When you feed your network, DO NOT pass in your whole dataset at once; feed it in small batches instead, as in the sketch below.
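
Here is a minimal sketch of batched training with Keras; the model, the dummy data, and the batch size of 32 are placeholders (not from the original question). The point is that only one batch at a time has to fit on the GPU:

import numpy as np
import tensorflow as tf

# Dummy stand-in data; replace with your own dataset.
x_train = np.random.rand(1000, 32, 32, 3).astype('float32')
y_train = np.random.randint(0, 10, size=1000)

# A tiny CNN purely for illustration.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# batch_size controls how much data is on the GPU per training step;
# lower it further if you still run out of memory.
model.fit(x_train, y_train, batch_size=32, epochs=1)

If memory is still tight, reduce batch_size (e.g. to 16 or 8) before shrinking the model.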

GPU memory nearly full after defining tf.distribute.MirroredStrategy?

By default, TensorFlow will map almost all of your GPU memory (see the official guide at https://www.tensorflow.org/guide/gpu). This is done for performance reasons: reserving the memory up front reduces memory fragmentation and avoids the latency that growing the allocation on demand would cause.

You can try using tf.config.experimental.set_memory_growth to prevent it from immediately filling up all of the memory; a short sketch follows. There are also some good explanations in this StackOverflow post.
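
A minimal sketch for TF 2.x (memory growth has to be set before any GPUs are initialized, so run this before building models or creating tensors):

import tensorflow as tf

# Ask TensorFlow to grow GPU memory on demand instead of
# reserving (almost) all of it at startup.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)

With this set, nvidia-smi should show the process starting with a small allocation that grows only as the model actually needs memory.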



