How to run TensorFlow on CPU

You can apply the device_count parameter per tf.Session:

import tensorflow as tf

config = tf.ConfigProto(
    device_count={'GPU': 0}  # expose zero GPUs to this session
)
sess = tf.Session(config=config)

See also the protobuf config file:

tensorflow/core/framework/config.proto
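
If you are on TensorFlow 2.x, where tf.Session and tf.ConfigProto live under tf.compat.v1, a roughly equivalent approach (a sketch, not part of the original answer) is to hide the GPUs through tf.config:

import tensorflow as tf

# Hide every GPU before running any op; TensorFlow then places everything on the CPU.
tf.config.set_visible_devices([], 'GPU')

print(tf.config.list_logical_devices())  # only CPU devices should be listed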

Run Tensorflow on CPU only

After a lot of crying, I realized that I could use a Docker image (the default tensorflow/tensorflow image is CPU-only; GPU support lives in the separate -gpu tags).

Python how to use tensorflow-cpu

Regardless of the TensorFlow flavor you install (CPU-only or GPU), you always import it simply as tensorflow, as shown below:

import tensorflow as tf
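
As a quick sanity check (a sketch, assuming TensorFlow 2.x with the tensorflow-cpu package installed), you can confirm that the import works and that no GPUs are visible:

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices('GPU'))  # expected: [] with the CPU-only package
print(tf.config.list_physical_devices('CPU'))  # e.g. [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU')]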

Tensorflow 2: how to switch execution from GPU to CPU and back?

You can use tf.device to explicitly set which device you want to use. For example:

import tensorflow as tf

model = tf.keras.Model(...)

# Run training on GPU
with tf.device('/gpu:0'):
    model.fit(...)

# Run inference on CPU
with tf.device('/cpu:0'):
    model.predict(...)

If you only have one CPU and one GPU, the names used above should work. Otherwise, device_lib.list_local_devices() can give you a list of your devices. This post gives a nice function for listing just the names, which I adapt here to also show CPUs:

from tensorflow.python.client import device_lib

def get_available_devices():
    local_device_protos = device_lib.list_local_devices()
    return [x.name for x in local_device_protos
            if x.device_type == 'GPU' or x.device_type == 'CPU']
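
A possible way to combine the helper above with tf.device (a sketch; the exact device names depend on your machine):

import tensorflow as tf

devices = get_available_devices()
print(devices)  # e.g. ['/device:CPU:0', '/device:GPU:0']

# Pin a small computation to the CPU regardless of whether a GPU is present.
with tf.device('/cpu:0'):
    x = tf.random.uniform((1000, 1000))
    y = tf.matmul(x, x)
print(y.device)  # e.g. '/job:localhost/replica:0/task:0/device:CPU:0'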

How can I run tensorflow without GPU?

It should work. Setting CUDA_VISIBLE_DEVICES to -1 hides the CUDA devices from TensorFlow, so it falls back to running everything on the CPU.

import os

# Hide the CUDA devices *before* importing TensorFlow.
os.environ['CUDA_VISIBLE_DEVICES'] = '-1'
# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # uncomment this too if the line above alone doesn't work

import tensorflow as tf
# Your code here
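
To double-check that the GPU really is hidden (a sketch continuing the snippet above, assuming TensorFlow 2.x):

print(tf.config.list_physical_devices('GPU'))  # expected: [] once CUDA_VISIBLE_DEVICES is '-1'

x = tf.constant([1.0, 2.0, 3.0])
print(x.device)  # ops are placed on the CPU, e.g. '.../device:CPU:0'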

Run TensorFlow 2.0 on CPU without AVX

There is a brand-new wheel file in the repository:

https://github.com/fo40225/tensorflow-windows-wheel

The following file works very well:

https://github.com/fo40225/tensorflow-windows-wheel/blob/master/2.0.0/py37/GPU/cuda101cudnn76sse2/tensorflow_gpu-2.0.0-cp37-cp37m-win_amd64.whl

As stated in the Readme.md:

"It will take time for compiling when execute TensorFlow first time."

Take a look at this test:

>>> import tensorflow as tf
tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll

>>> print(tf.__version__)
2.0.0

>>> from tensorflow.python.client import device_lib
>>> print(device_lib.list_local_devices())

tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
tensorflow/core/common_runtime/gpu/gpu_device.cc:1618] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.531
GPU libraries are statically linked, skip dlopen check.
Adding visible gpu devices: 0
Device interconnect StreamExecutor with strength 1 edge matrix:
0
0: N
Created TensorFlow device (/device:GPU:0 with 1340 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 4456898788177247918
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 1406107238
locality {
bus_id: 1
links {
}
}
incarnation: 3224787151756357043
physical_device_desc: "device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1"
]

