TensorFlow: Convert Tensor to NumPy Array Without .eval() or sess.run()

Tensorflow: Convert Tensor to numpy array WITHOUT .eval() or sess.run()

The fact that you say "already have a session running" implies a misunderstanding of what sess.run() actually does.

If you have a tf.Session() open, you can use it to retrieve any tensor with sess.run(). Retrieving a variable or constant tensor is straightforward:

value = sess.run(tensor_to_retrieve)

If the tensor is the result of operations on placeholder tensors, you will need to pass their values in with feed_dict:

value = sess.run(tensor, feed_dict={input_placeholder: input_value})

There is nothing preventing you from calling sess.run() more than once.
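
For example, a minimal sketch showing repeated sess.run() calls (the placeholder x and tensor y here are illustrative):

import numpy as np
import tensorflow as tf

x = tf.placeholder(shape=[2], dtype=tf.float32)
y = x * 2.0

with tf.Session() as sess:
    # Each sess.run() call returns a NumPy array; repeated calls are fine.
    first = sess.run(y, feed_dict={x: np.array([1.0, 2.0])})
    second = sess.run(y, feed_dict={x: np.array([3.0, 4.0])})
    print(first)   # [2. 4.]
    print(second)  # [6. 8.]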

Tensorflow: Tensor to numpy array conversion without running any session

Updated

# must be run in eager mode
def tensor_to_array(tensor1):
    return tensor1.numpy()

example

>>> import tensorflow as tf
>>> tf.enable_eager_execution()
>>> def tensor_to_array(tensor1):
...     return tensor1.numpy()
...
>>> x = tf.constant([1,2,3,4])
>>> tensor_to_array(x)
array([1, 2, 3, 4], dtype=int32)

I believe you can do it without .eval() or sess.run() by using tf.enable_eager_execution().

example

import tensorflow as tf
import numpy as np
tf.enable_eager_execution()
x = np.array([1,2,3,4])
c = tf.constant([4,3,2,1])
c+x
<tf.Tensor: id=5, shape=(4,), dtype=int32, numpy=array([5, 5, 5, 5], dtype=int32)>

For more details about TensorFlow eager mode, check it out here: Tensorflow eager

Without tf.enable_eager_execution():

import tensorflow as tf
import numpy as np
c = tf.constant([4,3,2,1])
x = np.array([1,2,3,4])
c+x
<tf.Tensor 'add:0' shape=(4,) dtype=int32>
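
To actually pull the values out in graph mode, you still evaluate the tensor in a session, e.g. continuing the snippet above:

with tf.Session() as sess:
    print(sess.run(c + x))  # [5 5 5 5]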

How to convert the result of matmul from tensor to numpy array

Any tensor returned by Session.run or eval() is a NumPy array, so to convert a tensor to a NumPy array you can simply run .eval() on it.

i.e.:

sess = tf.Session()
fc1.eval(session=sess)  # returns the value of fc1 as a NumPy array
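
For the matmul case specifically, here is a minimal self-contained sketch (the tensor names a, b, and product are illustrative):

import tensorflow as tf

a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])
product = tf.matmul(a, b)

with tf.Session() as sess:
    result = product.eval(session=sess)  # equivalent to sess.run(product)
    print(type(result))  # <class 'numpy.ndarray'>
    print(result)        # [[19. 22.]
                         #  [43. 50.]]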

Error in converting tensor to numpy array

Just adding to (or elaborating on) what @MatiasValdenegro said,

TensorFlow follows something called graph execution (or define-then-run). In other words, when you write a TensorFlow program, it defines something called a data-flow graph, which describes how the operations you defined relate to each other. You then execute bits and pieces of that graph depending on the results you're after.

Let's consider two examples. (I am switching to a simple TensorFlow program instead of Keras bits as it makes things more clear - After all K.get_session() returns a Session object).

Example 1

Say you have the following program.

import numpy as np
import tensorflow as tf

a = tf.placeholder(shape=[2,2], dtype=tf.float32)
b = tf.constant(1, dtype=tf.float32)
c = a * b

# Wrong: this is essentially what you're doing when you call sess.run(input_image)
with tf.Session() as sess:
    print(sess.run(c))  # fails: the placeholder a was never fed

# Right: you need to feed values for everything c depends on
with tf.Session() as sess:
    print(sess.run(c, feed_dict={a: np.array([[1,2],[2,3]])}))

Whenever a resulting tensor (e.g. c) is dependent on a placeholder you cannot execute it and get the result without feeding values to all the dependent placeholders.

Example 2

When you define a tf.constant(1), it does not depend on anything. In other words, you don't need a feed_dict and can directly run eval() or sess.run() on it.
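
For example:

import tensorflow as tf

with tf.Session() as sess:
    print(sess.run(tf.constant(1)))           # 1, no feed_dict needed
    print(tf.constant(1).eval(session=sess))  # 1, same result via eval()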

Update: Further explanation on why you need a feed_dict for input_image

TLDR: You need a feed_dict because your resulting Tensor is produced by an Input layer.

Your input_image is basically the resulting tensor you get by feeding something to the Input layer. Usually in Keras you are not exposed to the internal placeholder-level details; feeding happens implicitly through model.fit() or model.evaluate(). You can see that the Keras Input layer in fact uses a placeholder by analysing this line.

Hope I made my point clear: you do need to feed a value to the placeholder to successfully evaluate the output of an Input layer, because that output is backed by a placeholder.
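
To see the failure mode concretely, a minimal sketch (evaluating a placeholder-backed tensor without feeding it raises an error):

import tensorflow as tf
from tensorflow.keras.layers import Input

# Input() internally creates a placeholder of shape (None, None, None, 3).
input_image = Input(shape=(None, None, 3))

with tf.Session() as sess:
    try:
        sess.run(input_image)  # nothing was fed to the underlying placeholder
    except tf.errors.InvalidArgumentError as e:
        print(e.message)  # "You must feed a value for placeholder tensor ..."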

Update 2: How to feed to your Input layer

So, it appears you can use feed_dict with the Keras Input layer in the following manner. Instead of defining the shape argument, you pass a placeholder directly to the tensor argument, which bypasses the internal placeholder creation in the layer.

import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input
import tensorflow.keras.backend as K

# Pass an explicit placeholder instead of letting Input create one internally.
x = tf.placeholder(shape=[None, None, None, 3], dtype=tf.float32)
input_image = Input(tensor=x)
arr = np.array([[[[1,1,1]]]])
print(arr.shape)  # (1, 1, 1, 3)
print(K.get_session().run(input_image, feed_dict={x: arr}))

How to convert "tensor" to "numpy" array in tensorflow?

You can't use the .numpy() method on a tensor if that tensor is going to be used in a tf.data.Dataset.map call.

Under the hood, the tf.data.Dataset object works by creating a static graph: this means you can't use .numpy(), because a tf.Tensor object in a static-graph context does not have this attribute.

Therefore, the line input_image = random_noise(image.numpy()) should be input_image = random_noise(image).

But the code is likely to fail again, since random_noise calls get_noise from the model.utils package. If the get_noise function is written using TensorFlow ops, everything will work; otherwise, it won't.

The solution? Write the code using only TensorFlow primitives.

For instance, if your get_noise function just creates random noise with the shape of your input image, you can define it like:

def get_noise(image):
    return tf.random.normal(shape=tf.shape(image))

This uses only TensorFlow primitives, and it will work.
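
A get_noise written this way can then be used directly inside a Dataset.map call; a minimal sketch (the dummy dataset contents are illustrative):

import tensorflow as tf

def get_noise(image):
    return tf.random.normal(shape=tf.shape(image))

def add_noise(image):
    # Runs inside the static graph built by map, so only TensorFlow ops are allowed.
    return image + get_noise(image)

dataset = tf.data.Dataset.from_tensor_slices(
    tf.zeros([4, 8, 8, 3])  # four dummy 8x8 RGB "images"
).map(add_noise)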

Hope this overview helps!

P.S.: you might be interested in having a look at the article series "Analyzing tf.function to discover AutoGraph strengths and subtleties"; it covers this aspect (part 3 is probably the one related to your scenario): part 1, part 2, part 3


