In TensorFlow, what is the difference between Session.run() and Tensor.eval()?
If you have a Tensor t, calling t.eval() is equivalent to calling tf.get_default_session().run(t).
You can make a session the default as follows:
t = tf.constant(42.0)
sess = tf.Session()
with sess.as_default(): # or `with sess:` to close on exit
    assert sess is tf.get_default_session()
    assert t.eval() == sess.run(t)
The most important difference is that you can use sess.run() to fetch the values of many tensors in the same step:
t = tf.constant(42.0)
u = tf.constant(37.0)
tu = tf.multiply(t, u)  # tf.mul in very old TF versions
ut = tf.multiply(u, t)
with sess.as_default():
    tu.eval()  # runs one step
    ut.eval()  # runs one step
    sess.run([tu, ut])  # evaluates both tensors in a single step
Note that each call to eval and run will execute the whole graph from scratch. To cache the result of a computation, assign it to a tf.Variable.
The difference between sess.run(c) and c.eval() in TensorFlow
When you call c.eval() on a tensor, you are basically calling tf.get_default_session().run(c). It is a convenient shortcut.
However, Session.run() is much more general:
- It allows you to query several outputs at once: sess.run([a, b, ...]). When those outputs are related and depend on state that may change, it is important to fetch them simultaneously to get a consistent result. People are regularly surprised by this [1], [2].
- Session.run() can take a few parameters that Tensor.eval() does not have, such as RunOptions, which can be useful for debugging or profiling. Note however that eval() can take a feed_dict.
- eval() is a method of Tensors. Operations such as global_variables_initializer(), on the other hand, do not have an eval() but a run() (another convenient shortcut). Session.run() can run both.
In TensorFlow, what is the difference between Session.partial_run and Session.run?
With tf.Session.run, you usually give some inputs and requested outputs, and TensorFlow runs the operations in the graph to compute and return those outputs. If you later want some other output, even with the same input, you have to run all the necessary operations in the graph again, even though some intermediate results will be the same as in the previous call. For example, consider something like this:
import tensorflow as tf
input_ = tf.placeholder(tf.float32)
result1 = some_expensive_operation(input_)
result2 = another_expensive_operation(result1)
with tf.Session() as sess:
    x = ...
    sess.run(result1, feed_dict={input_: x})
    sess.run(result2, feed_dict={input_: x})
Computing result2 requires running both the operations from some_expensive_operation and another_expensive_operation, but actually most of the computation is repeated from when result1 was calculated. tf.Session.partial_run allows you to evaluate part of a graph, leave that evaluation "on hold" and complete it later. For example:
import tensorflow as tf
input_ = tf.placeholder(tf.float32)
result1 = some_expensive_operation(input_)
result2 = another_expensive_operation(result1)
with tf.Session() as sess:
    x = ...
    h = sess.partial_run_setup([result1, result2], [input_])
    sess.partial_run(h, result1, feed_dict={input_: x})
    sess.partial_run(h, result2)
Unlike before, here the operations from some_expensive_operation will only be run once in total, because the computation of result2 is just a continuation of the computation of result1.
This can be useful in several contexts, for example if you want to split the computational cost of a run into several steps, but also if you need to do some mid-evaluation checks out of TensorFlow, such as computing an input to the second half of the graph that depends on an output of the first half, or deciding whether or not to complete an evaluation depending on an intermediate result (these may also be implemented within TensorFlow, but there may be cases where you do not want that).
Note too that it is not only a matter of avoiding repeated computation. Many operations have state that changes on each evaluation, so the result of two separate evaluations may differ from that of one evaluation divided into two partial ones. This is the case with random operations, where you get a new value per run, and with other stateful objects like iterators. Variables are also obviously stateful, so operations that change variables (like tf.assign or optimizers) will not produce the same results when they are run once and when they are run twice.
In any case, note that, as of v1.12.0, partial_run is still an experimental feature and is subject to change.
Session.run() / Tensor.eval() of TensorFlow run for a crazy long time
1) As a basic sanity check: ls -al /Users/me/Downloads/cifar-10-batches-bin/data_batch_1.bin
2) Don't forget to initialize your variables:
init = tf.global_variables_initializer()  # tf.initialize_all_variables() in older versions
sess.run(init)
3) tf.train.start_queue_runners() (after creating your session)
It's probably #3. The string_input_producer
adds a queue runner to the QUEUE_RUNNERS
collection, which needs to be started.
eval() and run() in TensorFlow
If you have only one default session, they are basically the same.
From https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/framework/ops.py#L2351:
op.run() is a shortcut for calling tf.get_default_session().run(op)
From https://github.com/tensorflow/tensorflow/blob/v1.12.0/tensorflow/python/framework/ops.py#L691:
t.eval() is a shortcut for calling tf.get_default_session().run(t)
Difference between Tensor and Operation:
Tensor: https://www.tensorflow.org/api_docs/python/tf/Tensor
Operation: https://www.tensorflow.org/api_docs/python/tf/Operation
Note: the Tensor class will be replaced by Output in the future. Currently these two are aliases for each other.
TensorFlow - Difference between Session() and Session(Graph())
When designing a model in TensorFlow, there are basically two steps:
- building the computational graph: the nodes and operations and how they are connected to each other
- evaluating / running this graph on some data
A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated. For example:
# Launch the graph in a session.
sess = tf.Session()
# Evaluate the tensor `c`.
print(sess.run(c))
When you create a Session, you place a graph onto a specified device. If no graph is specified, the Session constructor uses the default graph.
sess = tf.Session()
Alternatively, you can pass a graph when constructing the session, like tf.Session(graph=my_graph):
with tf.Session(graph=my_graph) as sess:
- https://www.tensorflow.org/api_docs/python/tf/Session
- https://www.tensorflow.org/api_docs/python/tf/Graph
- https://github.com/Kulbear/tensorflow-for-deep-learning-research/issues/1