TensorFlow: Different Ways to Export and Run a Graph in C++

Difference between exporting an inference graph and freezing a graph?

To my understanding, export_inference_graph is a customized script for the Object Detection research project (it expects an object detection model, its pipeline config files, and so on).

freeze_graph is the tool for freezing general models: it removes unnecessary nodes, converts variables into constants, and so on.

So in your case, since you have a general model, freeze_graph is what you need.
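
As a rough illustration, a TF 1.x model can be frozen through the freeze_graph module. This is only a sketch: the file names and the output node name (graph.pbtxt, model.ckpt, output/predictions) are placeholders you would replace with your own model's values.

from tensorflow.python.tools import freeze_graph

# Sketch of freezing a TF 1.x model; all paths and the output node
# name below are placeholders for your own model.
freeze_graph.freeze_graph(
    input_graph="graph.pbtxt",            # GraphDef written during training
    input_saver="",
    input_binary=False,                   # graph.pbtxt is in text format
    input_checkpoint="model.ckpt",        # checkpoint holding the variables
    output_node_names="output/predictions",
    restore_op_name="save/restore_all",
    filename_tensor_name="save/Const:0",
    output_graph="frozen_graph.pb",       # resulting graph with constants
    clear_devices=True,
    initializer_nodes="")

The result is a single frozen_graph.pb containing both the graph structure and the trained weights, which is exactly what the C++ loader further below expects.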

Error while sequentially exporting two different inference graphs in the TensorFlow Object Detection API

I dug deep into the TensorFlow models directory and reached the method _export_inference_graph in TensorFlow/models/research/object_detection/exporter.py. Adding this line at the end of the function solved my problem:

tf.reset_default_graph()
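
If you would rather not patch exporter.py, the same fix can be applied from the calling side. A minimal sketch, assuming two pipeline configs and checkpoints of your own (all paths here are placeholders):

import tensorflow as tf
from google.protobuf import text_format
from object_detection import exporter
from object_detection.protos import pipeline_pb2

def export_one(config_path, ckpt_prefix, out_dir):
    # Parse the pipeline config and export one frozen inference graph.
    pipeline_config = pipeline_pb2.TrainEvalPipelineConfig()
    with tf.gfile.GFile(config_path, "r") as f:
        text_format.Merge(f.read(), pipeline_config)
    exporter.export_inference_graph(
        "image_tensor", pipeline_config, ckpt_prefix, out_dir)
    # Clear the default graph so the next export starts from a clean slate.
    tf.reset_default_graph()

export_one("model_a.config", "ckpt_a/model.ckpt", "export_a")
export_one("model_b.config", "ckpt_b/model.ckpt", "export_b")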

TensorFlow: export and reuse an Estimator object in Python

  1. Yes, see this doc: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md. In particular, the APIs.Loader.Python section. (A minimal sketch of the export/load round trip follows this list.)
  2. What do you mean by "compile an estimator"? All the data used by the Estimator is saved into the SavedModel; the rest is simply high-level orchestration logic. Actual operations like matrix multiplications are provided by the C++ library and run on whatever hardware you have: CPU, GPU, or TPU. XLA is a fairly low-level compiler, far removed from the Estimator API. For more on it, see the talk "XLA: TensorFlow, Compiled!" (TensorFlow Dev Summit 2017).
  3. The link above provides a very high-level API. For a lower layer, see https://www.tensorflow.org/programmers_guide/meta_graph. At an even lower layer there is GraphDef (see the links on the meta_graph page).
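
To make item 1 concrete, here is a minimal sketch of the export/load round trip in TF 1.x. The feature name "x", the toy training data, and the export directory are all placeholders; export_savedmodel and tf.saved_model.loader are the relevant APIs:

import numpy as np
import tensorflow as tf

# Train a toy Estimator so there is a checkpoint to export from.
feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]
estimator = tf.estimator.LinearRegressor(feature_columns=feature_columns)
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"x": np.random.rand(8, 4).astype(np.float32)},
    np.random.rand(8, 1).astype(np.float32),
    batch_size=4, num_epochs=None, shuffle=True)
estimator.train(train_input_fn, steps=10)

def serving_input_receiver_fn():
    # The placeholder name "x" must match the feature column above.
    inputs = {"x": tf.placeholder(tf.float32, shape=[None, 4], name="x")}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

export_dir = estimator.export_savedmodel("exports", serving_input_receiver_fn)

# Reload the SavedModel in a plain session, without the Estimator object.
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(
        sess, [tf.saved_model.tag_constants.SERVING], export_dir)
    # sess.graph now contains the exported graph; run it by tensor name.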

Running a trained TensorFlow model in C++

Instructions for using a graph in C++ can be found in TensorFlow's C++ guide and in the label_image example in the TensorFlow repository.

Here is some code to use your image as input:

// Scalar placeholder for dropout: keep probability 1.0 at inference time.
tensorflow::Tensor keep_prob(tensorflow::DT_FLOAT, tensorflow::TensorShape());
keep_prob.scalar<float>()() = 1.0;

// Input tensor with shape {batch, height, width, channels}.
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT,
                                tensorflow::TensorShape({1, height, width, depth}));
auto input_tensor_mapped = input_tensor.tensor<float, 4>();

// Here img is an OpenCV image whose data is assumed to already be float
// (e.g. CV_32FC3); if it's just a float array, this code is easy to adapt.
const float* source_data = (const float*) img.data;

// Copy the image data into the corresponding tensor, value by value.
for (int y = 0; y < height; ++y) {
  const float* source_row = source_data + (y * width * depth);
  for (int x = 0; x < width; ++x) {
    const float* source_pixel = source_row + (x * depth);
    for (int c = 0; c < depth; ++c) {
      input_tensor_mapped(0, y, x, c) = source_pixel[c];
    }
  }
}

std::vector<tensorflow::Tensor> finalOutput;

// Feed the image and the dropout placeholder, fetch the output tensor.
tensorflow::Status run_status = this->tf_session->Run(
    {{InputName, input_tensor}, {dropoutPlaceHolderName, keep_prob}},
    {OutputName},  // tensor names to fetch
    {},            // target nodes to run (none here)
    &finalOutput);
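
After Run returns with an OK status, finalOutput[0] holds the fetched tensor; its values can then be read with, for example, finalOutput[0].flat<float>().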

