Using TensorFlow's C/C++ API for inference without recompiling the source code

TensorFlow is usually installed as the tensorflow pip package and called through its Python interface, but inference can also be done through its C or C++ API. For performance or business reasons, some users choose the C/C++ interface for inference. For the C interface, TensorFlow provides precompiled header files and .so libraries ( https://www.tensorflow.org/install/lang_c ); the disadvantage is that these do not cover TensorFlow's C++ interface. The C++ interface usually requires the user to recompile TensorFlow from source, which is time-consuming and laborious (see blog posts such as "Tensorflow C API From Training to Deployment: Using C API for Prediction and Deployment" by Technical Liu and "Using C++ to Call TensorFlow Model Simple Description" on Dannyw's Blog).

If you write C++ code and link against the .so files under the pip-installed TensorFlow directory, the following error is reported:
E tensorflow/core/common_runtime/session.cc:67] Not found: No session factory registered for the given session options: {target: "" config: } Registered factories are {}.
At the same time, you will find that TensorFlow's internal operators are not registered, and even linking with -Wl,--whole-archive does not resolve it.

So is it possible to use the .so files and header files of pip-installed TensorFlow directly for C++ inference? The author found a method and shares it below.

main.cpp inference code example

#include "tensorflow/core/protobuf/meta_graph.pb.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include <iostream>
#include <string>

using namespace tensorflow;

#ifdef __cplusplus
extern "C" {
#endif

// exported by _pywrap_tensorflow_internal.so from the pip package
extern const char* TF_Version(void);

#ifdef __cplusplus
}
#endif

int main() {
  // must be called so that the op registrations in the .so are loaded
  TF_Version();

  std::string model_path = "resnet_50.pb";
  tensorflow::GraphDef graphdef;
  tensorflow::Status status_load = ReadBinaryProto(tensorflow::Env::Default(), model_path, &graphdef);
  if (!status_load.ok()) {
    std::cout << "load model failed: " << status_load.ToString() << std::endl;
    return -1;
  }

  tensorflow::SessionOptions options;
  tensorflow::Session* session;

  session = tensorflow::NewSession(options);
  if (session == nullptr) {
    std::cout << "create new session failed" << std::endl;
    return -1;
  }
  tensorflow::Status status;
  status = session->Extend(graphdef);
  if (!status.ok()) {
    std::cout << "session extend graph failed" << std::endl;
    return -1;
  }

  // NOTE: the tensor's memory is uninitialized here; fill x with real data
  // (e.g. via x.flat<float>()) before running an actual model.
  Tensor x(DT_FLOAT, TensorShape({1, 3, 224, 224}));

  std::vector<std::pair<std::string, tensorflow::Tensor>> input_tensors;
  input_tensors.push_back({"input", x});

  std::vector<std::string> output_names = {"resnet_model/stage_1/Relu_2"};
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session->Run(input_tensors, output_names, {}, &outputs));

  // release session
  session->Close();
  delete session;
  session = nullptr;

  return 0;
}

The core trick here is calling TF_Version(); (other functions may have a similar effect). It causes the symbols in the .so to actually be loaded; without it, they are not. Readers are welcome to discuss the exact reason in the comments. The pip packages of TF 2.x already declare this function in their headers, but 1.1x does not, so it must be declared manually (as in the extern "C" block above).

CMake compilation options

The key is to include Python's .so and TensorFlow's two .so files.

project(tf_cpp_test LANGUAGES CXX)

add_compile_options(-fPIC)

# pip packages for tf >= 1.15 are built with the old C++ ABI, so use ABI=0
add_definitions(-D_GLIBCXX_USE_CXX11_ABI=0)

add_executable(
    main
    main.cpp
)

target_include_directories(
    main
    PUBLIC
    $ENV{TF_INCLUDE_PATH}
    $ENV{PYTHON_INCLUDE_PATH}
)

target_link_libraries(
    main 
    PUBLIC
    $ENV{TF_SO_FILE}
    $ENV{TF_SO_PATH}/python/_pywrap_tensorflow_internal.so
    $ENV{PYTHON_SO_FILE}
)
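If the toolchain passes --as-needed to the linker by default (common on Ubuntu), another option, offered here as an untested assumption rather than the author's method, is to force the linker to keep the pywrap library in the executable's dependencies even though no symbol of it is referenced, which would make the TF_Version() call unnecessary:

```
# Hypothetical alternative: wrap the pywrap library in --no-as-needed so it
# stays in DT_NEEDED and its registration initializers run at load time.
target_link_libraries(
    main
    PUBLIC
    $ENV{TF_SO_FILE}
    -Wl,--no-as-needed
    $ENV{TF_SO_PATH}/python/_pywrap_tensorflow_internal.so
    -Wl,--as-needed
    $ENV{PYTHON_SO_FILE}
)
```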

The TF_INCLUDE_PATH and other variables above can be obtained with a bash script:

#!/bin/bash
TOOL_SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

export TF_INCLUDE_PATH=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_compile_flags()[0][2:])')  # drop the leading "-I"
export TF_SO_PATH=$(python3 -c 'import tensorflow as tf; print(tf.sysconfig.get_link_flags()[0][2:])')  # drop the leading "-L"
export TF_SO_FILE=$(ls $TF_SO_PATH/libtensorflow_framework.* |head -1)
export PYTHON_INCLUDE_PATH=$(python3 -c 'import sysconfig; print(sysconfig.get_path("include"))')
export PYTHON_SO_PATH=$(python3 -c 'import sysconfig; print(sysconfig.get_path("stdlib"))')
export PYTHON_SO_FILE=$(find $PYTHON_SO_PATH/../ -name libpython3*.so|head -1)

mkdir -p "${TOOL_SCRIPT_DIR}/build"
cd "${TOOL_SCRIPT_DIR}/build" || exit 1
cmake ..
make

Test environment for the above code: TF 1.15 + Python 3.7 (in a conda virtual environment).

Origin blog.csdn.net/u013701860/article/details/122241038