Loading a trained model with TensorFlow's C++ API: a summary of the classes and methods used in the .cc implementation

The material below is taken from the official API documentation and the TensorFlow source header files.

See also:

https://medium.com/jim-fleming/loading-a-tensorflow-graph-with-the-c-api-4caaff88463f

https://vimsky.com/article/3600.html
1. Overall logic

 
 
/// ```c++
/// tensorflow::GraphDef graph;
/// // ... Create or load graph into "graph".
///
/// // This example uses the default options which connects
/// // to a local runtime.
/// tensorflow::SessionOptions options;
/// std::unique_ptr<tensorflow::Session>
/// session(tensorflow::NewSession(options));
///
/// // Create the session with this graph.
/// tensorflow::Status s = session->Create(graph);
/// if (!s.ok()) { ... }
///
/// // Run the graph and fetch the first output of the "output"
/// // operation, and also run to but do not return anything
/// // for the "update_state" operation.
/// std::vector<tensorflow::Tensor> outputs;
/// s = session->Run({}, {"output:0"}, {"update_state"}, &outputs);
/// if (!s.ok()) { ... }
///
/// // Map the output as a flattened float tensor, and do something
/// // with it.
/// auto output_tensor = outputs[0].flat<float>();
/// if (output_tensor(0) > 0.5) { ... }
///
/// // Close the session to release the resources associated with
/// // this session.
/// session->Close();
/// ```
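Putting the pieces together, here is a minimal, self-contained sketch of the whole flow (load a frozen GraphDef from disk, create a session, feed an input, fetch an output). The file name "graph.pb", the 1 x 4 input shape, and the endpoint names "input:0" / "output:0" are placeholders and must be adapted to your own exported model.

#include <iostream>
#include <vector>

#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // 1. Load the serialized GraphDef from disk.
  tensorflow::GraphDef graph_def;
  tensorflow::Status s = tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), "graph.pb", &graph_def);
  if (!s.ok()) { std::cerr << s.ToString() << std::endl; return 1; }

  // 2. Create a session and register the graph with it.
  tensorflow::Session* session = nullptr;
  s = tensorflow::NewSession(tensorflow::SessionOptions(), &session);
  if (!s.ok()) { std::cerr << s.ToString() << std::endl; return 1; }
  s = session->Create(graph_def);
  if (!s.ok()) { std::cerr << s.ToString() << std::endl; return 1; }

  // 3. Build an input tensor (here: a 1 x 4 float batch, filled with 1.0).
  tensorflow::Tensor input(tensorflow::DT_FLOAT, tensorflow::TensorShape({1, 4}));
  auto in = input.matrix<float>();
  for (int i = 0; i < 4; ++i) in(0, i) = 1.0f;

  // 4. Run the graph and fetch one output tensor.
  std::vector<tensorflow::Tensor> outputs;
  s = session->Run({{"input:0", input}}, {"output:0"}, {}, &outputs);
  if (!s.ok()) { std::cerr << s.ToString() << std::endl; return 1; }

  std::cout << "first output value: " << outputs[0].flat<float>()(0) << std::endl;

  // 5. Release the resources associated with the session.
  session->Close();
  delete session;
  return 0;
}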
 
2. The Run function used in the code
 
/// Runs the graph with the provided input tensors and fills
/// `outputs` for the endpoints specified in `output_tensor_names`.
/// Runs to but does not return Tensors for the nodes in
/// `target_node_names`.
///
/// The order of tensors in `outputs` will match the order provided
/// by `output_tensor_names`.
///
/// If `Run` returns `OK()`, then `outputs->size()` will be equal to
/// `output_tensor_names.size()`. If `Run` does not return `OK()`, the
/// state of `outputs` is undefined.
///
/// REQUIRES: The name of each Tensor of the input or output must
/// match a "Tensor endpoint" in the `GraphDef` passed to `Create()`.
///
/// REQUIRES: At least one of `output_tensor_names` and
/// `target_node_names` must be non-empty.
///
/// REQUIRES: outputs is not nullptr if `output_tensor_names` is non-empty.
virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
                   const std::vector<string>& output_tensor_names,
                   const std::vector<string>& target_node_names,
                   std::vector<Tensor>* outputs) = 0;
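As a hedged illustration of how the four arguments line up, the fragment below feeds one tensor, fetches one endpoint, and runs one extra node without fetching it. It assumes the headers and the session / input_tensor objects from the sketch in section 1; the names "input:0", "output:0" and "update_state" are placeholders.

std::vector<std::pair<std::string, tensorflow::Tensor>> inputs = {
    {"input:0", input_tensor}             // feeds: endpoint name -> Tensor
};
std::vector<std::string> output_names = {"output:0"};      // tensors to fetch
std::vector<std::string> target_nodes = {"update_state"};  // run, but do not fetch
std::vector<tensorflow::Tensor> outputs;

tensorflow::Status s = session->Run(inputs, output_names, target_nodes, &outputs);
// On success, outputs.size() == output_names.size() and outputs[i]
// corresponds to output_names[i].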
3. NewSession
/// Create a new session with the given options.
///
/// If session creation succeeds, the new `Session` will be stored in
/// `*out_session`, the caller will take ownership of the returned
/// `*out_session`, and this function will return `OK()`. Otherwise, this
/// function will return an error status.
Status NewSession(const SessionOptions& options, Session** out_session);
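A short usage sketch (assuming the headers from section 1, plus <memory>): the raw pointer returned through out_session is owned by the caller, so it is common to hand it to a std::unique_ptr immediately.

tensorflow::SessionOptions options;          // default options: local runtime
tensorflow::Session* raw_session = nullptr;
tensorflow::Status s = tensorflow::NewSession(options, &raw_session);
if (!s.ok()) {
  std::cerr << s.ToString() << std::endl;
}
std::unique_ptr<tensorflow::Session> session(raw_session);  // take ownership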
4. ReadBinaryProto
/// Returns: Status
/// Reads the contents of the named file, parses it as binary-encoded proto
/// data, and stores the result in *proto.
Status ReadBinaryProto(Env* env, const string& fname, ::tensorflow::protobuf::MessageLite* proto);
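For example (a fragment, assuming the headers from the sketch in section 1; the path "frozen_graph.pb" is a placeholder):

tensorflow::GraphDef graph_def;
tensorflow::Status s = tensorflow::ReadBinaryProto(
    tensorflow::Env::Default(), "frozen_graph.pb", &graph_def);
if (!s.ok()) {
  std::cerr << "Failed to load graph: " << s.ToString() << std::endl;
}
// graph_def can now be passed to session->Create(graph_def).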
5. Tensor
/// Creates a 1-dimensional, 0-element float tensor.
///
/// The returned Tensor is not a scalar (shape {}), but is instead
/// an empty one-dimensional Tensor (shape {0}, NumElements() ==
/// 0). Since it has no elements, it does not need to be assigned a
/// value and is initialized by default (IsInitialized() is
/// true). If this is undesirable, consider creating a one-element
/// scalar which does require initialization:
/// ```c++
///
/// Tensor(DT_FLOAT, TensorShape({}))
///
/// ```
Tensor();
/// Creates a Tensor of the given `type` and `shape`. If
/// LogMemory::IsEnabled() the allocation is logged as coming from
/// an unknown kernel and step. Calling the Tensor constructor
/// directly from within an Op is deprecated: use the
/// OpKernelConstruction/OpKernelContext allocate_* methods to
/// allocate a new tensor, which record the kernel and step.
///
/// The underlying buffer is allocated using a `CPUAllocator`.
Tensor(DataType type, const TensorShape& shape);
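A small sketch of this second constructor (assumption: for numeric types the newly allocated buffer is not zero-initialized, so fill it explicitly before use):

// A 2 x 3 float tensor allocated with the default CPUAllocator.
tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({2, 3}));
auto m = t.matrix<float>();   // Eigen view of shape 2 x 3 (see section 6)
for (int r = 0; r < 2; ++r)
  for (int c = 0; c < 3; ++c)
    m(r, c) = 0.0f;           // fill explicitly; contents start out undefined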
6. Tensor's vec, matrix, and tensor functions
/// Return the tensor data as an `Eigen::Tensor` with the type and
/// sizes of this `Tensor`.
///
/// Use these methods when you know the data type and the number of
/// dimensions of the Tensor and you want an `Eigen::Tensor`
/// automatically sized to the `Tensor` sizes. The implementation check
/// fails if either type or sizes mismatch.
/// Example:
/// ```c++
/// typedef float T;
/// Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);
/// auto mat = my_mat.matrix<T>(); // 2D Eigen::Tensor, 3 x 5.
/// auto mat = my_mat.tensor<T, 2>(); // 2D Eigen::Tensor, 3 x 5.
/// auto vec = my_mat.vec<T>(); // CHECK fails as my_mat is 2D.
/// auto vec = my_mat.tensor<T, 3>(); // CHECK fails as my_mat is 2D.
/// auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
/// ```
/// Returns a one-dimensional view (rank 1)
template <typename T>
typename TTypes<T>::Vec vec() {
  return tensor<T, 1>();
}
/// Returns a two-dimensional view (rank 2)
template <typename T>
typename TTypes<T>::Matrix matrix() {
  return tensor<T, 2>();
}
/// Returns a view with the specified number of dimensions
template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor tensor();

These methods return Eigen::Tensor views; for details on working with Eigen::Tensor, see:
http://blog.csdn.net/hjimce/article/details/71710893
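As a usage sketch (assuming outputs[0] is a rank-2 float tensor fetched by Run as in section 2):

const tensorflow::Tensor& out = outputs[0];
auto mat = out.matrix<float>();              // CHECK-fails if the rank is not 2
for (int r = 0; r < out.dim_size(0); ++r) {
  for (int c = 0; c < out.dim_size(1); ++c) {
    std::cout << mat(r, c) << " ";
  }
  std::cout << std::endl;
}
// flat<float>() gives a rank-1 view of any shape when only element order matters.
auto values = out.flat<float>();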
7. Tensor's scalar function
/// Return the Tensor data as a `TensorMap` of fixed size 1:
/// `TensorMap<TensorFixedSize<T, 1>>`.
/// Using `scalar()` allows the compiler to perform optimizations as
/// the size of the tensor is known at compile time.
template <typename T>
typename TTypes<T>::Scalar scalar();
/// Example usage of scalar()
x.scalar<float>()() = 10.0;
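scalar<T>() returns a rank-0 Eigen view, so the second pair of parentheses dereferences the single element; reading works the same way. A self-contained fragment:

tensorflow::Tensor x(tensorflow::DT_FLOAT, tensorflow::TensorShape({}));
x.scalar<float>()() = 10.0f;        // write the single element
float v = x.scalar<float>()();      // read it back; v == 10.0f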

The above covers the main classes and methods involved in the .cc implementation.


Reposted from blog.csdn.net/badmushroom/article/details/78720582