The C++ code for loading a trained TensorFlow graph and checkpoint is:
```cpp
// namespace tf = tensorflow;  (alias used throughout)
const std::string pathToGraph = "./model/model.ckpt-29520.meta";
const std::string checkpointPath = "./model/model.ckpt-29520";

// gpu option
tf::SessionOptions session_options;
session_options.config.mutable_gpu_options()->set_allow_growth(true);
tf::Session* m_sessionFaceNet = NewSession(session_options);

// Read in the protobuf graph we exported
tf::MetaGraphDef graph_def;
tf::Status m_statusFaceNet = ReadBinaryProto(tf::Env::Default(), pathToGraph, &graph_def);
if (!m_statusFaceNet.ok()) {
    //throw std::runtime_error("Error reading graph definition from " + pathToGraph + ": " + m_statusFaceNet.ToString());
    return 0;
}

// Add the graph to the session
m_statusFaceNet = m_sessionFaceNet->Create(graph_def.graph_def());

// Read weights from the saved checkpoint
tf::Tensor checkpointPathTensor(tf::DT_STRING, tf::TensorShape());
checkpointPathTensor.scalar<std::string>()() = checkpointPath;
m_statusFaceNet = m_sessionFaceNet->Run(
    { { graph_def.saver_def().filename_tensor_name(), checkpointPathTensor } },
    {}, { graph_def.saver_def().restore_op_name() }, nullptr);
```
The code above worked fine under TensorFlow 1.2, but for work reasons I recently upgraded to TensorFlow 1.8.0. After recompiling to regenerate the lib and dll, building the same code produced the following two linker errors:
```
error LNK2001: unresolved external symbol "class tensorflow::GraphDefDefaultTypeInternal tensorflow::_GraphDef_default_instance_" (?_GraphDef_default_instance_@tensorflow@@3VGraphDefDefaultTypeInternal@1@A)
main.obj : error LNK2001: unresolved external symbol "class tensorflow::SaverDefDefaultTypeInternal tensorflow::_SaverDef_default_instance_" (?_SaverDef_default_instance_@tensorflow@@3VSaverDefDefaultTypeInternal@1@A)
```
After trying several approaches, I found a fix. The modified code is:
```cpp
const std::string pathToGraph = "./model/model.ckpt-29520.meta";
const std::string checkpointPath = "./model/model.ckpt-29520";

// gpu option
tf::SessionOptions session_options;
session_options.config.mutable_gpu_options()->set_allow_growth(true);
/*std::unique_ptr<tensorflow::Session> m_sessionFaceNet;
Status load_graph_status = LoadGraph(pathToFrozen, &m_sessionFaceNet);
tf::Status m_statusFaceNet;*/
tf::Session* m_sessionFaceNet = NewSession(session_options);

// Read in the protobuf graph we exported
tf::MetaGraphDef graph_def;
tf::Status m_statusFaceNet = ReadBinaryProto(tf::Env::Default(), pathToGraph, &graph_def);
if (!m_statusFaceNet.ok()) {
    //throw std::runtime_error("Error reading graph definition from " + pathToGraph + ": " + m_statusFaceNet.ToString());
    return 0;
}

// Add the graph to the session
m_statusFaceNet = m_sessionFaceNet->Create(*(graph_def.mutable_graph_def()));

// Read weights from the saved checkpoint
tf::Tensor checkpointPathTensor(tf::DT_STRING, tf::TensorShape());
checkpointPathTensor.scalar<std::string>()() = checkpointPath;
m_statusFaceNet = m_sessionFaceNet->Run(
    { { graph_def.mutable_saver_def()->filename_tensor_name(), checkpointPathTensor } },
    {}, { graph_def.mutable_saver_def()->restore_op_name() }, nullptr);
```
The only changes are replacing `graph_def()` with `mutable_graph_def()` and `saver_def()` with `mutable_saver_def()`. This fixed the problem more or less by trial and error; I don't yet understand why, and plan to dig into the underlying TensorFlow/protobuf code when I have time.
Another point to watch out for: when converting a cv::Mat to a tensorflow::Tensor, the data format you convert to must match the format the model expects.

There are two conversion approaches. The first copies the whole buffer in one memcpy:
```cpp
tf::Tensor input_tensor(tf::DT_FLOAT, tf::TensorShape({ nPersonNum, 112, 112, 1 }));
cv::Mat flaotMat;
resizeImg.convertTo(flaotMat, CV_32FC1);
tf::StringPiece tmp_data = input_tensor.tensor_data();
// Element size must be sizeof(float); the original sizeof(tf::DT_FLOAT) is the
// size of the DataType enum, which only coincidentally equals 4.
memcpy(const_cast<char*>(tmp_data.data()), flaotMat.data,
       flaotMat.rows * flaotMat.cols * sizeof(float));
```

The second approach copies element by element. For a grayscale image:
```cpp
// The target type should match what your model expects; here we convert to 32F.
resizeImg.convertTo(flaotMat, CV_32FC1);
auto input_tensor_mapped = input_tensor.tensor<float, 4>(); // mapping was missing in the original snippet
for (int nTensor = 0; nTensor < nPersonNum; nTensor++) {
    const float* source_data = (float*)tmpVecWhitenImages[nTensor].data;
    for (int y = 0; y < tmpVecWhitenImages[nTensor].rows; ++y) {
        const float* source_row = source_data + (y * tmpVecWhitenImages[nTensor].cols);
        for (int x = 0; x < tmpVecWhitenImages[nTensor].cols; ++x) {
            const float* source_pixel = source_row + x;
            for (int c = 0; c < 1; ++c) {
                const float* source_value = source_pixel;
                input_tensor_mapped(nTensor, y, x, c) = *source_value;
            }
        }
    }
}
```
The second approach for an RGB image:
```cpp
// The target type should match what your model expects; here the cv::Mat holds
// 64F, while the tensor itself is DT_FLOAT (the assignment narrows double -> float).
matImgByTF.convertTo(Image2, CV_64FC1);
tf::Tensor input_tensor(tf::DT_FLOAT, tf::TensorShape({ nPersonNum,
    m_pFaceDescriber->m_nFaceImgWidth, m_pFaceDescriber->m_nFaceImgHeight,
    m_pFaceDescriber->m_nFaceImgDepth }));
auto input_tensor_mapped = input_tensor.tensor<float, 4>(); // double*
for (int nTensor = 0; nTensor < nPersonNum; nTensor++) {
    const double* source_data = (double*)tmpVecWhitenImages[nTensor].data;
    for (int y = 0; y < tmpVecWhitenImages[nTensor].rows; ++y) {
        const double* source_row = source_data
            + (y * tmpVecWhitenImages[nTensor].cols * m_pFaceDescriber->m_nFaceImgDepth);
        for (int x = 0; x < tmpVecWhitenImages[nTensor].cols; ++x) {
            const double* source_pixel = source_row + (x * m_pFaceDescriber->m_nFaceImgDepth);
            for (int c = 0; c < m_pFaceDescriber->m_nFaceImgDepth; ++c) {
                const double* source_value = source_pixel + (2 - c); // BGR -> RGB channel swap
                input_tensor_mapped(nTensor, y, x, c) = (float)*source_value;
            }
        }
    }
}
```
Remaining issue:

The meta and ckpt files can be frozen into a .pb file, but the frozen model I produced gives results that differ from running the original meta/ckpt directly. Some parameter of the freezing step was probably set incorrectly; I'll leave this for later.

These are rough notes for now; I'll organize them properly when I have time. Corrections are welcome.