I didn't follow the official documentation to the letter, and I'll complain here that the official documentation is a bit confusing.
1. Overview
In short: build the official C++ sample code and use it to run inference with a model.
2. Sample code download
https://www.paddlepaddle.org.cn/paddle/paddleinference
https://github.com/PaddlePaddle/Paddle-Inference-Demo
I downloaded them to disk and extracted them, as shown below:
3. Inference library download
4. Example model
Click on ResNet50 in the image above to download the model.
5. Organize folders
Copy the inference library's paddle_inference directory into Paddle-Inference-Demo/c++/lib (if the extracted directory has a different name, rename it to paddle_inference).
Copy the model directory resnet50 into Paddle-Inference-Demo/c++/cpu/resnet50.
Copy CMakeLists.txt from Paddle-Inference-Demo-master\c++\lib into Paddle-Inference-Demo-master\c++\cpu\resnet50.
It is possible (I stress, only possible) that the include and lib directories cannot be found; in that case, move everything inside the paddle_inference_install_dir folder up one level, so it sits at the same level as that folder.
The reason: if you look at the project properties in Visual Studio after the CMake step below, the library directories added to the generated project point to the wrong paths.
Note that recent releases no longer contain a paddle_inference_install_dir folder at all.
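The copy steps above can be sketched as the following Windows commands. This is only a sketch: it assumes everything was extracted to D:\ and that the demo folder is named Paddle-Inference-Demo; adjust the paths to your own layout.

```shell
rem Copy the inference library into the demo's lib directory
rem (rename to paddle_inference first if your extracted folder differs).
xcopy /E /I D:\paddle_inference D:\Paddle-Inference-Demo\c++\lib\paddle_inference

rem Copy the downloaded model directory next to the cpu demo.
xcopy /E /I D:\resnet50 D:\Paddle-Inference-Demo\c++\cpu\resnet50

rem Copy the CMakeLists.txt from lib into the resnet50 demo directory.
copy D:\Paddle-Inference-Demo\c++\lib\CMakeLists.txt D:\Paddle-Inference-Demo\c++\cpu\resnet50\
```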
6. CMake
The build folder is one I created myself.
DEMO_NAME and PADDLE_LIB are variables I added.
After running CMake, you get:
Open the generated solution in Visual Studio and, under the Release configuration, build the executable:
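The configure step can be sketched as below. The generator name, architecture flag, and paths are assumptions for my setup; DEMO_NAME and PADDLE_LIB are the variables mentioned above, and WITH_MKL/WITH_GPU are options I believe the demo's CMakeLists.txt defines, so check yours before relying on them.

```shell
rem Run from inside c++\cpu\resnet50.
mkdir build
cd build

rem Assumed generator (VS 2019, x64); point PADDLE_LIB at the
rem paddle_inference directory you copied into c++\lib.
cmake .. -G "Visual Studio 16 2019" -A x64 ^
    -DWITH_MKL=ON -DWITH_GPU=OFF ^
    -DDEMO_NAME=resnet50_test ^
    -DPADDLE_LIB=D:\Paddle-Inference-Demo\c++\lib\paddle_inference
```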
7. Test it
You will be missing many DLLs; this can even crash the program without any message telling you which DLLs are missing.
There are a lot of DLLs under the directory below; copy over as many as you can!
Do it like I did, as shown in the figure below.
I simply copied the neural network model into the Release directory as well.
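A sketch of the copying described above, under the same assumed D:\ layout as before. The main runtime DLL lives under paddle\lib inside the inference library; more DLLs (MKL, oneDNN, etc.) sit under third_party\install, so copy those too if the program still fails to start.

```shell
rem Copy the inference library's DLLs next to the built executable.
copy D:\Paddle-Inference-Demo\c++\lib\paddle_inference\paddle\lib\*.dll D:\Paddle-Inference-Demo\c++\cpu\resnet50\build\Release\

rem I also put the model directory alongside the executable.
xcopy /E /I D:\resnet50 D:\Paddle-Inference-Demo\c++\cpu\resnet50\build\Release\resnet50
```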
Then I run:
resnet50_test --model_file resnet50\inference.pdmodel --params_file resnet50\inference.pdiparams
8. You may be missing TensorRT
If you are deploying on GPU, you need the TensorRT libraries.
Otherwise you will see TensorRT warnings on the command line when you run the program.
Download it from this site:
https://developer.nvidia.com/nvidia-tensorrt-download
Here I downloaded TensorRT-8.2.1.8.Windows10.x86_64.cuda-11.4.cudnn8.2.zip. After extracting it, add its bin directory to the system PATH environment variable.
Also copy the *.dll and *.lib files from its lib folder into the bin folder and the lib/x64 folder of the CUDA installation directory, respectively; for me these are
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7\lib\x64
The steps are shown in the figure below:
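The TensorRT setup above can be sketched as the following commands. The extraction path is an assumption; the CUDA path matches the one given above. Note that `set PATH` only affects the current console session, so for a permanent change use the system environment variable dialog as described in the text.

```shell
rem Assumed extraction path for TensorRT; adjust to your own.
set TRT=D:\TensorRT-8.2.1.8
set CUDA=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7

rem Make TensorRT's bin directory visible (current session only).
set PATH=%TRT%\bin;%PATH%

rem Copy the runtime DLLs and import libraries into CUDA's directories.
copy "%TRT%\lib\*.dll" "%CUDA%\bin"
copy "%TRT%\lib\*.lib" "%CUDA%\lib\x64"
```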