Retinaface TensorRT Python/C++ deployment

Bilibili video tutorial

https://www.bilibili.com/video/BV1Nv4y1K727/

GitHub repository

https://github.com/Monday-Leo/Retinaface_Tensorrt

Project Description

  • Accelerates Retinaface inference with TensorRT
  • Supports Windows 10 and Linux
  • Supports Python and C++

Environment

  • TensorRT 8.2.1.8
  • CUDA 10.2, cuDNN 8.2.1 (note: the two CUDA 10.2 patches must also be installed)
  • OpenCV 3.4.6
  • CMake 3.17.1
  • VS 2017
  • GTX1650

Run the case (Windows)

Clone the Pytorch_Retinaface repository and this repository. The weight file is available at https://pan.baidu.com/s/12nl4d_oKrj2aLXEKYcwxiQ (extraction code: l7ls)

git clone https://github.com/biubug6/Pytorch_Retinaface
git clone https://github.com/Monday-Leo/Retinaface_Tensorrt

Generate WTS model

Copy gen_wts.py from this repository, together with the downloaded weight file, into the Pytorch_Retinaface directory

import argparse

parser = argparse.ArgumentParser(description='Retinaface')
parser.add_argument('-m', '--trained_model', default='./weights/mobilenet0.25_Final.pth', type=str)
parser.add_argument('--network', default='mobile0.25', help='mobile0.25 or resnet50')
args = parser.parse_args()

Modify the default parameters to point at your weight file and model type (mobile0.25 or resnet50), then run

python gen_wts.py

This generates the mobile0_25.wts or resnet50.wts model
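For reference, the .wts file is a plain-text dump of the network weights in the tensorrtx convention: a first line with the tensor count, then one line per tensor holding its name, element count, and values as big-endian float32 hex words. Below is a minimal sketch of such a writer, using a toy dict of plain floats in place of a real PyTorch state_dict (the exact whitespace produced by gen_wts.py may differ):

```python
import struct

def write_wts(weights, path):
    """Write name -> list-of-floats pairs in the tensorrtx .wts text format."""
    with open(path, "w") as f:
        f.write("{}\n".format(len(weights)))
        for name, values in weights.items():
            # tensor name, element count, then each value as big-endian float32 hex
            f.write("{} {}".format(name, len(values)))
            for v in values:
                f.write(" " + struct.pack(">f", float(v)).hex())
            f.write("\n")

# Toy example with two fake layers (real weights come from the .pth file).
write_wts({"conv1.weight": [0.5, -1.0], "conv1.bias": [0.0]}, "demo.wts")
print(open("demo.wts").read())
```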

Configure C++ dependencies

If OpenCV and TensorRT are already installed, you can skip the following steps.

OpenCV configuration

1. Download address

2. Run the downloaded executable file and extract OpenCV to a directory of your choice, for example D:\projects\opencv

3. My Computer -> Properties -> Advanced System Settings -> Environment Variables: find Path in the system variables (create it if it does not exist), double-click to edit, add the OpenCV bin path, such as D:\projects\opencv\build\x64\vc15\bin, and save.

TensorRT configuration

1. Download the version suitable for the Windows platform from the TensorRT official website. Download address

2. Copy all .lib files under TensorRT/lib to cuda/v10.2/lib/x64, copy all .dll files under TensorRT/lib to cuda/v10.2/bin, and copy all .h files under TensorRT/include to cuda/v10.2/include.
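The copy step above can be scripted. The sketch below uses Python's standard library and creates a dummy directory layout purely so it is runnable as-is; trt_dir and cuda_dir are placeholders for the real TensorRT and cuda/v10.2 install paths:

```python
import glob
import os
import shutil

# Placeholder directories standing in for the real install locations.
trt_dir = "TensorRT-8.2.1.8"
cuda_dir = os.path.join("cuda", "v10.2")

# Dummy layout so the sketch runs without a real TensorRT/CUDA install.
for d in ("lib", "include"):
    os.makedirs(os.path.join(trt_dir, d), exist_ok=True)
for d in (os.path.join("lib", "x64"), "bin", "include"):
    os.makedirs(os.path.join(cuda_dir, d), exist_ok=True)
for name in ("lib/nvinfer.lib", "lib/nvinfer.dll", "include/NvInfer.h"):
    open(os.path.join(trt_dir, name), "w").close()

# The actual step: .lib -> lib/x64, .dll -> bin, .h -> include.
for pattern, dest in [("lib/*.lib", "lib/x64"),
                      ("lib/*.dll", "bin"),
                      ("include/*.h", "include")]:
    for src in glob.glob(os.path.join(trt_dir, pattern)):
        shutil.copy(src, os.path.join(cuda_dir, dest))
```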

3. My Computer -> Properties -> Advanced System Settings -> Environment Variables: find Path in the system variables (create it if it does not exist), double-click to edit, add the TensorRT/lib path, such as G:\c++\TensorRT-8.2.1.8\lib, and save.

CMake

Open the CMakeLists.txt of this repository and modify the OpenCV and TensorRT directories

set(OpenCV_DIR "G:\\c++\\paddle_test\\opencv\\build")
set(TRT_DIR "G:\\c++\\TensorRT-8.2.1.8")
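For context, here is a minimal sketch of how these two variables are typically consumed in such a CMakeLists.txt; the repository's actual file differs in detail, and the target and source names below are placeholders:

```cmake
# Minimal sketch; OpenCV_DIR feeds find_package, TRT_DIR supplies the
# TensorRT headers and libraries directly.
cmake_minimum_required(VERSION 3.17)
project(retinaface_trt LANGUAGES CXX CUDA)

set(OpenCV_DIR "G:\\c++\\paddle_test\\opencv\\build")
set(TRT_DIR "G:\\c++\\TensorRT-8.2.1.8")

find_package(OpenCV REQUIRED)                      # uses OpenCV_DIR
include_directories(${OpenCV_INCLUDE_DIRS} ${TRT_DIR}/include)
link_directories(${TRT_DIR}/lib)

add_executable(retina_mnet retina_mnet.cpp decode.cu)
target_link_libraries(retina_mnet ${OpenCV_LIBS} nvinfer cudart)
```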

Create a new build folder in this repository's directory, open CMake, select this repository directory as the source and the newly created build directory as the build directory, then click the Configure button at the lower left.

Select your Visual Studio version, such as 2017, select x64 in the second box, and then click Finish.

CMake will automatically load CMakeLists.txt and locate the libraries; if everything is found, configuration completes without errors.

If a red warning appears, the highlighted entry needs to be corrected. For example, if the CUDA directory is not found, add your own CUDA path and click Configure again. Once everything is normal, click Generate, and finally click Open Project.

Compile

You can modify the size of the input image in decode.h. A larger size gives more accurate detection but slower inference; generally, keep the defaults.

static const int INPUT_H = 480;
static const int INPUT_W = 640;

Change Debug to Release at the top of the interface, right-click the retina_mnet or retina_r50 project, and click Rebuild. After compilation succeeds, open build/Release; you will see the generated .exe executable.

C++ run

Copy the wts model generated in the first step into the folder containing the exe, open a cmd window in that directory, and run

retina_mnet -s

If it runs normally, the program is converting the wts file into a serialized engine model, which takes roughly 10-20 minutes. After generation completes, copy pictures/test.jpg from this repository into the folder and run the test

retina_mnet -d

Python deployment

Right-click the project, open Properties, change the configuration type to DLL, and build again; retina_mnet.dll will be generated under Release. Copy python_trt.py from this repository into the DLL folder.

Set the model path, the DLL path, and the path of the image you want to predict. Note that the model path must be a bytes literal (prefixed with b'').

det = Detector(model_path=b"./retina_mnet.engine",dll_path="./retina_mnet.dll")  # b'' is needed
img = cv2.imread("test.jpg")

Then run python_trt.py directly. The biggest advantage of the Python interface is that it accepts images as numpy arrays, which makes it very easy to integrate into a larger project.
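The b'' requirement exists because the Python side hands the path to the C++ DLL through ctypes, and a C char* parameter accepts Python bytes, not str. A small, repository-independent illustration:

```python
import ctypes

# bytes are accepted as C char* data...
buf = ctypes.create_string_buffer(b"./retina_mnet.engine")
print(buf.value)

# ...while a plain str is rejected with a TypeError.
try:
    ctypes.create_string_buffer("./retina_mnet.engine")
except TypeError:
    print("str rejected")
```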

References

https://github.com/wang-xinyu/tensorrtx


Origin blog.csdn.net/weixin_45747759/article/details/124534079