CenterFace model to TensorRT

1. GitHub open source code

The open source code for CenterFace TensorRT inference is at https://github.com/linghu8812/tensorrt_inference/tree/master/CenterFace , the authors' original code is at https://github.com/Star-Clouds/CenterFace , and the paper is on arXiv at https://arxiv.org/abs/1911.03599 .

2. Rewrite ONNX model

The authors published two open source models on GitHub. From the conversion log it can be seen that both models were exported from PyTorch; however, the authors open sourced neither the PyTorch model-building code nor the PyTorch weights. Viewing the open source ONNX model in Netron shows an input resolution of 32×32. Because the input size of a TensorRT inference engine is fixed, the input and output sizes of the ONNX model need to be modified. As shown in the figures below, the input resolution is changed to 640×640 and the output resolution to 160×160 by running

python3 export_onnx.py

Figure 1 Input size
Figure 2 Output size

3. Convert ONNX model to TensorRT model

3.1 Overview

The TensorRT model is the inference engine of TensorRT, and the code is implemented in C++. The relevant configuration is written in the config.yaml file: if the path engine_file exists, the engine is read from engine_file; otherwise it is built from onnx_file and serialized to engine_file.

void CenterFace::LoadEngine() {
    // Load a cached engine if one exists, otherwise build it from the ONNX model
    std::fstream existEngine;
    existEngine.open(engine_file, std::ios::in);
    if (existEngine) {
        readTrtFile(engine_file, engine);   // deserialize the saved .trt engine
        assert(engine != nullptr);
    } else {
        // parse the ONNX model, build the engine, and serialize it to engine_file
        onnxToTRTModel(onnx_file, engine_file, engine, BATCH_SIZE);
        assert(engine != nullptr);
    }
}

The config.yaml file only needs to set the inference batch size, the image size, and the detection thresholds. This is an advantage of an anchor-free model over models that regress from anchors, which would also require anchor parameters to be configured.

CenterFace:
  onnx_file:     "../centerface.onnx"
  engine_file:   "../centerface.trt"
  BATCH_SIZE:    1
  INPUT_CHANNEL: 3
  IMAGE_WIDTH:   640
  IMAGE_HEIGHT:  640
  obj_threshold: 0.5
  nms_threshold: 0.45

When converting the raw image data into a tensor, the aspect ratio of the image needs to be maintained. The corresponding code is as follows:

// Scale by the smaller axis ratio so the resized image fits inside the network input
float ratio = std::min(float(IMAGE_WIDTH) / float(src_img.cols), float(IMAGE_HEIGHT) / float(src_img.rows));
cv::Mat flt_img = cv::Mat::zeros(cv::Size(IMAGE_WIDTH, IMAGE_HEIGHT), CV_8UC3);
cv::Mat rsz_img;
cv::resize(src_img, rsz_img, cv::Size(), ratio, ratio);
// Paste the resized image into the top-left corner; the rest stays zero padding
rsz_img.copyTo(flt_img(cv::Rect(0, 0, rsz_img.cols, rsz_img.rows)));
flt_img.convertTo(flt_img, CV_32FC3);
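Because the image is only padded on the right and bottom, mapping a detection from the 640×640 network input back to the source image just divides by the same ratio. A minimal sketch of that bookkeeping (hypothetical helper names, not code from the repo):

```python
def letterbox_ratio(img_w, img_h, input_w=640, input_h=640):
    """Same scale factor as the C++ code: the smaller of the two axis ratios."""
    return min(input_w / img_w, input_h / img_h)

def box_to_original(box, ratio):
    """Map (x1, y1, x2, y2) from network coordinates back to the source image."""
    return tuple(v / ratio for v in box)
```

For example, a 1280×720 frame gives a ratio of 0.5, so a box detected at (100, 50, 200, 150) in the network input corresponds to (200, 100, 400, 300) in the original image.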

3.2 Compile

Compile the project with the following command to generate CenterFace_trt:

mkdir build && cd build
cmake ..
make -j

3.3 Run

Run the project with the following command to get the inference results:

./CenterFace_trt ../config.yaml ../samples

4. Inference results

The inference result is shown in the figure below:
Inference result

Origin blog.csdn.net/linghu8812/article/details/109549702