TensorRT Jetson C++ YOLOv7 inference

Refer to the source code and modify it as follows. To adapt cpp/norm/yolo.hpp to your own training data:

- In class YOLO, the input and output tensor names are declared as const char* INPUT_BLOB_NAME = "images"; and const char* OUTPUT_BLOB_NAME = "output";. These must match the ONNX model you converted yourself; open the model with netron to check the actual input and output node names.
- In the static void generate_yolo_proposals() function, at line 119, const int num_class = 80: the 80 must be changed to your own number of categories. The [80][3] array around line 199 must likewise be kept consistent with your number of categories.
- At line 284, in static void draw_objects(), fill in class_names[] = {} with your own model's category names (see the sketch below).

Summary: inferring YOLOv7-tiny with TensorRT (C++). This project uses the YOLOv7-tiny model and deploys it with C++; here I summarize the pitfalls I ran into, and I welcome discussion. TensorRT is NVIDIA's high-performance deep-learning inference SDK. This SDK contains deep learning inference... https://zhuanlan.zhihu.com/p/580268047?utm_id=0
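The following is a minimal sketch of what the edits listed above might look like for a hypothetical 3-class model. The class names (person, car, bicycle) and the value 3 are placeholders of my own; the identifiers INPUT_BLOB_NAME, OUTPUT_BLOB_NAME, num_class and class_names come from the original code, and the exact line numbers and surrounding code in cpp/norm/yolo.hpp will differ in your copy.

```cpp
// Sketch of the edits to cpp/norm/yolo.hpp for a hypothetical 3-class model.
// Only the constants shown here are discussed in the text; everything else
// in the real file stays as it is.

class YOLO {
    // Must match the input/output node names that netron shows for your ONNX model.
    const char* INPUT_BLOB_NAME  = "images";
    const char* OUTPUT_BLOB_NAME = "output";
    // ... rest of the class unchanged ...
};

// Inside generate_yolo_proposals(): change 80 to your own number of categories.
static const int num_class = 3;   // placeholder: 3 classes instead of 80

// The [80][3] array mentioned above (a per-class table in the original code)
// should also be kept consistent with num_class.

// Inside draw_objects(): one name per class, in the same order used for training.
static const char* class_names[] = {
    "person", "car", "bicycle"    // placeholder names
};
```

Keeping num_class and class_names[] consistent with each other, and in the same order used during training, is what ties the post-processing in generate_yolo_proposals() to the labels drawn by draw_objects().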






Origin: https://blog.csdn.net/weixin_42064949/article/details/130928233