2022-10-31

How to convert the YOLO model you trained into a TensorRT model (.trt or .engine)



This article uses the yolov3-tiny model I trained as an example to summarize the conversion method. The approach largely follows what others have already summarized on GitHub; I am writing it down here mainly to record the points I think are worth noting and the problems I ran into during the conversion, so that it is easier to come back to later. Otherwise everything would be forgotten sooner or later. Sure enough, a good memory is no match for a bad pen!

【1】.pt model to .wts model

Note: Before converting a .pt model to a .wts model, first determine whether your trained .pt file saves only the weight parameters, or saves both the weight parameters and the model structure; the two cases are converted differently.
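If you are not sure which case your .pt file falls into, a minimal sketch like the following can tell you (this assumes an ultralytics-style checkpoint that stores its content under the 'model' key; the file name XXX.pt is just a placeholder):

import torch

# Load the checkpoint on the CPU just to inspect what it contains
ckpt = torch.load('XXX.pt', map_location='cpu')

obj = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
if isinstance(obj, torch.nn.Module):
    print('weights + structure saved -> use method ② below, no .cfg needed')
else:
    print('only weight parameters saved -> use method ① below together with the .cfg file')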
① For a .pt model that only saves the weight parameters
If only the weight parameters are saved, the model has to be converted together with its .cfg file. The conversion method is as follows:
1) Clone the yolov3 repository (archive branch) from GitHub:

git clone -b archive https://github.com/ultralytics/yolov3.git

2) Clone the tensorrtx repository from GitHub:

git clone https://github.com/wang-xinyu/tensorrtx.git

3) Copy the yolov3-tiny model you trained into the cloned yolov3/weights folder, and copy gen_wts.py from the tensorrtx/yolov3-tiny folder into the yolov3 folder.
4) Run gen_wts.py inside the yolov3 folder to convert the .pt model into a .wts model:

python3 gen_wts.py ./weights/XXX.pt

The contents of gen_wts.py are attached here:

import struct
import sys
from models import *        # Darknet model definition from ultralytics/yolov3 (archive branch)
from utils.utils import *

# Build the network from the cfg file; replace the cfg path and input size with your own
model = Darknet('cfg/yolov3-tiny.cfg', (608, 608))
weights = sys.argv[1]
device = torch_utils.select_device('0')
if weights.endswith('.pt'):  # pytorch format
    model.load_state_dict(torch.load(weights, map_location=device)['model'])
else:  # darknet format
    load_darknet_weights(model, weights)
model = model.eval()

# Write every tensor as: "<name> <length> <hex-encoded big-endian floats...>"
with open('yolov3-tiny.wts', 'w') as f:
    f.write('{}\n'.format(len(model.state_dict().keys())))
    for k, v in model.state_dict().items():
        vr = v.reshape(-1).cpu().numpy()
        f.write('{} {} '.format(k, len(vr)))
        for vv in vr:
            f.write(' ')
            f.write(struct.pack('>f', float(vv)).hex())
        f.write('\n')

Note: During the conversion, replace the .cfg path in gen_wts.py with the .cfg file you actually trained with, and remember to change the input image size so that it stays consistent with training.
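For example, assuming a custom cfg named yolov3-tiny-custom.cfg trained at an input size of 416×416 (both values are placeholders for illustration), the corresponding line in gen_wts.py would become:

from models import Darknet  # Darknet comes from the ultralytics/yolov3 archive branch

# Point to your own cfg and match the input size used during training (example values)
model = Darknet('cfg/yolov3-tiny-custom.cfg', (416, 416))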
② For a .pt model that saves both the weight parameters and the structure
For this kind of model we can convert it directly, without using the .cfg file.
1) Clone the yolov3 repository (master branch) from GitHub:

git clone -b master https://github.com/ultralytics/yolov3.git

Note: The files downloaded here are from the master branch. Be careful to distinguish them from the archive-branch files downloaded in ①.
2) Copy the yolov3-tiny model you trained into the cloned yolov3 folder.
3) Run gen_pt_wts.py inside the yolov3 folder to convert the .pt model into a .wts model:

python3 gen_pt_wts.py -w ./XXX.pt

The contents of gen_pt_wts.py are attached here (this file was not written by me; I collected it a while ago and can no longer find the original link. If the original blogger sees this, please contact me and I will add the reference link):

import sys
import argparse
import os
import struct
import torch
from utils.torch_utils import select_device


def parse_args():
    parser = argparse.ArgumentParser(description='Convert .pt file to .wts')
    parser.add_argument('-w', '--weights', required=True, help='Input weights (.pt) file path (required)')
    parser.add_argument('-o', '--output', help='Output (.wts) file path (optional)')
    args = parser.parse_args()
    if not os.path.isfile(args.weights):
        raise SystemExit('Invalid input file')
    if not args.output:
        args.output = os.path.splitext(args.weights)[0] + '.wts'
    elif os.path.isdir(args.output):
        args.output = os.path.join(
            args.output,
            os.path.splitext(os.path.basename(args.weights))[0] + '.wts')
    return args.weights, args.output


pt_file, wts_file = parse_args()

# Initialize
device = select_device('cpu')
# Load model
model = torch.load(pt_file, map_location=device)['model']  # append .float() to force FP32 if needed
#print(model)
model.to(device).eval()

# Write every tensor as: "<name> <length> <hex-encoded big-endian floats...>"
with open(wts_file, 'w') as f:
    f.write('{}\n'.format(len(model.state_dict().keys())))
    for k, v in model.state_dict().items():
        vr = v.reshape(-1).cpu().numpy()
        f.write('{} {} '.format(k, len(vr)))
        for vv in vr:
            f.write(' ')
            f.write(struct.pack('>f', float(vv)).hex())
        f.write('\n')

At this point the .pt model has been converted into a .wts model, and you can proceed to the next step. Hehe!
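Before moving on, it can be worth sanity-checking the generated .wts file. Here is a minimal sketch (assuming the output file is named yolov3-tiny.wts) that verifies the header count matches the number of tensor lines and that each line carries the number of values it declares:

# Each line after the header has the form: "<name> <length> <hex float> <hex float> ..."
with open('yolov3-tiny.wts') as f:
    lines = f.read().splitlines()

declared = int(lines[0])
print('declared tensors:', declared, '| tensor lines found:', len(lines) - 1)

for line in lines[1:]:
    parts = line.split()
    name, length, values = parts[0], int(parts[1]), parts[2:]
    assert len(values) == length, f'{name}: expected {length} values, got {len(values)}'
print('all tensor lines look consistent')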

【2】.wts model to .trt/.engine model

Converting the .wts model to the .engine model is relatively simple. Here I follow the steps in https://github.com/wang-xinyu/tensorrtx/tree/master/yolov3-tiny and describe them in detail.
1) Copy the converted .wts model into the tensorrtx/yolov3-tiny folder.
2) Perform the following steps to complete the model conversion:

cd ./tensorrtx/yolov3-tiny
mkdir build
cd build
cmake ..
make
sudo ./yolov3-tiny -s

Note:
(1) The structure of the model you train must be consistent with the official yolo structure. Do not modify the model structure during training; otherwise, even if converting to a .wts file reports no error (for example with method ② in 【1】), an error will still appear while converting to .engine, usually about a missing layer or an unnamed layer. This is mainly because the author's conversion program follows the official structure, so be careful. A quick way to check this is shown in the sketch after these notes.
(2) Remember to modify the relevant parameters of the model during the conversion. For a custom dataset this typically means things such as the number of classes and the input size in the tensorrtx source; the main parameters and where to change them are shown in the figure below:
[Figure: the main parameters to modify during conversion and where to change them]
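Regarding note (1), a quick way to eyeball whether your checkpoint still follows the official layer layout is to compare its state_dict keys with those of an unmodified official checkpoint (a minimal sketch, assuming both files are ultralytics-style .pt checkpoints; the file names are placeholders):

import torch

def layer_names(pt_path):
    ckpt = torch.load(pt_path, map_location='cpu')
    obj = ckpt['model'] if isinstance(ckpt, dict) and 'model' in ckpt else ckpt
    sd = obj.state_dict() if hasattr(obj, 'state_dict') else obj
    return list(sd.keys())

mine = layer_names('XXX.pt')              # your trained model
official = layer_names('yolov3-tiny.pt')  # unmodified official checkpoint
print('layers missing from yours:', [k for k in official if k not in mine])
print('extra layers in yours:', [k for k in mine if k not in official])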
At this point, the model conversion has succeeded, hahaha.
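As a final check, the generated engine can be deserialized from Python with the TensorRT runtime to confirm it loads correctly (a minimal sketch, assuming the TensorRT Python bindings are installed and that the serialized engine produced by ./yolov3-tiny -s is named yolov3-tiny.engine):

import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
runtime = trt.Runtime(logger)

# Deserialize the engine built in the previous step
with open('yolov3-tiny.engine', 'rb') as f:
    engine = runtime.deserialize_cuda_engine(f.read())

print('engine loaded:', engine is not None)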

Related notes
The yolov3-tiny I trained myself was based on a model trained by colleagues I work with; I mention this mainly to distinguish it from the officially released yolo models. A description of the training process on my own data will be published after I have actually trained it.

The reference links are as follows:
[1] https://github.com/wang-xinyu/tensorrtx.git
[2] https://github.com/ultralytics/yolov3.git


Origin blog.csdn.net/LJ1120142576/article/details/127614612