[yolov5] Exporting a PyTorch model as an ONNX model

The plan is to train a .pt model with the official YOLOv5, convert it into an RKNN model, and then run detection with that model on a Rockchip development board. However, the official structure is not NPU-friendly, so a version with an improved structure is used:

  1. Change the Focus layer to a Conv layer
  2. Change the Swish activation function to the ReLU activation function (both changes are sketched below)
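
A minimal PyTorch sketch of the two changes; the layer sizes and the kernel/stride of the replacement conv are illustrative, not the fork's exact code:

import torch
import torch.nn as nn

class Focus(nn.Module):
    """Original yolov5 stem: slice a 3-ch 640x640 image into 12-ch 320x320, then convolve."""
    def __init__(self, c1=3, c2=32, k=3):
        super().__init__()
        self.conv = nn.Conv2d(c1 * 4, c2, k, stride=1, padding=k // 2)

    def forward(self, x):
        # stack the four pixel phases along the channel axis (space-to-depth)
        return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                                    x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))

# NPU-friendly stem: one strided conv with the same output shape,
# and ReLU in place of Swish/SiLU as the activation
npu_stem = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=6, stride=2, padding=2),
    nn.ReLU(inplace=True),
)

x = torch.zeros(1, 3, 640, 640)
print(Focus()(x).shape, npu_stem(x).shape)  # both torch.Size([1, 32, 320, 320])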

The bundled pre-trained model is an improved yolov5s structure trained to predict the 80 classes of the COCO dataset. Let's convert the model together!

1. First set up the yolov5 environment, make sure detect.py runs detection correctly, and put your trained .pt model in the weights directory. I name it best.pt here.
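
For example, a quick smoke test might look like this (flag names follow stock yolov5 and may differ slightly between versions):

python detect.py --weights ./weights/best.pt --source data/images --img-size 640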


2. Install the onnx library:

pip install onnx

3. Enter the following command to export the model (the weights, img, and batch parameters can be omitted to use their defaults):

python models/export.py --weights ./weights/best.pt --img 640 --batch 1

The script then traverses the network layers, prints a summary of the model's layer and parameter counts, and finally saves ./weights/best.onnx.
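
Under the hood, export.py boils down to a torch.onnx.export call. A simplified sketch (paths, names, and opset here are illustrative, and it must be run from the repo root so the pickled model classes resolve):

import torch

# load the yolov5 checkpoint (the .pt file stores the full model object)
ckpt = torch.load('./weights/best.pt', map_location='cpu')
model = ckpt['model'].float().eval()

dummy = torch.zeros(1, 3, 640, 640)  # --batch 1, --img 640, 3-channel input
torch.onnx.export(model, dummy, './weights/best.onnx',
                  input_names=['images'], output_names=['output'],
                  opset_version=11)  # opset 11+ avoids the Resize warning seen below

A sample run of the actual script: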

(yolov5) dzh@dzh-Lenovo-Legion-Y7000:~/airockchip-yolov5$ python models/export.py --weights ./weights/best.pt --img 640 --batch 1
Namespace(batch_size=1, img_size=[640, 640], weights='./weights/best.pt')
Fusing layers... 
Model Summary: 140 layers, 7.2627e+06 parameters, 0 gradients

Starting TorchScript export with torch 1.12.1+cu102...
/home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/jit/_trace.py:967: TracerWarning: Encountering a list at the output of the tracer might cause the trace to be incorrect, this is only valid if the container structure does not change based on the module's inputs. Consider using a constant container instead (e.g. for `list`, use a `tuple` instead. for `dict`, use a `NamedTuple` instead). If you absolutely need this and know the side effects, pass strict=False to trace() to allow this behavior.
  module._c._create_method_from_trace(
TorchScript export success, saved as ./weights/best.torchscript.pt

Starting ONNX export with onnx 1.12.0...
/home/dzh/airockchip-yolov5/./models/yolo.py:103: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if augment:
/home/dzh/airockchip-yolov5/./models/yolo.py:128: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if profile:
/home/dzh/airockchip-yolov5/./models/yolo.py:143: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if profile:
/home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages/torch/onnx/symbolic_helper.py:621: UserWarning: You are trying to export the model with onnx:Resize for ONNX opset version 10. This operator might cause results to not match the expected results by PyTorch.
ONNX's Upsample/Resize operator did not match Pytorch's Interpolation until opset 11. Attributes to determine how to transform the input were added in onnx:Resize in opset 11 to support Pytorch's behavior (like coordinate_transformation_mode and nearest_mode).
We recommend using opset 11 and above for models using this operator.
  warnings.warn(
ONNX export success, saved as ./weights/best.onnx

Export complete (4.98s). Visualize with https://github.com/lutzroeder/netron.
(yolov5) dzh@dzh-Lenovo-Legion-Y7000:~/airockchip-yolov5$ 
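Before converting further, it's worth a quick sanity check that the exported file is a valid ONNX graph. A minimal sketch using the onnx Python API:

import onnx

model = onnx.load('./weights/best.onnx')
onnx.checker.check_model(model)  # raises if the graph is structurally invalid
print('opset:', model.opset_import[0].version)
print('inputs:', [i.name for i in model.graph.input])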

4. The network structure of the exported onnx model can be viewed on the Netron website:

(screenshot: the exported onnx graph viewed in Netron)
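
Netron can also be run locally instead of through the website (assuming the netron pip package):

pip install netron
netron ./weights/best.onnx    # serves the viewer, typically at http://localhost:8080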
As you can see, the converted structure is messy, and some nodes could actually be skipped to slim down the model. ONNX model-modification tools can be used for this; the instructions are all in the README.


Here we use onnx-simplifier, which can be installed via pip:

Collecting onnx-simplifier
  Downloading onnx_simplifier-0.4.8-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 203.7 kB/s eta 0:00:00
Requirement already satisfied: onnx in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnx-simplifier) (1.12.0)
Collecting rich
  Downloading rich-12.6.0-py3-none-any.whl (237 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 237.5/237.5 kB 191.1 kB/s eta 0:00:00
Requirement already satisfied: numpy>=1.16.6 in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnx->onnx-simplifier) (1.23.3)
Requirement already satisfied: protobuf<=3.20.1,>=3.12.2 in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnx->onnx-simplifier) (3.19.6)
Requirement already satisfied: typing-extensions>=3.6.2.1 in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnx->onnx-simplifier) (4.4.0)
Collecting commonmark<0.10.0,>=0.9.0
  Downloading commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 51.1/51.1 kB 215.0 kB/s eta 0:00:00
Collecting pygments<3.0.0,>=2.6.0
  Downloading Pygments-2.13.0-py3-none-any.whl (1.1 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.1/1.1 MB 142.8 kB/s eta 0:00:00
Installing collected packages: commonmark, pygments, rich, onnx-simplifier
Successfully installed commonmark-0.9.1 onnx-simplifier-0.4.8 pygments-2.13.0 rich-12.6.0
(yolov5) dzh@dzh-Lenovo-Legion-Y7000:~/airockchip-yolov5$ python -m onnxsim ./weights/red.onnx ./weights/red2.onnx
Installing onnxruntime by `/home/dzh/anaconda3/envs/yolov5/bin/python -m pip install --user onnxruntime`, 
please wait for a moment..
Collecting onnxruntime
  Downloading onnxruntime-1.12.1-cp38-cp38-manylinux_2_27_x86_64.whl (4.9 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.9/4.9 MB 52.8 kB/s eta 0:00:00
Collecting coloredlogs
  Downloading coloredlogs-15.0.1-py2.py3-none-any.whl (46 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 46.0/46.0 kB 74.5 kB/s eta 0:00:00
Requirement already satisfied: protobuf in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnxruntime) (3.19.6)
Collecting sympy
  Downloading sympy-1.11.1-py3-none-any.whl (6.5 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.5/6.5 MB 72.3 kB/s eta 0:00:00
Requirement already satisfied: packaging in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnxruntime) (21.3)
Requirement already satisfied: numpy>=1.21.0 in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from onnxruntime) (1.23.3)
Collecting flatbuffers
  Downloading flatbuffers-22.9.24-py2.py3-none-any.whl (26 kB)
Collecting humanfriendly>=9.1
  Downloading humanfriendly-10.0-py2.py3-none-any.whl (86 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 86.8/86.8 kB 129.3 kB/s eta 0:00:00
Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/dzh/anaconda3/envs/yolov5/lib/python3.8/site-packages (from packaging->onnxruntime) (3.0.9)
Collecting mpmath>=0.19
  Downloading mpmath-1.2.1-py3-none-any.whl (532 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 532.6/532.6 kB 128.3 kB/s eta 0:00:00
Installing collected packages: mpmath, flatbuffers, sympy, humanfriendly, coloredlogs, onnxruntime

Then run python -m onnxsim ./weights/red.onnx ./weights/red2.onnx to produce the optimized onnx model. A table shows the model before and after optimization.

(screenshot: onnxsim's comparison table of node counts before and after optimization)
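
The same simplification can also be driven from Python; this sketch follows onnxsim's documented API (file names are the same ones used above):

import onnx
from onnxsim import simplify

model = onnx.load('./weights/red.onnx')
model_simp, check = simplify(model)  # constant folding + redundant-node removal
assert check, "simplified model failed the equivalence check"
onnx.save(model_simp, './weights/red2.onnx')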


Summary of possible problems:

ONNX export failure: No module named 'onnx'
Reason: the onnx library is not installed, so the module cannot be found; install it with pip install onnx.
RuntimeError: Given groups=1, weight of size [32, 12, 3, 3], expected input[1, 3, 640, 640] to have 12 channels, but got 3 channels instead
The model's first convolution expects a 12-channel input (the Focus slicing stacks a 3-channel image into 12 channels), but it was fed a raw 3-channel image. Make sure the input matches the channel count the model expects before feeding the image, or adjust the model's stem.
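
A tiny reproduction of that mismatch, with shapes taken from the error message:

import torch
import torch.nn as nn

conv = nn.Conv2d(12, 32, 3)        # first conv from the error: weight [32, 12, 3, 3]
img = torch.zeros(1, 3, 640, 640)  # raw 3-channel image

# conv(img)  # RuntimeError: expected input[1, 3, 640, 640] to have 12 channels

# the Focus-style slicing produces the 12 channels the conv expects:
sliced = torch.cat([img[..., ::2, ::2], img[..., 1::2, ::2],
                    img[..., ::2, 1::2], img[..., 1::2, 1::2]], 1)
print(conv(sliced).shape)          # torch.Size([1, 32, 318, 318])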

Original post: blog.csdn.net/qq_42257666/article/details/127244265