By reading articles from many experts on the Internet, I avoided a number of pitfalls, so I am recording them here.
I trained with the latest YOLOv5-6.0 and ran inference with OpenCV 3.
The conversion from pt to onnx went smoothly, but calling inference raised an error. Searching online showed this is a known trap; one expert blamed the slice operation:
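For context, the "slice" in question is the space-to-depth slicing in YOLOv5's Focus layer, which some older cv2.dnn versions failed to import from ONNX. A numpy sketch of that operation (my simplification, not the repo's exact code):

```python
import numpy as np

def focus_slice(x):
    """Space-to-depth slicing as in YOLOv5's Focus layer (simplified sketch).

    x: array of shape (N, C, H, W) with even H and W.
    Returns an array of shape (N, 4*C, H//2, W//2): the image is split
    into four strided sub-images that are stacked along the channel axis.
    These strided slices are the ONNX ops older OpenCV versions choked on.
    """
    return np.concatenate([
        x[..., ::2, ::2],   # even rows, even cols
        x[..., 1::2, ::2],  # odd rows, even cols
        x[..., ::2, 1::2],  # even rows, odd cols
        x[..., 1::2, 1::2], # odd rows, odd cols
    ], axis=1)
```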
https://blog.csdn.net/nihate/article/details/112731327
Another expert said the network contains a layer along the lines of
B, N, C = inputs.shape
torch.zeros(B, C, dtype=torch.long, device=device)
which creates an all-zero tensor from the existing dimensions B and C. During ONNX export, .shape returns tensors rather than Python ints, and the resulting graph fails when cv.dnn loads it; the shape values should be cast to int before the tensor is created.
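A minimal sketch of the fix that article describes (function name is mine; the idea is just to cast the traced shape values to plain ints):

```python
import torch

def make_zeros(inputs):
    """Create an all-zero (B, C) tensor from inputs of shape (B, N, C).

    During ONNX export/tracing, the entries of tensor.shape can be traced
    as tensors; casting them to int keeps torch.zeros receiving plain
    Python ints, which avoids the shape-as-tensor node in the graph.
    """
    B, N, C = inputs.shape
    return torch.zeros(int(B), int(C), dtype=torch.long, device=inputs.device)
```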
https://www.cnblogs.com/xiaxuexiaoab/p/15654972.html
Another expert said to simply upgrade OpenCV to 4.5+:
https://blog.csdn.net/sinat_38685124/article/details/119969668
Finally, I also found this problem on Stack Overflow, where the suggestion was to change the type of the input blob.
Here is what I tried myself:
(1) First, I changed the type of the blob, but the error occurred while reading the model, before any input data was fed, so this didn't help.
(2) Then I patched the OpenCV source, replacing every CV_32S with CV_32F, but that didn't work either.
(3) Next, I modified the network structure, removing or replacing the slice and upsampling operations. That didn't succeed either; I may have made a mistake somewhere.
(4) Finally, I switched to OpenCV 4.5.5 to load the ONNX model, and inference succeeded.