yolov7 ncnn Android deployment (pt->onnx->ncnn)

0 - Preface

There are many guides online for deploying yolov5 on Android, but relatively few for yolov7. Since I needed to swap in my own model, I am recording the pitfalls I stepped on.

Process: pt->onnx->ncnn.

1 - yolov7 (pt->onnx)

Download the code and weights; here I use the tiny model.

The code is from the main branch, and the weights were downloaded manually from the releases page.

(PS: If the error 'Command 'git tag' returned non-zero exit status 128.' appears during testing, download the weights manually; see reference 4.)

Run:

python export.py --weights yolov7-tiny.pt --simplify

You need to add --simplify when exporting, otherwise there will be unsupported operators when converting from onnx to ncnn.

2 - ncnn (onnx->ncnn)

Of course, for this part you can compile protobuf and ncnn yourself and run the conversion tool, but that is rather troublesome.

Instead, I can warmly recommend a website that converts models directly:

https://convertmodel.com/

Check all three checkboxes on the page.


3 - Android Deployment

Download code: https://github.com/xiang-wuu/ncnn-android-yolov7

Following the readme, download ncnn-YYYYMMDD-android-vulkan.zip and opencv-mobile-XYZ-android.zip.

I used ncnn-20210525-android-vulkan and opencv-mobile-4.5.1-android.

Open the project in Android Studio. The sdk path should be detected automatically, but the ndk path may not be found. Edit ndk.dir in the local.properties file: you can install the NDK from within Android Studio or download the installer manually (r21e corresponds to version 21.4.7075529), then point ndk.dir at the install path.
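For reference, a local.properties might end up looking like this (the paths below are illustrative placeholders for a Linux machine — substitute your own install locations, and note Windows paths need escaping, e.g. C\:\\...):

```properties
# local.properties (machine-specific; not committed to version control)
sdk.dir=/home/yourname/Android/Sdk
ndk.dir=/home/yourname/Android/Sdk/ndk/21.4.7075529
```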

4 - Compile

Once compilation succeeds, the app can be run, and packaging the apk is straightforward.

5 - Use your own model

The outputs of the yolov7 weights used by this code are a bit different from the official ones.

This is the author's model:


Here is my model:


His post-processing comes directly after the convolution layers, while the official weights add reshape and permute after the three output heads.

At first I followed the official ncnn yolov7 example and took the outputs after permute, but the app crashed and drew no detection boxes.

The two generate_proposals post-processing routines differ somewhat; the easiest fix is to likewise take the convolution outputs.

In Yolo::detect in ncnn-android-yolov7/app/src/main/jni/yolo.cpp, change the blob names:
in0 -> images
out0 -> 259
out1 -> 279
out2 -> 299

Nothing else needs to change; just recompile and run.
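The numeric names like 259/279/299 are specific to my converted model; your own conversion will likely produce different blob names. You can find them by scanning the .param file: any blob that a layer produces but no later layer consumes is a network output. A minimal stdlib sketch (the embedded param text is a shortened, hypothetical fragment, not a real converted model):

```python
def ncnn_output_blobs(param_text):
    """Return blob names that are produced but never consumed,
    i.e. the network outputs listed in an ncnn .param file."""
    produced, consumed = [], set()
    lines = param_text.strip().splitlines()
    # line 0: magic number, line 1: layer/blob counts, rest: layer lines of
    # the form: LayerType name input_count output_count inputs... outputs... params
    for line in lines[2:]:
        fields = line.split()
        if len(fields) < 4:
            continue
        n_in, n_out = int(fields[2]), int(fields[3])
        consumed.update(fields[4:4 + n_in])
        produced.extend(fields[4 + n_in:4 + n_in + n_out])
    return [b for b in produced if b not in consumed]

# Shortened, hypothetical fragment of a converted .param file
sample = """\
7767517
4 4
Input        images  0 1 images
Convolution  conv_a  1 1 images 258 0=255 1=1
Convolution  conv_b  1 1 258 259 0=255 1=1
Convolution  conv_c  1 1 258 279 0=255 1=1
"""
print(ncnn_output_blobs(sample))  # -> ['259', '279']
```

The names this prints are what you would pass to ex.extract() in yolo.cpp in place of 259/279/299.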

(PS: I noticed his model is fp16, but the code does not seem to set opt.use_fp16_arithmetic = true; I have not figured this out yet.)

References (in no particular order)

  1. https://github.com/WongKinYiu/yolov7

  2. https://github.com/xiang-wuu/ncnn-android-yolov7

  3. https://convertmodel.com/

  4. https://blog.csdn.net/m0_50837237/article/details/126055947


Origin blog.csdn.net/qq_43268106/article/details/127139216