Deploying YOLOv8 to Android with Android Studio

Recently I have been learning how to deploy YOLO projects to Android phones, and I would like to share my experience. I ran into many problems during deployment; configuring the Android Studio environment was by far the most time-consuming. Even after many twists and turns, the final result was still not good, and I don't know where my process went wrong. I hope more experienced readers can offer some advice. The following is my deployment process.

1. Download the yolov8 project source code from GitHub

https://github.com/ultralytics/ultralytics

1.1 Create a virtual environment for yolov8

Reference: [Deep Learning - YOLOv8] Environment Deployment - Chun Ma and Xia's Blog - CSDN: https://blog.csdn.net/qq_43376286/article/details/131838647

Then run pip install ultralytics to install all the packages the project requires.

1.2 Download the official pre-trained weights

https://github.com/ultralytics/assets/releases/download/v0.0.0/yolov8n.pt

1.3 Train yolov8 on your own data set

1.3.1 Create data loading configuration file
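For reference, a minimal data configuration file in ultralytics' YAML format looks like the sketch below; the paths and class names are placeholders that you should replace with your own data set.

```yaml
# data.yaml - example only; point these entries at your own data set
path: /path/to/dataset   # data set root directory
train: images/train      # training images, relative to path
val: images/val          # validation images, relative to path
names:                   # class id -> class name
  0: cat
  1: dog
```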

1.3.2 Training customized data

yolo task=detect mode=train model=yolov8n.pt data=<path to data configuration file> batch=16 epochs=100 imgsz=640 workers=16 device=0

After training completes, the best weight file will be generated in the weights directory under runs (typically runs/detect/train/weights/best.pt).

2. Model conversion

To deploy your trained model on a phone, you need to convert the generated .pt file into the ncnn format used by the Android demo. The conversion takes the following two steps:

2.1 Convert pt file to onnx

2.1.1 Modify class C2f(nn.Module) in ultralytics/ultralytics/nn/modules/block.py as follows:
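The original screenshot of this change is not reproduced here. The modification that circulates with this tutorial replaces the chunk() call in C2f.forward with a single slice, because chunk() exports poorly to ncnn. The C2fDemo class below is a minimal stand-in for the real ultralytics class (assumed structure, not the upstream code), just enough to show that the rewritten forward produces the same output:

```python
import torch
import torch.nn as nn

class C2fDemo(nn.Module):
    """Minimal stand-in for ultralytics' C2f, showing only the forward change."""
    def __init__(self, c1=8, c2=8, n=1):
        super().__init__()
        self.c = c2 // 2
        self.cv1 = nn.Conv2d(c1, 2 * self.c, 1)
        self.cv2 = nn.Conv2d((2 + n) * self.c, c2, 1)
        self.m = nn.ModuleList(nn.Conv2d(self.c, self.c, 3, padding=1) for _ in range(n))

    def forward_original(self, x):
        # upstream-style forward: chunk() does not export cleanly to ncnn
        y = list(self.cv1(x).chunk(2, 1))
        y.extend(m(y[-1]) for m in self.m)
        return self.cv2(torch.cat(y, 1))

    def forward(self, x):
        # export-friendly forward: keep the full cv1 output and slice only once
        x = self.cv1(x)
        x = [x, x[:, self.c:, ...]]
        x.extend(m(x[-1]) for m in self.m)
        x.pop(1)  # drop the duplicated second half before concatenating
        return self.cv2(torch.cat(x, 1))
```

Since the full cv1 output already contains both halves, concatenating it with the bottleneck outputs is numerically identical to the original chunk-based version.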

2.1.2 Modify class Detect(nn.Module) in ultralytics/ultralytics/nn/modules/head.py as follows:
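Again, the blog's screenshot of the exact diff is not reproduced here. In outline, the commonly circulated change makes Detect.forward skip the Python-side DFL box decoding and return the raw head outputs flattened and concatenated, leaving decoding to yolo.cpp. The sketch below only demonstrates that tensor reshaping on dummy feature maps (shapes assume a 640x640 COCO-style model, an assumption on my part):

```python
import torch

# hypothetical example: three detection heads at strides 8/16/32 for 640x640 input
no = 144  # channels per head: 64 (dfl box regression) + 80 (coco classes)
feats = [torch.randn(1, no, 640 // s, 640 // s) for s in (8, 16, 32)]

# the modified Detect.forward simply flattens each head to (batch, no, h*w),
# concatenates along the anchor dimension, and transposes; no decoding in python
pred = torch.cat([f.view(f.shape[0], no, -1) for f in feats], 2).permute(0, 2, 1)
print(pred.shape)  # one row per anchor: 80*80 + 40*40 + 20*20 = 8400 anchors
```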

2.1.3 Create and run the pt-to-onnx file

If it runs successfully, an onnx model file corresponding to best.pt will be generated.

2.2 Convert the onnx file to ncnn format

Visit the one-click conversion website ("Convert Caffe, ONNX, TensorFlow to NCNN, MNN, Tengine with one click"): https://convertmodel.com/

Select the best.onnx file generated above and check the option to generate an fp16 model.

 

After conversion, two files will be generated: a .param file (the network structure) and a .bin file (the weights).

 

3. Prepare the Android project

GitHub - FeiGeChuanShu/ncnn-android-yolov8 (real-time yolov8 Android demo with ncnn): https://github.com/FeiGeChuanShu/ncnn-android-yolov8

3.1 Place the ncnn model files

3.2 Modify yolo.cpp

3.2.1 Modify the file name format of the model to be loaded

3.2.2 Modify the input and output layer names of the model

3.2.3 Change the category names to your own, as follows

3.2.4 Number of categories 

3.3 Modify yolov8ncnn.cpp

3.3.1 Add model

3.3.2 Add the following:

3.4 strings.xml

Add the new model to the app's model selection list

Once the modifications are complete, we can connect a real device and check how the deployed model performs.

Deployment effect

My deployment process follows this Bilibili tutorial: "yolov8 deploys Android ncnn, the whole process in one go, it will definitely work": https://www.bilibili.com/video/BV1du411577U/?spm_id_from=333.337.search-card.all.click&vd_source=f37acd6c5a247f45905f51875e5d19e7 I don't know what went wrong, but my final deployment result looked like this:

I've been stuck here, hoping someone can give me some advice.


Origin blog.csdn.net/weixin_50585794/article/details/132823525