YOLOv5 ONNX model to RKNN model

1. Setting up the RKNN model conversion environment

The ONNX model needs to be converted to an RKNN model before it can run on the RV1126 development board, so the conversion environment must be set up first.

Download the model conversion tool and related files:

Netdisk download link: Baidu Netdisk (extraction code: teuc)

Move the files to the virtual machine; in the folder you will find the Docker image file rknn-toolkit-1.7.1-docker.tar.gz and the model_convert folder.

Load the model conversion tool Docker image:

docker load --input /home/developer/rknn-toolkit/rknn-toolkit-1.7.1-docker.tar.gz

Enter the bash environment of the image

Execute the following command to map the working directory into the Docker container: /home/developer/rknn-toolkit/model_convert is the working directory on the host, it is mapped to /test inside the container, and /dev/bus/usb:/dev/bus/usb maps the host's USB bus into the container:

docker run -t -i --privileged -v /dev/bus/usb:/dev/bus/usb -v /home/developer/rknn-toolkit/model_convert:/test rknn-toolkit:1.7.1 /bin/bash

The two -v options map these host directories into the container, so their contents stay synchronized between host and container.

2. Generate a quantization image list

This step takes a set of prepared images and generates a text file listing their paths, which is used when building the RKNN model. With a real sample dataset, the RKNN toolkit can better characterize the model's input data and therefore better optimize the network structure, weights, and quantization scheme.

Switch to the model conversion working directory in the Docker environment and execute gen_list.py; this produces a text file pic_path.txt containing the image paths:

cd /test/coco_object_detect
python gen_list.py

The content of gen_list.py is as follows:

import os
import random


def main(image_dir):
    save_image_txt = './pic_path.txt'

    # Collect the full path of every image in the directory
    img_path_list = []
    for name in os.listdir(image_dir):
        img_path_list.append(os.path.join(image_dir, name))

    print('len of all', len(img_path_list))

    # Shuffle so the quantization images are listed in random order
    random.shuffle(img_path_list)

    # Write one image path per line
    with open(save_image_txt, 'w') as f:
        for img_path in img_path_list:
            f.write(img_path + '\n')


if __name__ == '__main__':
    image_dir = '/test/quant_dataset/coco_data'  # directory containing the images, roughly 500 of them
    main(image_dir)
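
After running the script, pic_path.txt contains one image path per line; rknn.build() later reads this file as the quantization dataset. A hypothetical example of its contents (the file names below are illustrative only):

/test/quant_dataset/coco_data/img_0342.jpg
/test/quant_dataset/coco_data/img_0017.jpg
/test/quant_dataset/coco_data/img_0458.jpg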

3. Convert the ONNX model to the RKNN model

Still in the model conversion working directory inside the Docker environment, run rknn_convert.py:

python rknn_convert.py

A note on memory: if this step is run inside a virtual machine on a Windows 10 host with 8 GB of RAM, not much can be allocated to the VM, and 3 GB was not enough to complete this step.

Later I ran this step directly on an Ubuntu system with 8 GB of RAM (about 6.7 GB free after the system's own usage), and both the CPU and memory were fully used during the conversion.

 

rknn_convert.py source code:

import os
from rknn.api import RKNN


ONNX_MODEL = 'best.onnx'                  # path to the ONNX model
RKNN_MODEL = './yolov5_mask_rv1126.rknn'  # path where the converted RKNN model is saved
DATASET = './pic_path.txt'                # path to the quantization dataset list

QUANTIZE_ON = True                        # whether to quantize the model

if __name__ == '__main__':

    # Create the RKNN object
    rknn = RKNN(verbose=True)

    # Check that the ONNX model file exists
    if not os.path.exists(ONNX_MODEL):
        print('model not exist')
        exit(-1)

    # Configure the model preprocessing parameters
    print('--> Config model')
    rknn.config(reorder_channel='0 1 2',          # channel order, here RGB
                mean_values=[[0, 0, 0]],          # per-channel mean subtracted during preprocessing
                std_values=[[255, 255, 255]],     # per-channel value each channel is divided by
                optimization_level=3,             # optimization level
                target_platform='rv1126',         # target platform is rv1126
                output_optimize=1,                # enable output optimization
                quantize_input_node=QUANTIZE_ON)  # also quantize the input node
    print('done')

    # Load the ONNX model
    print('--> Loading model')
    ret = rknn.load_onnx(model=ONNX_MODEL)
    if ret != 0:
        print('Load yolov5 failed!')
        exit(ret)
    print('done')

    # Build the model (quantization uses the dataset list)
    print('--> Building model')
    ret = rknn.build(do_quantization=QUANTIZE_ON, dataset=DATASET)
    if ret != 0:
        print('Build yolov5 failed!')
        exit(ret)
    print('done')

    # Export the RKNN model
    print('--> Export RKNN model')
    ret = rknn.export_rknn(RKNN_MODEL)
    if ret != 0:
        print('Export yolov5 rknn failed!')
        exit(ret)
    print('done')

    # Release RKNN resources
    rknn.release()
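
As an optional sanity check before deploying to the board, the RKNN-Toolkit simulator can run the freshly exported model from the same Docker environment. The following is only a minimal sketch, not part of the original workflow: the test image name test.jpg and the 640x640 input size are assumptions you should adjust to your model, and the raw YOLOv5 outputs still need the usual decode and NMS post-processing:

import cv2
from rknn.api import RKNN

TEST_IMAGE = 'test.jpg'  # assumed local test image
INPUT_SIZE = 640         # assumed model input resolution

rknn = RKNN()
rknn.load_rknn('./yolov5_mask_rv1126.rknn')

# Match the configured preprocessing: RGB channel order
img = cv2.imread(TEST_IMAGE)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (INPUT_SIZE, INPUT_SIZE))

# init_runtime() without a target runs on the PC simulator;
# pass target='rv1126' to run on a board connected over USB instead
ret = rknn.init_runtime()
if ret != 0:
    print('Init runtime failed!')
    exit(ret)

# inference() returns a list of numpy arrays (the raw detection heads)
outputs = rknn.inference(inputs=[img])
print([o.shape for o in outputs])

rknn.release()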


Origin blog.csdn.net/weixin_45824067/article/details/131927001