- Required:
  - RK3588 with the Ubuntu 20 system installed
  - A computer or virtual machine with Ubuntu 18 installed
Anaconda Tutorial
YOLOv5 Tutorial
After completing the above two tutorials, you should have obtained your own best.pt file.
- In the file models/yolo.py, change the forward function of the Detect class (why this matters, and an optional check, are noted after the two code blocks below):
```python
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

        if not self.training:  # inference
            if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)

            if isinstance(self, Segment):  # (boxes + masks)
                xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4)
                xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf.sigmoid(), mask), 4)
            else:  # Detect (boxes only)
                xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
                xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf), 4)
            z.append(y.view(bs, self.na * nx * ny, self.no))

    return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)
```
Change to:
```python
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
    return x
```
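The later steps check that the exported ONNX model has three output nodes: the shortened forward returns exactly those three raw feature maps from the detection head instead of the decoded predictions, which is what the RKNN post-processing expects. If you want to confirm the change took effect before exporting, a minimal check could look like the sketch below (run from the yolov5 folder; attempt_load's signature is assumed from a recent YOLOv5 release):

```python
import torch
from models.experimental import attempt_load

# Load your trained weights on the CPU and run a dummy 640x640 input through them.
model = attempt_load('best.pt', device=torch.device('cpu'))
model.eval()
with torch.no_grad():
    y = model(torch.zeros(1, 3, 640, 640))

# With the shortened forward, y should be a list of three raw feature maps,
# shaped (1, 3*(nc+5), 80, 80), (1, 3*(nc+5), 40, 40), (1, 3*(nc+5), 20, 20).
print(type(y), [t.shape for t in y])
```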
- In export.py, change the following statement in the run function:
```python
shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape
```
Change to:
```python
shape = tuple((y[0] if isinstance(y, tuple) else y))  # model output shape
```
(With the modified forward, y is now a list of feature maps rather than a tensor, so the original call to .shape would raise an error.)
- Copy the best.pt file from the weights folder of your training run, runs/train/exp/weights/best.pt, into the same directory as export.py.
- Ensure that the working directory is the yolov5 main folder, then execute in the console:
```
cd yolov5
python export.py --weights best.pt --img 640 --batch 1 --include onnx --opset 12
```
- A best.onnx file should now appear in the main folder. Check whether the model is correct in Netron:
  - Click the menu in the upper left -> Properties...
  - Check OUTPUTS on the right: if there are three output nodes, the ONNX model conversion is successful (an optional scripted check is sketched below).
  - If the converted best.onnx model does not have three output nodes, there is no point in trying the next step; it will only report various errors.
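If you prefer a command-line check instead of Netron, the output nodes can also be listed with the onnx Python package (an extra install, pip install onnx, is assumed here):

```python
import onnx

# Load the exported model and list its graph outputs.
model = onnx.load("best.onnx")
outputs = [o.name for o in model.graph.output]
print(f"{len(outputs)} output node(s): {outputs}")

# For the RKNN conversion we expect the three raw detection-head feature maps.
assert len(outputs) == 3, "export did not produce three output nodes"
```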
- I use Ubuntu 18.04 installed in a VMware virtual machine. Note that this step is not performed on the RK3588, but on your computer or virtual machine.
- rknn-toolkit2-1.4.0 requires Python 3.6, so Miniconda needs to be installed to manage the environment.
- Install Miniconda for Linux
  - Go to the directory containing the downloaded Miniconda3-latest-Linux-x86_64.sh and run:
```
chmod +x Miniconda3-latest-Linux-x86_64.sh
./Miniconda3-latest-Linux-x86_64.sh
```
  - Accept every prompt until the installation is complete.
  - After the installation succeeds, reopen the terminal.
  - If the installation succeeded, there should be a (base) prefix in front of the terminal prompt.
  - If the installation failed, refer to another Miniconda3 installation tutorial.
- Create a virtual environment:
```
conda create -n rknn3.6 python=3.6
```
- Activate the virtual environment:
```
conda activate rknn3.6
```
- When activation succeeds, there should be a (rknn3.6) prefix in front of the prompt.
- Download rknn-toolkit2-1.4.0
  - On Ubuntu, download the source code of the RK356X/RK3588 RKNN SDK.
  - From the Baidu Netdisk share, go to RKNN_SDK -> RK_NPU_SDK_1.4.0 and download rknn-toolkit2-1.4.0.
  - After downloading it to Ubuntu, enter the rknn-toolkit2-1.4.0 directory and run:
```
pip install packages/rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl
```
  - Wait for the installation to complete, then check whether it succeeded:
```
python
>>> from rknn.api import RKNN
```
  - The installation is successful if no errors are reported.
  - If an error is reported, check:
    1. whether you are inside the rknn3.6 virtual environment;
    2. whether pip install packages/rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl itself reported an error;
    3. when pip install reports an error, the message usually states what is missing; install it with pip install or sudo apt-get install.
- Once all of the above requirements are installed and verified successfully, start the next step.
- Convert the best.onnx model to a best.rknn model
  - Enter the conversion directory:
```
cd examples/onnx/yolov5
```
  - It is best to make a copy of test.py and modify the copy:
```
cp test.py ./mytest.py
```
  - Modify the definitions at the beginning of the file; this is what I changed:
```python
ONNX_MODEL = 'best.onnx'   # ONNX model to be converted
RKNN_MODEL = 'best.rknn'   # RKNN model produced by the conversion
IMG_PATH = './1.jpg'       # image used for testing
DATASET = './dataset.txt'  # dataset used for quantization: a text file listing the test image names
QUANTIZE_ON = True         # do not modify
OBJ_THRESH = 0.25          # do not modify
NMS_THRESH = 0.45          # do not modify
IMG_SIZE = 640             # do not modify
CLASSES = ("person")       # change to the labels of the model you trained
```
  - In the if __name__ == '__main__': block, change the statement
```python
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
```
    to
```python
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
```
  - If you want the program to display the inference result when it finishes, find the commented-out statements
```python
# cv2.imshow("post process result", img_1)
# cv2.waitKey(0)
# cv2.destroyAllWindows()
```
    and uncomment them:
```python
cv2.imshow("post process result", img_1)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
  - Execute in the terminal:
```
python mytest.py
```
  - After it runs, the detection result is displayed and best.rknn appears in the folder; this step is successful. (The core API calls behind the script are sketched below.)
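For reference, the conversion that test.py/mytest.py drives boils down to a handful of RKNN API calls. The following is a minimal sketch based on the definitions above; it omits the error checking and the inference/post-processing test that the full script also performs:

```python
from rknn.api import RKNN

ONNX_MODEL = 'best.onnx'
RKNN_MODEL = 'best.rknn'
DATASET = './dataset.txt'  # one quantization image path per line, e.g. ./1.jpg

rknn = RKNN(verbose=True)

# Preprocessing config; target_platform must be 'rk3588' for this board.
rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')

# Load the ONNX model, build it with quantization, and export the RKNN model.
rknn.load_onnx(model=ONNX_MODEL)
rknn.build(do_quantization=True, dataset=DATASET)
rknn.export_rknn(RKNN_MODEL)

rknn.release()
```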
- Install Miniconda on the RK3588's Ubuntu 20 system. Note that this system uses the aarch64 architecture, so the Miniconda build to download is different from the previous one; choose the corresponding aarch64 installer (aarch Miniconda download).
  - The installation itself is the same as before and will not be repeated.
- Create a virtual environment. Because rknn-toolkit-lite2 is what runs on the RK3588, Python 3.7 needs to be installed:
```
conda create -n rknnlite3.7 python=3.7
conda activate rknnlite3.7
```
- Download rknn-toolkit-lite2 to the RK3588; it is part of the rknn-toolkit2-1.4.0 download described earlier, so no further details are given.
, no more details. - Install
rknn-toolkit-lite2
- enter
rknn-toolkit2-1.4.0/rknn-toolkit-lite2
directorypip install packages/rknn_toolkit_lite2-1.4.0-cp37-cp37m-linux_aarch64.whl
- Wait for the installation to complete
- Test whether the installation is successful:
python from rknnlite.api import RKNNLite
- success without error
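For orientation, loading and running the converted model with rknn-toolkit-lite2 comes down to a few calls; detect.py wraps this same flow with camera capture, pre-processing, and YOLOv5 post-processing. A minimal sketch, assuming a 640x640 input and a test image named 1.jpg (the simple resize here stands in for the letterboxing the full script uses):

```python
import cv2
from rknnlite.api import RKNNLite

RKNN_MODEL = 'best.rknn'
IMG_PATH = './1.jpg'  # assumed test image
IMG_SIZE = 640

rknn_lite = RKNNLite()
rknn_lite.load_rknn(RKNN_MODEL)
rknn_lite.init_runtime()  # on the RK3588 this runs on the NPU

# Read the test image and convert it to a 640x640 RGB frame.
img = cv2.imread(IMG_PATH)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

# outputs is a list holding the three raw detection-head feature maps;
# detect.py decodes them into boxes, scores, and classes.
outputs = rknn_lite.inference(inputs=[img])
print([o.shape for o in outputs])

rknn_lite.release()
```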
- Create a new folder named test under the example folder.
  - Put your successfully converted best.rknn model and the detect.py file from the GitHub repository linked at the beginning of the article into it.
- Places that need to be modified in detect.py:
  - The definitions at the top:
```python
RKNN_MODEL = 'best.rknn'  # your model name
IMG_PATH = './1.jpg'      # test image name
CLASSES = ("cap")         # label names
```
  - In the if __name__ == '__main__': block:
```python
capture = cv2.VideoCapture(11)  # the number is the device number of your webcam
```
  - Regarding the device number, run in a terminal:
```
v4l2-ctl --list-devices
```
  - The number 11 in the /dev/video11 entry printed under your camera (Cam) is your device number (if the listing is unclear, see the probe sketch after this list).
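If v4l2-ctl is not available or its listing is ambiguous, one alternative is to probe device indices directly with OpenCV. A small sketch; the index range 0-20 is an arbitrary assumption:

```python
import cv2

# Try a range of V4L2 device indices and report which ones deliver frames.
for idx in range(21):
    cap = cv2.VideoCapture(idx)
    if cap.isOpened():
        ok, _ = cap.read()
        print(f"/dev/video{idx}: {'frame captured' if ok else 'opens but no frame'}")
    cap.release()
```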
- Run the script:
```
python detect.py
```
- The deployment is complete.