Deploying a YOLOv5 model (.pt) on the RK3588(S) (real-time camera detection)

GitHub repository (source of the detect.py used in step 4)

  • Required:
    • An RK3588 board with Ubuntu 20 installed
    • A computer or virtual machine with Ubuntu 18 installed
1. Obtaining the YOLOv5 PT model

Anaconda Tutorial
YOLOv5 Tutorial
After following the above two tutorials, you should have your own best.pt file.
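
For reference, a typical training command looks like this (a sketch: data.yaml, the epoch count, and the batch size stand in for your own dataset and settings); the resulting best.pt lands under runs/train/exp*/weights/:

python train.py --img 640 --batch 16 --epochs 100 --data data.yaml --weights yolov5s.pt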

2. PT model to ONNX model
  • In models/yolo.py, change the forward function of the Detect class (this makes export return the three raw feature maps; post-processing is handled outside the model) from:
def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
        bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
        x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()
        if not self.training:  # inference
            if self.dynamic or self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i], self.anchor_grid[i] = self._make_grid(nx, ny, i)
            if isinstance(self, Segment):  # (boxes + masks)
                xy, wh, conf, mask = x[i].split((2, 2, self.nc + 1, self.no - self.nc - 5), 4)
                xy = (xy.sigmoid() * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh.sigmoid() * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf.sigmoid(), mask), 4)
            else:  # Detect (boxes only)
                xy, wh, conf = x[i].sigmoid().split((2, 2, self.nc + 1), 4)
                xy = (xy * 2 + self.grid[i]) * self.stride[i]  # xy
                wh = (wh * 2) ** 2 * self.anchor_grid[i]  # wh
                y = torch.cat((xy, wh, conf), 4)
            z.append(y.view(bs, self.na * nx * ny, self.no))
    return x if self.training else (torch.cat(z, 1),) if self.export else (torch.cat(z, 1), x)

Change to:

def forward(self, x):
    z = []  # inference output
    for i in range(self.nl):
        x[i] = self.m[i](x[i])  # conv
    return x
  • In export.py, change the following statement in the run function (the modified forward returns a list of tensors, so the original .shape call would fail):
shape = tuple((y[0] if isinstance(y, tuple) else y).shape)  # model output shape

Change to:

shape = tuple((y[0] if isinstance(y, tuple) else y))  # model output shape
  • Move the best.pt from your training run directory (runs/train/exp/weights/best.pt) to the same directory as export.py.
  • Make sure the working directory is the yolov5 main folder, then run in the console:
cd yolov5 
python export.py --weights best.pt --img 640 --batch 1 --include onnx --opset 12
  • A best.onnx file then appears under the main folder; check whether the model is correct in Netron:
  • Click the upper-left menu -> Properties...
  • Check OUTPUTS on the right: if there are three output nodes, the ONNX conversion was successful.
  • If the converted best.onnx does not have three output nodes, do not attempt the next step; it will only produce various errors.
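  • As an alternative to Netron, you can count the output nodes programmatically (a minimal sketch, assuming the onnx Python package is installed):

import onnx

model = onnx.load('best.onnx')
outputs = [o.name for o in model.graph.output]
print(outputs)  # expect three output nodes, one per detection head
assert len(outputs) == 3, 'export did not produce three feature-map outputs'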
3. ONNX model to RKNN model
  • I am using Ubuntu 18.04 installed in a VMWare virtual machine. Note that this step is performed on your computer or virtual machine, not on the RK3588.

  • rknn-toolkit2-1.4.0 requires Python 3.6, so Miniconda needs to be installed to manage environments.

  • Install Miniconda for Linux

    • Go to the directory containing the downloaded Miniconda3-latest-Linux-x86_64.sh:
      chmod +x Miniconda3-latest-Linux-x86_64.sh
      ./Miniconda3-latest-Linux-x86_64.sh
      
    • Agree to every prompt until the installation completes.
    • After the installation succeeds, reopen the terminal.
    • If the installation was successful, the prompt should show (base).
    • If the installation fails, refer to another Miniconda3 installation tutorial.
    • Create a virtual environment:
      conda create -n rknn3.6 python=3.6 
      
    • Activate the virtual environment:
      conda activate rknn3.6
      
    • When activation succeeds, the prompt should show (rknn3.6).
  • Download rknn-toolkit2-1.4.0

    • On Ubuntu, download the source code of the RK356X/RK3588 RKNN SDK.
    • In the Baidu Netdisk, go to RKNN_SDK -> RK_NPU_SDK_1.4.0 and download rknn-toolkit2-1.4.0.
    • After downloading to Ubuntu, enter the rknn-toolkit2-1.4.0 directory:
      pip install packages/rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl 
      
    • Wait for the installation to complete, then check whether it succeeded:
      python
      from rknn.api import RKNN
      
    • Success if no errors are reported.
    • If an error is reported, check:
      • 1. whether you are in the rknn3.6 virtual environment;
      • 2. whether pip install packages/rknn_toolkit2-1.4.0_22dcfef4-cp36-cp36m-linux_x86_64.whl itself reported an error;
      • 3. when pip install reports an error, install whatever it says is missing via pip install or sudo apt-get install.
  • Once all of the above are installed and verified, move on to the next step.

  • Convert the best.onnx model to a best.rknn model

    • Enter the conversion directory:
      cd examples/onnx/yolov5
      
    • It is best to make a copy of test.py and modify the copy:
      cp test.py ./mytest.py
      
    • Modify the definitions at the top of the file; this is what I changed:
      ONNX_MODEL = 'best.onnx'    # the ONNX model to convert
      RKNN_MODEL = 'best.rknn'    # the converted RKNN model
      IMG_PATH = './1.jpg'        # test image
      DATASET = './dataset.txt'   # quantization dataset: a text file listing test image names, one per line
      QUANTIZE_ON = True          # do not modify
      OBJ_THRESH = 0.25           # do not modify
      NMS_THRESH = 0.45           # do not modify
      IMG_SIZE = 640              # do not modify
      CLASSES = ("person",)       # change to your trained model's labels (note the trailing comma for a single class)
      
    • In if __name__ == '__main__':, change the statement:
      rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]])
      
    • to:
      rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
      
    • If you want the program to display the inference result when it finishes, find these commented-out statements:
      # cv2.imshow("post process result", img_1)
      # cv2.waitKey(0)
      # cv2.destroyAllWindows()
      
    • and uncomment them:
      cv2.imshow("post process result", img_1)
      cv2.waitKey(0)
      cv2.destroyAllWindows()
      
    • Run in the terminal:
      python mytest.py
      
    • If the detection result is displayed after running and best.rknn appears in the folder, this step succeeded.
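    • For orientation, the conversion that mytest.py performs boils down to the following rknn-toolkit2 calls (a simplified sketch of the example script, with the test inference and post-processing omitted):
      from rknn.api import RKNN

      # build and export: config -> load_onnx -> build (quantize) -> export_rknn
      rknn = RKNN(verbose=True)
      rknn.config(mean_values=[[0, 0, 0]], std_values=[[255, 255, 255]], target_platform='rk3588')
      rknn.load_onnx(model='best.onnx')
      rknn.build(do_quantization=True, dataset='./dataset.txt')  # quantize using the images in dataset.txt
      rknn.export_rknn('best.rknn')
      rknn.release()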
4. Deploy the rknn model on the RK3588 and run real-time camera detection
  • Install Miniconda on the RK3588's Ubuntu20 system. Note that this system uses the aarch64 architecture, so the download differs from the previous x86_64 Miniconda; select the corresponding aarch64 version.
  • Download aarch64 Miniconda
  • The installation is the same as before and will not be repeated.
  • Create a virtual environment; rknn-toolkit-lite2, which is used on the RK3588, requires Python 3.7:
    • conda create -n rknnlite3.7 python=3.7
    • conda activate rknnlite3.7
  • Download rknn-toolkit-lite2 to the RK3588; it is included in the rknn-toolkit2-1.4.0 download, so no more details.
  • Install rknn-toolkit-lite2
    • Enter the rknn-toolkit2-1.4.0/rknn-toolkit-lite2 directory:
      pip install packages/rknn_toolkit_lite2-1.4.0-cp37-cp37m-linux_aarch64.whl
      
    • Wait for the installation to complete
    • Test whether the installation is successful:
      python
      from rknnlite.api import RKNNLite
      
    • Success if no errors are reported.
  • Create a new folder test under the examples folder.
  • Put your successfully converted best.rknn model and the detect.py from the GitHub repository linked at the beginning of the article into it.
  • Places that need to be modified in detect.py:
    • Definitions:
      RKNN_MODEL = 'best.rknn'      # your model name
      IMG_PATH = './1.jpg'          # test image name
      CLASSES = ("cap",)            # label names (note the trailing comma for a single class)
      
    • In if __name__ == '__main__'::
      capture = cv2.VideoCapture(11)      # the number is your webcam's device number
      
      • To find the device number, run in the terminal:
        v4l2-ctl --list-devices     
        
      • The number in the /dev/video11 entry printed under your camera (here, 11) is your device number.
  • Run the script:
    python detect.py
    
  • The deployment is complete.
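  • For reference, the real-time loop in detect.py rests on the rknn-toolkit-lite2 runtime; stripped of the pre- and post-processing it looks roughly like this (a sketch, not the repository's exact code):

    import cv2
    from rknnlite.api import RKNNLite

    rknn_lite = RKNNLite()
    rknn_lite.load_rknn('best.rknn')
    rknn_lite.init_runtime()                         # run on the RK3588 NPU

    capture = cv2.VideoCapture(11)                   # your device number from v4l2-ctl
    while True:
        ret, frame = capture.read()
        if not ret:
            break
        img = cv2.resize(frame, (640, 640))          # match the IMG_SIZE used at conversion
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        outputs = rknn_lite.inference(inputs=[img])  # three feature maps to post-process
        # ... decode boxes, apply NMS, draw results, as detect.py does ...
        if cv2.waitKey(1) == ord('q'):
            break

    capture.release()
    rknn_lite.release()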

Origin: blog.csdn.net/Chuan423425/article/details/130205729