[Jetson Nano] Jetson Nano environment configuration + YOLOv5 deployment + TensorRT model acceleration

It took me more than a week, on and off, to configure the environment and deploy the model. Countless errors came up along the way; I consulted many documents and asked friends around me for help, so I am recording the process here in as much detail as possible.

Acknowledgements

Thanks to Xnhyacinth for helping me during the configuration process hahaha ꒰ঌ( ⌯' '⌯)໒꒱

Host and Jetson Nano environments

My host environment: Python 3.9, CUDA 11.6.
Jetson Nano environment: JetPack 4.6, CUDA 10.2, Python 3.6 (conda).
Use jtop to view the system configuration.

Jetson system image flashing, system settings, and changing package sources

Familiarize yourself with each interface and its functions first; for details, refer to the official NVIDIA documentation (linked below). Start by assembling the fan and preparing a wireless network adapter; I use a plug-and-play USB Wi-Fi adapter, which is sold on JD. While assembling, I noticed that the Jetson Nano carrier board has a jumper that is not shorted by default. In that state the board can only be powered from the computer's USB port; the jumper must be shorted before powering the board from the DC barrel jack.
NVIDIA jetson nano official website
The initial flashing, SD-card formatting, and related steps all follow that guide, and there are basically no problems. When finished you get an Ubuntu 18.04 system; if you have a wireless adapter, you can connect directly to Wi-Fi. I usually connect to the board through Xshell and operate from the host, except when I need to view images or video.
At this stage I also partially referred to the documents below, so I will not repeat their contents here.
Nvidia Jetson Nano introduction and usage guide
Jetson Nano from entry to practice (case studies: OpenCV configuration, face detection, QR-code detection). From this article I mainly referred to sections 2.4.1 to 2.4.3 to configure parts of the system environment and change the package sources, because downloads can be very slow without changing sources. There are many online guides on changing sources, which you can look up yourself.

Python environment configuration

I did not use a virtual environment at first. The built-in environment was Python 2.7; I installed 3.6 alongside it, then installed torch into that environment. Installing everything took several days, and when I finally ran yolov5 it still reported an error. Other people then suggested I install a conda virtual environment, so I started from scratch and set up conda. I will skip the earlier pitfalls and go straight to installing the conda environment. If you are stuck at this stage, I also recommend going directly to conda; it really is much more convenient.

Conda environment

Before installing the conda environment, keep one thing in mind: the Jetson Nano is based on the aarch64 architecture, so Anaconda cannot be installed on it. Whenever you download environment components or search for guides, add "Jetson Nano" or "ARM64" to your query. The same applies to the conda environment itself: neither Miniconda nor Anaconda supports the Jetson Nano. Instead we need Archiconda (a conda distribution for 64-bit ARM platforms). Its commands and usage are the same as the other two, so it is just as convenient. The download address is below.
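Since so much here depends on the CPU architecture, a quick check before downloading installers never hurts. The snippet below is my own illustrative sketch (is_arm64 is not a library function), using only Python's standard platform module:

```python
import platform

def is_arm64(machine: str) -> bool:
    """Return True for the 64-bit ARM identifiers used by Jetson boards."""
    return machine in ("aarch64", "arm64")

# On the Jetson Nano this is expected to print 'aarch64'; pick
# Archiconda and aarch64 wheels accordingly.
print(platform.machine())
print(is_arm64(platform.machine()))
```

If this prints False on your board, you are not on an ARM64 system and the Archiconda advice below does not apply.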
Download address
After the installation completes, create an environment just as you would on Windows (you can look up how to create a conda environment; I installed Python 3.6 here). After creation, activate it (I named the environment yolov5):

# create an environment with Python 3.6, then activate it
conda create -n yolov5 python=3.6
conda activate yolov5

Then we need to install many packages first. Many, many of them. I followed the link below; I suggest following the author's steps up to step seven of that article, especially the section on installing prerequisite packages and cmake, copying the commands one by one. Otherwise you will find later, when installing the Python environment, that many packages fail to install.
Jetson Nano deploys YOLOv5 and Tensorrtx acceleration——(Go through the whole process and record it yourself)

YOLOv5 environment

Then you can transfer your own YOLO code to the board (I use Xftp), or first test with the official code to verify the environment works. I ran the official code first and then switched to my own.
First install git on the Jetson Nano, then run:

git clone https://github.com/ultralytics/yolov5.git

It should default to v7.0; for testing it does not matter either way. After cloning, cd into the yolov5 main directory and run:

pip install -r requirements.txt

Some downloads will fail; don't worry. Check which packages still need to be installed and install them one by one, preferring lower versions. If that fails, download the installation packages manually and install them locally. Packages you cannot find can be searched for on pypi.org; be sure to select a historical release and then the wheel matching your Python and system versions, with the keywords "linux", "cp36", and "aarch64".
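When picking a wheel manually from pypi.org, a small sanity check of the filename tags can save a failed install. This is a rough sketch of my own (wheel_matches is a hypothetical helper, not a pip API; real wheel-tag resolution is more involved):

```python
def wheel_matches(filename: str, py_tag: str = "cp36", arch: str = "aarch64") -> bool:
    """Rough check that a wheel filename targets the right Python and CPU.

    Wheel names look like: name-version-pytag-abitag-platform.whl
    """
    if not filename.endswith(".whl"):
        return False
    parts = filename[:-len(".whl")].split("-")
    return py_tag in parts and any(arch in p for p in parts)

# A manylinux aarch64 wheel for Python 3.6 passes; a Windows wheel does not.
print(wheel_matches("numpy-1.19.5-cp36-cp36m-manylinux2014_aarch64.whl"))  # True
print(wheel_matches("numpy-1.19.5-cp36-cp36m-win_amd64.whl"))              # False
```

`pip` applies the same idea automatically, which is why a wheel built for the wrong platform simply refuses to install.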

matplotlib and opencv-python

In my case, two packages would not install, which cost me a whole day: one was matplotlib and the other was opencv-python. For the second, I even worked through many online tutorials on compiling and installing OpenCV, and spent a long time fighting the C++ build. In fact none of that is necessary: just follow the environment-configuration section of "Jetson Nano deploys YOLOv5 and Tensorrtx acceleration" (linked above) and the problem is solved.
After all the environments are installed, change to the main directory on the command line and run python detect.py. If the download is slow, you can fetch yolov5s.pt from the official yolov5 repository into the main directory in advance. If no error is reported, two images will be generated in the runs/detect/exp directory, which means it worked.

TensorRT acceleration

TensorRT essentially compresses and optimizes the model to make inference faster; the Jetson Nano's compute power is limited, so acceleration is needed. The official repository is here. I followed the repository's README.md and, per its instructions, converted the model into wts- and engine-format files.
The TensorRT acceleration part is also covered by the articles below, but treat that part as reference only. Make sure your YOLO version and the various tool versions are consistent with the TensorRT code you downloaded (in practice they are very likely inconsistent, meaning that even if the engine file is generated, it cannot be used).
Deploy your own Yolov5 model on Jetson Nano (TensorRT acceleration) onnx model to engine file
Jetson Nano deployment YOLOv5 and Tensorrtx acceleration - (go through the whole process record yourself)
The reason it is only for reference is:

1. Some of the content is inconsistent with reality.
For example, modifying the number of classes of a trained model: the default is 80, and for the stock model no change is needed, but a model you trained yourself does need the change. The place to modify is not tensorrtx/yolov5/yololayer.h as those articles say; instead, refer to this line in the official README:

 cd [PATH-TO-TENSORRTX]/yolov5/
 # Update kNumClass in src/config.h if your model is trained on custom dataset

The comment above says that to change the number of classes you should update the kNumClass variable in the src/config.h file, but you may still get an error: TensorRT throws an exception while generating the engine file ([TRT] Network::addScale::434, condition: shift.count > 0 ?). (That is one possible problem; I also hit others, such as an engine file that was generated but could not be used.) If you use the following method instead, this problem does not occur.
Official repository documentation

2. In fact, you do not need the above method at all. If you are running a YOLO model, export.py already exists in the yolov5 main directory; it can call the TensorRT package directly to convert the model from xx.pt to xx.engine. It is very convenient and requires no other steps. Moreover, the Jetson Nano ships with its own TensorRT package; we only need to create a soft link from its installation directory into our conda environment. See the next section for details.

Using TensorRT in Jetson Nano's conda virtual environment

Create a soft link

TensorRT's system installation path is /usr/lib/python3.6/dist-packages/tensorrt/ (this is the same for everyone).
Run the following command to create a soft link into your own virtual environment (note your own Archiconda installation directory; mine is /home/alen123/archiconda3/envs/yolov5/lib/python3.6/site-packages, for reference only):

sudo ln -s /usr/lib/python3.6/dist-packages/tensorrt* /home/alen123/archiconda3/envs/yolov5/lib/python3.6/site-packages
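To confirm the link actually points back into the system dist-packages, you can resolve it from Python. A minimal sketch (link_target is my own helper; the paths mirror my setup and are for reference only):

```python
import os

def link_target(path: str) -> str:
    """Resolve symlinks and return the real path."""
    return os.path.realpath(path)

# After running the ln -s above, the tensorrt entry in the conda env's
# site-packages should resolve back into the system dist-packages.
site = "/home/alen123/archiconda3/envs/yolov5/lib/python3.6/site-packages/tensorrt"
if os.path.exists(site):
    print(link_target(site))  # expect /usr/lib/python3.6/dist-packages/tensorrt
```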

View the version

python
>>> import tensorrt
>>> tensorrt.__version__

Run export.py and detect.py

After completing the previous steps, TensorRT can be called directly as a package. Then run the following in the terminal:

python export.py --weights yolov5.pt --include engine --device 0

An error may be reported telling you that some packages are not installed. Don't rush to install them yet; first cat requirements.txt and look up the corresponding package. Taking onnx as an example: find the version requirement for onnx in requirements.txt, and install the minimum version the file asks for directly from the command line. By default the export dependencies should be commented out; in the Export section of the file, mine reads:

# onnx>=1.9.0  # ONNX export

so run directly:

pip install onnx==1.9.0
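The lookup described above — finding the minimum version of a (possibly commented-out) dependency in requirements.txt — can be sketched in a few lines. min_version is a hypothetical helper of mine; a real requirements parser handles more specifier forms:

```python
import re
from typing import Optional

def min_version(requirements_text: str, package: str) -> Optional[str]:
    """Find the '>=' lower bound for a package, even on commented-out lines."""
    pattern = re.compile(r"^#?\s*{}\s*>=\s*([\w.]+)".format(re.escape(package)))
    for line in requirements_text.splitlines():
        m = pattern.match(line.strip())
        if m:
            return m.group(1)
    return None

reqs = "# onnx>=1.9.0  # ONNX export\nnumpy>=1.18.5\n"
print(min_version(reqs, "onnx"))   # 1.9.0  -> pip install onnx==1.9.0
print(min_version(reqs, "numpy"))  # 1.18.5
```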

The same goes for other packages. Then re-run:

python export.py --weights yolov5.pt --include engine --device 0

An engine file will be generated. If you want it to run faster on the board, you can add --half to trade precision for speed; the speedup is significant. Running my own model without TensorRT took over 100 ms per frame; after TensorRT acceleration it was about 50 ms per frame, and with --half about 30 ms per frame.

python export.py --weights yolov5.pt --include engine --half --device 0

Then an xx.engine file will be generated; next, run detect.py. You can change the default weights inside the file, or, if you'd rather not edit it, specify the model directly from the command line:

python detect.py --weights yolov5.engine

With that, you're done. The same procedure applies to your own trained model; just watch the model name and the data path.

Other errors

There were actually many other small errors along the way, but some were not fully recorded while debugging; I may add them later as I recall them.

KeyError: 'names'

  File "detect.py", line 368, in <module>
    main(opt)
  File "detect.py", line 262, in main
    run(**vars(opt))
  File "/home/alen123/archiconda3/envs/yolov5/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "detect.py", line 98, in run
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
  File "/home/alen123/yolov5-7.0/models/common.py", line 500, in __init__
    names = yaml_load(data)['names'] if data else {i: f'class{i}' for i in range(999)}
KeyError: 'names'

I hit this when running my own model. The cause is that the dataset yaml file contains no class-name entries. Just add them at the end of the yaml file for your trained model. My model has only two classes (a switch's open and closed states), so I add the following:

names:
  0: open
  1: closed
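As a sketch of why this fixes the KeyError: detect.py reads the 'names' mapping from the dataset yaml, so after the edit the loaded data should look like the dict below (the path and nc fields are illustrative; only the two class names come from my model):

```python
# What the loaded dataset yaml should look like once 'names' is added.
data = {
    "train": "images/train",  # illustrative paths, not from my actual yaml
    "val": "images/val",
    "nc": 2,
    "names": {0: "open", 1: "closed"},
}

# Without the 'names' key, data["names"] raises KeyError: 'names'.
names = data["names"]
print(names[0], names[1])  # open closed
```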

Summary

This took three hours to write, so the content is a bit hasty and may be expanded gradually. Feel free to discuss it with me, and please correct me if there are any mistakes. Generally speaking, Jetson Nano configuration and usage are well covered in the community and the information is fairly complete, so it just takes a little time to search online. This article is my backup and record of the process, for your reference.

Other reference articles that may be useful

Nvidia Jetson Nano: installing Archiconda and the GPU version of torch, a record of pitfalls
Win10: full walkthrough of converting yolov5 to a TensorRT engine model (using TRT's Python API)



Origin blog.csdn.net/weixin_46007139/article/details/129597153