[Jetson Nano] Environment configuration and installation of TensorFlow and PyTorch
1. Replace the Jetson Nano software sources
The Nano uses overseas mirrors by default, which are very slow. Some domestic mirrors are unreachable and some packages cannot be installed; after testing, the Tsinghua University mirror works perfectly.
Backup
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo vim /etc/apt/sources.list
# Delete everything and replace it with the lines below
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-security main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-updates main multiverse restricted universe
deb-src http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/ bionic-backports main multiverse restricted universe
Save sources.list, then run in a terminal:
sudo apt-get update
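If you would rather not hand-edit the file in vim, the eight entries above can also be generated in one shot. A sketch (the staging file name is arbitrary):

```shell
# Generate the eight Tsinghua entries for bionic into a staging file,
# then install it with sudo in a second step.
MIRROR="http://mirrors.tuna.tsinghua.edu.cn/ubuntu-ports/"
OUT="sources.list.tuna"
: > "$OUT"
for suite in bionic bionic-security bionic-updates bionic-backports; do
    echo "deb $MIRROR $suite main multiverse restricted universe" >> "$OUT"
    echo "deb-src $MIRROR $suite main multiverse restricted universe" >> "$OUT"
done
# Then: sudo cp "$OUT" /etc/apt/sources.list && sudo apt-get update
```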
2. Enable best-performance mode on the Jetson Nano
Check the current power mode:
sudo nvpmodel -q
If the reported mode ID is 0, 10W mode is already enabled and the settings below are unnecessary.
# 5W mode:  sudo nvpmodel -m 1
# 10W mode: sudo nvpmodel -m 0
After setting the mode, lock the clocks at their maximum and verify:
sudo jetson_clocks
sudo jetson_clocks --show
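The mode check and switch can be wrapped in a small guard. A sketch, assuming the JetPack 4.x output format of `nvpmodel -q` (mode name first, numeric ID on its own last line); it does nothing on machines without nvpmodel:

```shell
# Switch to 10W mode only when it is not already active.
# parse_mode extracts the numeric ID from `nvpmodel -q` output,
# assuming the ID is printed on its own last line (JetPack 4.x format).
parse_mode() { tail -n 1; }

if command -v nvpmodel >/dev/null 2>&1; then
    mode_id="$(sudo nvpmodel -q | parse_mode)"
    if [ "$mode_id" != "0" ]; then
        sudo nvpmodel -m 0      # 10W mode
        sudo jetson_clocks      # lock clocks at maximum
    fi
fi
```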
3. Install jtop on the Jetson Nano
The Jetson Nano has no nvidia-smi command, so install jtop instead:
sudo apt-get install libhdf5-serial-dev hdf5-tools libatlas-base-dev gfortran
sudo pip3 install -U jetson-stats
jtop
4. Disable the Jetson Nano graphical interface
The Jetson Nano's 4 GB of memory is shared between the CPU and GPU, so to run deep-learning programs it helps to disable the graphical interface and free up memory.
Disable the graphical interface
sudo systemctl set-default multi-user.target
sudo reboot
Re-enable the graphical interface
sudo systemctl set-default graphical.target
sudo reboot
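The two recipes above differ only in the target name, so they collapse into one command with a tiny helper (the `gui_target` name is hypothetical):

```shell
# Map a desired GUI state (on/off) to the matching systemd default target.
gui_target() {
    case "$1" in
        on)  echo graphical.target ;;
        off) echo multi-user.target ;;
        *)   return 1 ;;
    esac
}
# Usage: sudo systemctl set-default "$(gui_target off)" && sudo reboot
```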
5. Back up the Jetson Nano TF card
Backup
sudo dd if=/dev/sdb | gzip >/home/workspace/nano.img.gz
Restore
sudo gzip -dc /home/workspace/nano.img.gz | sudo dd of=/dev/sdb
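The same dd|gzip pipeline works on any block device or file, so a safe way to sanity-check the commands before pointing them at the real card is to round-trip a scratch file (paths below are arbitrary):

```shell
# Round-trip the backup/restore pipeline on a scratch file.
SRC=/tmp/nano_src.bin
IMG=/tmp/nano.img.gz
DST=/tmp/nano_restored.bin
dd if=/dev/urandom of="$SRC" bs=1K count=64 2>/dev/null   # stand-in "card"
dd if="$SRC" 2>/dev/null | gzip > "$IMG"                  # backup step
gzip -dc "$IMG" | dd of="$DST" 2>/dev/null                # restore step
cmp -s "$SRC" "$DST" && echo "round-trip OK"
```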
6. Accelerate inference on the Jetson Nano with TensorRT
Put simply, the idea is to convert the PyTorch model into a TensorRT model, then load the TensorRT model to run inference on images.
NVIDIA conveniently provides the torch2trt tool.
I tried it and it did not work well: some layers are unsupported, and PyTorch 1.6 does not support "/" (division) during conversion, which makes it inconvenient to use.
So I found another open-source tool instead.
That tool supports onnx2trt, which bypasses the PyTorch 1.6 division problem.
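For the onnx2trt step, one option that ships with JetPack is the trtexec binary. A sketch (the install path is where JetPack normally puts it, and model.onnx stands in for your exported network; it is skipped on non-Jetson machines):

```shell
# Build a TensorRT engine from an ONNX file with the bundled trtexec.
TRTEXEC=/usr/src/tensorrt/bin/trtexec
if [ -x "$TRTEXEC" ]; then
    "$TRTEXEC" --onnx=model.onnx --saveEngine=model.trt --fp16
fi
```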
7. Install pip on the Jetson Nano
sudo apt-get install python3-pip python3-dev
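A quick sanity check after installing is to confirm pip is wired to Python 3 before pulling in packages:

```shell
# Confirm pip targets Python 3 (exact version numbers will vary).
python3 -m pip --version        # reports e.g. "pip 9.0.1 ... (python 3.6)"
```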
8. Modify environment variables
sudo vim /etc/profile
export PATH=/usr/local/cuda-10.2/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
export CUDA_HOME=/usr/local/cuda-10.2
After saving, apply the changes
source /etc/profile
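To verify the variables took effect, check that nvcc is now on the PATH. A sketch with a fallback message for machines without CUDA (the helper name is hypothetical):

```shell
# Check the CUDA toolchain is visible after reloading the profile.
check_cuda() {
    if command -v nvcc >/dev/null 2>&1; then
        nvcc -V                 # prints the installed CUDA release
    else
        echo "nvcc not found - re-check PATH in /etc/profile"
    fi
}
check_cuda
```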