Jetson nano (4GB B01) system installation, official demo test (target detection, gesture recognition)

This article walks you through setting up the Jetson Nano environment correctly and running the official "Hello AI World" demo end to end. The core steps all come from the first-party official tutorial; if you cannot access it, use a proxy or change the .com domain to the .cn domain:

Getting Started With Jetson Nano Developer Kit

Note: the official "Hello AI World" demo comes from the jetson-inference repository. It is not as complicated as DeepStream, but it is still TensorRT-accelerated and makes full use of the Jetson's hardware codecs. Installing and using DeepStream is not covered in this document.

Step 1: hardware preparation and installation

Refer to the table below to check the hardware and equipment you need to prepare:

| # | Name | Purpose | Provider | Remarks |
| --- | --- | --- | --- | --- |
| 1 | Module and carrier board | Core components | Official | Usually already plugged together when you receive it |
| 2 | Fan | Cooling | Third-party | |
| 3 | DC power supply | Power | Third-party | The Jetson supports two power modes: 1) micro-USB, 2) DC barrel jack. Prefer the DC supply and make sure it can deliver 5V/4A so the Jetson can run at full power |
| 4 | micro-SD card (32GB) and card reader | Image flashing / storage | Third-party | |
| 5 | A jumper cap | Enables the DC power input | - | |
| 6 | USB keyboard and mouse, HDMI or DP cable and monitor, Ethernet cable | - | - | |
| 7 | Acrylic case or the official cardboard box | - | - | |

If you bought your Jetson Nano from JD.com or Taobao, the vendor will usually supply all the hardware you need except the keyboard, mouse, and monitor.

Assemble the hardware as shown in the picture below. Be careful not to plug in the power supply yet; wait until the system SD card has been flashed before powering on.
*(image)*

Continuing with the figure below, fit the jumper cap onto the power-select header. Make sure it covers both pins, otherwise the board will not power on.
*(image)*

The finished assembly, for reference:
*(image)*

Step 2: install the system image with JetPack

JetPack can be thought of simply as the Jetson-specific image package. Besides the base Ubuntu system, an OS installed from it includes the following AI-development components: L4T kernel/BSP, CUDA Toolkit, cuDNN, TensorRT, OpenCV, VisionWorks, and the Multimedia APIs.

OK, let's start.

  1. On another computer, download the official image from https://developer.nvidia.com/jetson-nano-sd-card-image (if it won't open, change com to cn and try again)

  2. Insert the SD card into the card reader, then plug the reader into your computer

  3. Download the SD card formatting tool from https://www.sdcard.org/downloads/formatter_4/eula_windows/ , install and open it, then click "Format" as shown below to format the SD card once
    *(image)*

  4. Download the flashing tool from https://www.balena.io/etcher , install and open it, select the image package you just downloaded as shown below, and click "Flash" to write the image to the SD card following the prompts (if any pop-up windows appear midway, click Cancel on all of them)

*(image)*

  5. As shown below, insert the SD card into the Jetson Nano, connect the DC power supply, plug in the mouse, keyboard, and monitor, and install the system (the same as a regular Ubuntu install; one step asks you to choose the "APP partition size", just pick the maximum)
    *(image)*

  6. When you see the following screen, the installation succeeded

*(image)*

Step 3: run the official Hello AI World demo

Note: after installing the system with JetPack, try not to run the demo by following unofficial steps. Otherwise the demo may fail because it lacks support for the Nano's hardware, and you may also waste a lot of energy fighting base-library version mismatches.

Here we again follow the official getting-started tutorial: install jetson-inference and run the official demos.

1. Download, compile, and install the jetson-inference source code with the following commands

$ sudo apt-get update
$ sudo apt-get install git cmake libpython3-dev python3-numpy
$ git clone --recursive https://github.com/dusty-nv/jetson-inference
$ cd jetson-inference
$ mkdir build
$ cd build
$ cmake ../
$ make -j$(nproc)
$ sudo make install
$ sudo ldconfig

When you execute the "cmake ../" command above, the following screen pops up, letting you choose which official pre-trained models to download. There is no need to change anything here; just press Enter to download the default models.
*(image)*

After that download completes, the following screen pops up asking whether to install PyTorch. Use the arrow keys to highlight the Python 3.6 entry, press Space to select it, then press Enter to start the download.

*(image)*

After a short wait, jetson-inference and all the components it needs will be downloaded; continue with the remaining commands to finish compiling and installing.

Once jetson-inference is installed, several programs supporting different AI models are available on the system. Basic tasks can be completed by running these programs with one of the supported pre-trained models. For reference:

*(image)*

Next, let's experience the Jetson through two demos: one for object detection and one for real-time gesture recognition.

2. Use detectnet with the default SSD-Mobilenet-v2 model to detect objects in images

Enter the jetson-inference/build/aarch64/bin directory and run one of the commands below for reference. Note that the first time a model runs, TensorRT spends a few minutes optimizing it; just wait patiently.

# C++
$ ./detectnet --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg     # --network flag is optional

# Python
$ ./detectnet.py --network=ssd-mobilenet-v2 images/peds_0.jpg images/test/output.jpg  # --network flag is optional

result:
*(image)*
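The same detection can also be scripted through jetson-inference's Python bindings instead of the bundled detectnet program. A minimal sketch follows; it only runs on a Jetson with jetson-inference installed, and on older JetPack releases the modules are named `jetson.inference` and `jetson.utils` instead:

```python
# Minimal sketch of the jetson-inference Python API; Jetson-only.
from jetson_inference import detectNet
from jetson_utils import loadImage

net = detectNet("ssd-mobilenet-v2", threshold=0.5)  # same model as the CLI demo
img = loadImage("images/peds_0.jpg")                # sample image from the repo

for det in net.Detect(img):
    # each detection carries a class id, a confidence, and a bounding box
    print(net.GetClassDesc(det.ClassID), round(det.Confidence, 2),
          (det.Left, det.Top, det.Right, det.Bottom))
```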

Besides images, the program also supports real-time recognition on videos or cameras; see the following commands:

$ ./detectnet /usr/share/visionworks/sources/data/pedestrians.mp4 images/test/pedestrians_ssd.mp4  # local video file
$ ./detectnet csi://0                    # CSI camera
$ ./detectnet /dev/video0                # USB camera
$ ./detectnet /dev/video0 output.mp4     # USB camera, saving the output
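The camera commands above can likewise be reproduced in a few lines of Python using the `videoSource`/`videoOutput` interfaces from jetson_utils. Again this is a Jetson-only sketch, and `display://0` assumes a monitor is attached:

```python
# Sketch of live camera detection with jetson_utils video streams; Jetson-only.
from jetson_inference import detectNet
from jetson_utils import videoSource, videoOutput

net = detectNet("ssd-mobilenet-v2")
camera = videoSource("/dev/video0")      # or "csi://0" for a CSI camera
display = videoOutput("display://0")     # or "output.mp4" to save to a file

while display.IsStreaming():
    img = camera.Capture()
    if img is None:                      # capture timeout, try again
        continue
    net.Detect(img)                      # draws detection overlays on img in place
    display.Render(img)
```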

We can also switch to a different detection model via the --network parameter. For reference:

*(image)*

3. Use posenet with its default network to recognize gestures from a camera in real time

Prepare a USB camera, plug it in, and run one of the following commands for real-time gesture recognition:

# C++
$ ./posenet --network=resnet18-hand /dev/video0

# Python
$ ./posenet.py --network=resnet18-hand /dev/video0

Effect:
*(image)*
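The pose demo has Python bindings too. A corresponding sketch, again Jetson-only and assuming a monitor on `display://0`:

```python
# Sketch of the poseNet Python API; Jetson-only.
from jetson_inference import poseNet
from jetson_utils import videoSource, videoOutput

net = poseNet("resnet18-hand")           # same hand-pose model as the CLI demo
camera = videoSource("/dev/video0")
display = videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()
    if img is None:
        continue
    poses = net.Process(img)             # overlays keypoints and links by default
    print("detected", len(poses), "hand(s)")
    display.Render(img)
```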

4. Use the Jetson hardware decoder to accelerate video decoding

The first step of video object detection is decoding, and one of the Jetson's strengths is its built-in hardware codec. As a demo, run the following program and select hardware decoding via the --input-codec flag:

detectnet --input-codec=CODEC /usr/share/visionworks/sources/data/cars.mp4

Using the jtop command (see the installation in the next section), you can see that the hardware decoder is active:
*(image)*

Run detectnet --help to view codec support.
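If you script these invocations, it helps to validate the codec name up front. A small illustrative helper; the codec set below mirrors the hardware-decodable formats that `detectnet --help` listed on JetPack 4.x builds, so confirm it against your own build:

```python
# Hypothetical helper: validate an --input-codec value before launching detectnet.
# The codec set is an assumption based on JetPack 4.x; check `detectnet --help`.
SUPPORTED_CODECS = {"h264", "h265", "vp8", "vp9", "mpeg2", "mpeg4", "mjpeg"}

def input_codec_flag(codec: str) -> str:
    """Return a --input-codec flag for detectnet, or raise on an unknown codec."""
    codec = codec.lower()
    if codec not in SUPPORTED_CODECS:
        raise ValueError(f"unsupported codec: {codec}")
    return f"--input-codec={codec}"
```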

Finally, install the jetson-stats tool

NVIDIA currently provides no official system performance monitor, but we can use the jetson-stats toolkit to monitor system performance and various metrics in real time. Project page: https://github.com/rbonghi/jetson_stats

Run the following command to install:

sudo -H pip3 install -U jetson-stats

If it complains that pip3 is missing, run the following command first:

sudo apt-get install python3-pip

After the installation completes, reboot the system, then run the jtop command directly:

sudo jtop
# If it fails to start, restart the service manually:
# systemctl restart jetson_stats.service

*(image)*
Besides the jtop tool, jetson-stats also installs jetson_config, jetson_release, jetson_swap, and other utilities. See the project page for their meanings and usage.
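jetson-stats also exposes the same readings through a Python API, which is handy for logging metrics from your own scripts. A minimal sketch; it only works on a Jetson where the jetson_stats service is running:

```python
# Sketch of the jetson-stats Python API; requires the jetson_stats service.
from jtop import jtop

with jtop() as jetson:
    if jetson.ok():
        print(jetson.stats)  # dict of CPU/GPU load, RAM, temperature, power readings
```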


Origin blog.csdn.net/delpanz/article/details/127223321