NXP i.MX 8M Mini Video Development Case Sharing (Part 1)

This article introduces video development cases for the i.MX 8M Mini, covering GStreamer-based video capture, encoding/decoding, algorithm processing, display and storage, GigE industrial camera test instructions, and an H.265 video hardware decoding demonstration.

Note: The cases in this article were evaluated on the Chuanglong Technology TLIMX8-EVM, a high-performance evaluation board based on the NXP i.MX 8M Mini, a heterogeneous multi-core processor with a quad-core ARM Cortex-A53 and a single-core ARM Cortex-M4. It consists of a core board and an evaluation (carrier) board. The ARM Cortex-A53 (64-bit) main processing unit runs at up to 1.6GHz, and the ARM Cortex-M4 real-time processing unit runs at up to 400MHz. The processor is fabricated in a 14nm process and supports 1080p60 H.264 hardware encoding/decoding, 1080p60 H.265 hardware decoding, and a GPU graphics accelerator. The core board has passed professional PCB layout verification and high/low-temperature testing, is stable and reliable, and is suited to a wide range of industrial application environments. The front view is shown below:

The evaluation board file system supports the GStreamer library by default. Execute the following command to view the GStreamer usage instructions:

Target# gst-inspect-1.0 -h

Figure 1

GStreamer is an open source multimedia framework used to build streaming media applications. Its goal is to simplify the development of audio and video applications. It can currently process multimedia data in a variety of formats such as MP3, Ogg, MPEG-1, MPEG-2, AVI and QuickTime. GStreamer development reference: https://gstreamer.freedesktop.org.

The modules used in these cases are as follows:

Table 1

Case name               Applicable module
gst_mjpeg_dec_cv_edge   Zhonghui ZH5640-MIC-001 (USB OV5640 camera)
gst_rtsp_dec_display    Hikvision DS-IPC-B12HV2-IA (network camera)
gige_capture            Basler acA720-290gm (GigE industrial camera)

1 gst_mjpeg_dec_cv_edge case

1.1 Case Description

In this case, the GStreamer API is used on the ARM Cortex-A53 to capture an MJPEG video stream from a USB camera and decode it in software. OpenCV then applies the Sobel (edge detection) algorithm to each image, and the processed images are displayed on the screen in real time.

The program workflow block diagram is as follows:

Figure 2

The OpenCV version used in this case is 4.4.0; the OpenCV development reference is available at https://docs.opencv.org/4.4.0.

1.2 Case Test

Please connect the hardware as shown in the figure below: connect the USB OV5640 camera (Zhonghui ZH5640-MIC-001) to the USB2 HOST interface of the evaluation board, and connect an HDMI display to the HDMI OUT interface of the evaluation board.

 

Figure 3

The development case is located in the product documentation directory "4-Software Documentation\Demo\video-demos\". Copy the gst_mjpeg_dec_cv_edge executable from the case's bin directory to the evaluation board file system. After the evaluation board is powered on and has entered the file system, execute the following command to query the camera's frame rate, resolution, device node and other parameters.

Target# gst-device-monitor-1.0

 

Figure 4

In the directory containing the gst_mjpeg_dec_cv_edge file, execute the following command to query the gst_mjpeg_dec_cv_edge program's parameter descriptions, as shown below.

Target# ./gst_mjpeg_dec_cv_edge --help

 

Figure 5

Execute the following commands to capture the video stream from the "/dev/video1" device node. The stream is processed with the Sobel algorithm on the Cortex-A53, and the processed images are displayed on the screen in real time.

Target# systemctl start [email protected] //Open the Weston interface

Target# ./gst_mjpeg_dec_cv_edge -d /dev/video1 -w 1920 -h 1080 -f 15

 

Figure 6

Table 2

Parameter                   Description
Image width, Image height   Camera capture resolution
Duration                    Capture duration
Sobel cost time             Average image edge-processing time
Capture framerate           Image capture frame rate
Sobel framerate             Image edge-processing frame rate

From the printed results, the average image-processing time is about 52.57ms, the image capture frame rate is 15fps, and the image-processing frame rate is 15fps.

Remark:

(1) The average image edge-processing time is the time consumed by the OpenCV Sobel call used for edge detection.

(2) The image edge-processing frame rate is calculated from the total time spent per frame, which includes fetching data from the GStreamer queue, performing edge detection, and passing the data back to the GStreamer queue.

The effect of the Sobel algorithm used in this case is shown below.

 

Figure 7

In this case, the Cortex-A53 performs image capture, software decoding and algorithm processing. The CPU usage in this test is 120%, as shown in the figure below.

Remarks: i.MX 8M Mini has 4 Cortex-A53 cores, and the CPU usage can reach up to 400%.

 

Figure 8

1.3 Case analysis

1.3.1 GStreamer Pipeline Diagram

 

Figure 9

Equivalent pipeline commands are shown below. They are for illustration only and cannot be run directly in the terminal.

Appsink: gst-launch-1.0 v4l2src device=/dev/video1 ! 'image/jpeg, width=1920, height=1080, framerate=15/1' ! jpegdec ! appsink emit-signals=true sync=false

Appsrc: gst-launch-1.0 appsrc stream-type=0 format=time ! 'video/x-raw, format=I420, width=1920, height=1080, framerate=15/1' ! imxvideoconvert_g2d ! autovideosink sync=false

1.3.2 Key code description

(1) Initialize GStreamer and create a Pipeline.

 

Figure 10
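The actual code is shown in the figure above. For reference, the fragments in this walkthrough sketch what each step might look like with the GStreamer C API. They are an illustrative reconstruction, not the case's source code: variable names, element names and the fixed 1920x1080@15 values stand in for the program's command-line options, and only the capture (appsink) side is written out. Step (1) initializes the library and creates the two pipelines:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <gst/app/gstappsrc.h>

int main(int argc, char *argv[])
{
    /* Initialize GStreamer before making any other GStreamer call. */
    gst_init(&argc, &argv);

    /* One pipeline captures and software-decodes the MJPEG stream
     * (appsink side); the other displays the Sobel-processed frames
     * (appsrc side), matching the diagram in Figure 9. */
    GstElement *capture_pipeline = gst_pipeline_new("capture-pipeline");
    GstElement *display_pipeline = gst_pipeline_new("display-pipeline");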

(2) Create and initialize the GStreamer component, and then check the initialization of the component.

 

Figure 11
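Continuing the sketch, step (2) creates the elements of the capture pipeline with gst_element_factory_make() and aborts if any plugin is missing (element names are illustrative):

    /* Elements of the capture pipeline, mirroring the appsink example
     * pipeline in section 1.3.1. */
    GstElement *source  = gst_element_factory_make("v4l2src", "camera-source");
    GstElement *decoder = gst_element_factory_make("jpegdec", "mjpeg-decoder");
    GstElement *sink    = gst_element_factory_make("appsink", "frame-sink");

    /* A NULL return means the corresponding plugin is not installed. */
    if (!capture_pipeline || !display_pipeline || !source || !decoder || !sink) {
        g_printerr("Failed to create one or more GStreamer elements\n");
        return -1;
    }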

(3) Configure the parameters of each component.

 

Figure 12
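Step (3) sets the element properties: the camera device node, the MJPEG caps, and the appsink behaviour. In this sketch the values that would normally come from the -d/-w/-h/-f options are hard-coded:

    /* Camera device node (normally taken from the -d option). */
    g_object_set(G_OBJECT(source), "device", "/dev/video1", NULL);

    /* Request MJPEG at the desired resolution and frame rate; these caps
     * are applied when the elements are linked in the next step. */
    GstCaps *caps = gst_caps_new_simple("image/jpeg",
                                        "width",     G_TYPE_INT, 1920,
                                        "height",    G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, 15, 1,
                                        NULL);

    /* Let appsink emit "new-sample" for every decoded frame and never
     * throttle the pipeline on presentation timestamps. */
    g_object_set(G_OBJECT(sink), "emit-signals", TRUE, "sync", FALSE, NULL);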

(4) Link components to Pipeline.

 

Figure 13
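Step (4) adds the elements to the pipeline and links them, inserting the MJPEG caps between the camera source and the decoder:

    gst_bin_add_many(GST_BIN(capture_pipeline), source, decoder, sink, NULL);
    if (!gst_element_link_filtered(source, decoder, caps) ||
        !gst_element_link(decoder, sink)) {
        g_printerr("GStreamer elements could not be linked\n");
        gst_object_unref(capture_pipeline);
        return -1;
    }
    gst_caps_unref(caps);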

(5) Create a bus so that the application can receive the Pipeline message, change the Pipeline state to playing, and make it start working.

 

Figure 14
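Step (5) obtains the pipeline bus, switches the pipeline to PLAYING, and then waits for an error or end-of-stream message:

    GstBus *bus = gst_element_get_bus(capture_pipeline);
    gst_element_set_state(capture_pipeline, GST_STATE_PLAYING);

    /* Block here; capture, Sobel processing and display run in the
     * pipeline threads and in the worker threads sketched below. */
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
            (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));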

(6) Stop the Pipeline.

 

Figure 15
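Step (6) stops the pipelines and releases the GStreamer objects:

    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(capture_pipeline, GST_STATE_NULL);
    gst_element_set_state(display_pipeline, GST_STATE_NULL);
    gst_object_unref(capture_pipeline);
    gst_object_unref(display_pipeline);
    return 0;
}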

(7) Obtain the decoded image data and submit it for edge detection; if edge detection is already in progress, discard the frame.

 

Figure 16
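For step (7), the sketch assumes the callback below was registered on the appsink with g_signal_connect(sink, "new-sample", G_CALLBACK(on_new_sample), NULL), and that these functions and globals sit above main() in the file. The callback pulls the decoded sample, counts it for the statistics, and either hands it to the Sobel thread or drops it when the previous frame is still being processed:

/* State shared between the appsink callback and the worker threads. */
static GMutex     g_frame_lock;
static GCond      g_frame_cond;
static GstSample *g_pending_sample = NULL;
static gboolean   g_sobel_busy     = FALSE;
static gboolean   g_running        = TRUE;
static gint       g_capture_frames = 0;
static gint       g_sobel_frames   = 0;
static gdouble    g_sobel_total_ms = 0.0;

/* appsink "new-sample" callback: runs for every decoded frame. */
static GstFlowReturn on_new_sample(GstAppSink *sink, gpointer /*user_data*/)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return GST_FLOW_ERROR;
    g_atomic_int_inc(&g_capture_frames);

    if (g_sobel_busy) {
        /* The previous frame is still being processed: drop this one so
         * capture never blocks behind the Sobel computation. */
        gst_sample_unref(sample);
        return GST_FLOW_OK;
    }

    /* Hand the frame to the Sobel thread and wake it up. */
    g_mutex_lock(&g_frame_lock);
    g_pending_sample = sample;
    g_sobel_busy = TRUE;
    g_cond_signal(&g_frame_cond);
    g_mutex_unlock(&g_frame_lock);
    return GST_FLOW_OK;
}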

(8) In the edge-detection thread (sobel_thread), the OpenCV Sobel algorithm performs edge detection on the image data, and the processed image is pushed to the appsrc pipeline.

 

Figure 17
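A sketch of the edge-detection thread is shown below. It assumes the thread was started with g_thread_new() and receives the display pipeline's appsrc (configured for I420 video as in section 1.3.1) as its argument. The Y (luma) plane of each decoded frame is wrapped in a cv::Mat, filtered with cv::Sobel, and pushed back out as a new I420 buffer with neutral chroma; the real case may combine horizontal and vertical gradients.

#include <opencv2/imgproc.hpp>   /* goes with the other includes */

/* In the real program the frame size comes from the -w/-h options. */
static const int WIDTH  = 1920;
static const int HEIGHT = 1080;

/* Edge-detection worker: waits for a frame handed over by on_new_sample(),
 * runs cv::Sobel on its luma plane, and pushes the result to appsrc. */
static gpointer sobel_thread(gpointer data)
{
    GstElement *appsrc = GST_ELEMENT(data);

    while (g_running) {
        g_mutex_lock(&g_frame_lock);
        while (g_pending_sample == NULL && g_running)
            g_cond_wait(&g_frame_cond, &g_frame_lock);
        GstSample *sample = g_pending_sample;
        g_pending_sample = NULL;
        g_mutex_unlock(&g_frame_lock);
        if (!sample)
            continue;

        GstBuffer *inbuf = gst_sample_get_buffer(sample);
        GstMapInfo in;
        gst_buffer_map(inbuf, &in, GST_MAP_READ);

        /* jpegdec outputs I420: treat the Y plane as an 8-bit grey image. */
        cv::Mat grey(HEIGHT, WIDTH, CV_8UC1, in.data);
        cv::Mat edges;
        gint64 t0 = g_get_monotonic_time();
        cv::Sobel(grey, edges, CV_8U, 1, 0);   /* horizontal gradient */
        g_sobel_total_ms += (g_get_monotonic_time() - t0) / 1000.0;

        /* Build an I420 output buffer: edges in Y, neutral (grey) chroma. */
        GstBuffer *out = gst_buffer_new_allocate(NULL, WIDTH * HEIGHT * 3 / 2, NULL);
        gst_buffer_fill(out, 0, edges.data, WIDTH * HEIGHT);
        gst_buffer_memset(out, WIDTH * HEIGHT, 128, WIDTH * HEIGHT / 2);
        gst_app_src_push_buffer(GST_APP_SRC(appsrc), out);   /* takes ownership */

        gst_buffer_unmap(inbuf, &in);
        gst_sample_unref(sample);
        g_atomic_int_inc(&g_sobel_frames);
        g_sobel_busy = FALSE;
    }
    return NULL;
}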

(9) The time thread (time_thread) counts the program running time in seconds and outputs the average image-processing time and the image capture and processing frame rates.

 

Figure 18

 

Figure 19
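Finally, a sketch of the statistics thread: once per second it reads and resets the frame counters maintained by the other threads and prints the running time, the average Sobel time over the last interval, and the capture and processing frame rates, roughly matching the output fields described in Table 2.

/* Statistics worker: one report line per second. */
static gpointer time_thread(gpointer /*data*/)
{
    guint seconds = 0;
    while (g_running) {
        g_usleep(G_USEC_PER_SEC);   /* 1-second reporting interval */
        seconds++;

        gint captured = g_atomic_int_get(&g_capture_frames);
        gint sobeled  = g_atomic_int_get(&g_sobel_frames);
        g_atomic_int_set(&g_capture_frames, 0);
        g_atomic_int_set(&g_sobel_frames, 0);

        g_print("Duration: %us, Sobel cost time: %.2f ms, "
                "Capture framerate: %d fps, Sobel framerate: %d fps\n",
                seconds,
                sobeled ? g_sobel_total_ms / sobeled : 0.0,  /* avg over last second */
                captured, sobeled);
        g_sobel_total_ms = 0.0;
    }
    return NULL;
}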

1.4 Case Compilation

Copy the source code of the case to the Ubuntu working directory, enter the src source code directory, and execute the following command to load the SDK environment variables.

Host# source /home/tronlong/SDK/environment-setup-aarch64-poky-linux

 

Figure 20

Execute the make command to compile.

Host# make

 

Figure 21

After compiling, an executable file gst_mjpeg_dec_cv_edge will be generated in the current directory.

Figure 22

2 gst_rtsp_dec_display case

2.1 Case Description

In this case, the GStreamer API is used on the Cortex-A53 to obtain an H.264 video stream from a network camera; the stream is then hardware-decoded by the VPU, and the decoded images are displayed on the screen in real time.

The program workflow block diagram is as follows:

 

Figure 23

2.2 Case Test

 

Figure 24

Please connect the hardware according to the figure above. The development case is located in the product documentation directory "4-Software Documentation\Demo\video-demos\". Copy the gst_rtsp_dec_display executable from the case's bin directory to the evaluation board file system. In the directory containing the gst_rtsp_dec_display file, execute the following command to query the program's parameter descriptions, as shown in the figure below.

Target# ./gst_rtsp_dec_display --help

 

Figure 25

The IP address of the network camera used in this case is 192.168.0.178. Please ensure that the evaluation board and the network camera are in the same network segment.

 

Figure 26

Execute the following commands to capture the video stream and display the decoded video on the screen in real time.

Target# systemctl start [email protected] //Open the Weston interface

Target# ./gst_rtsp_dec_display -u rtsp://admin:[email protected]:554/h264/ch1/main/av_stream -w 1920 -h 1080 -f 25 -s 1 //192.168.0.178 is the IP address of the camera

 

Figure 27

 

Figure 28

The CPU usage rate in this test is 63.7%, as shown in the figure below.

Remarks: i.MX 8M Mini has 4 Cortex-A53 cores, and the CPU usage can reach up to 400%.

 

Figure 29

2.3 Latency test

Latency test method: point the camera at an online stopwatch displayed on a PC monitor; the time difference between what the PC monitor shows and what the evaluation board display shows is the latency. Run the test several times and take the average of the results.

Table 3

No.       Evaluation board display   PC display     Latency (ms)
1         00:01:28.393               00:01:27.951   442
2         00:02:29.024               00:01:28.551   473
3         00:03:38.792               00:01:38.315   476
4         00:04:20.983               00:01:20.513   470
5         00:05:58.084               00:01:57.635   449
Average   /                          /              462

2.4 Case analysis

2.4.1 Schematic diagram of GStreamer pipeline

 

Figure 30
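Figure 30 shows the pipeline used by the program. As a rough command-line equivalent (for reference only; it may differ in detail from the case code, and it assumes the stock i.MX plugins vpudec and imxvideoconvert_g2d present in the board's file system, with an illustrative latency value):

gst-launch-1.0 rtspsrc location=rtsp://admin:[email protected]:554/h264/ch1/main/av_stream latency=200 ! rtph264depay ! h264parse ! vpudec ! imxvideoconvert_g2d ! autovideosink sync=false

Here rtph264depay extracts the H.264 elementary stream from RTP, h264parse prepares it for the decoder, vpudec performs the hardware decoding on the VPU, and imxvideoconvert_g2d converts the decoded frames with the G2D engine before display.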

2.4.2 Key code description

(1) Initialize GStreamer and create a Pipeline.

 

Figure 31

(2) Create and initialize the GStreamer component, and then check the initialization of the component.

 

Figure 32

(3) Configure the parameters of each component.

 

Figure 33
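As a reference for this step, a sketch of how the RTSP source and the sink might be configured is shown below; the property values and variable names are illustrative, and the URL is the one used in section 2.2. Note that rtspsrc creates its source pad dynamically, so in the real code the depayloader is typically linked from a "pad-added" callback rather than with a static link in step (4).

    /* RTSP stream address (normally taken from the -u option) and a small
     * jitter buffer to keep the display latency low. */
    g_object_set(G_OBJECT(rtsp_source),
                 "location", "rtsp://admin:[email protected]:554/h264/ch1/main/av_stream",
                 "latency",  200,   /* milliseconds, illustrative value */
                 NULL);

    /* Display frames as soon as they are decoded instead of waiting for
     * their presentation timestamps. */
    g_object_set(G_OBJECT(video_sink), "sync", FALSE, NULL);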

(4) Link components to Pipeline.

 

Figure 34

(5) Create a bus so that the application can receive Pipeline messages, change the Pipeline status to playing, and make it work.

 

Figure 35

(6) Stop the pipeline.

 

Figure 36

2.5 Case Compilation

Copy the source code of the case to the Ubuntu working directory, enter the src source directory and execute the following command to load the SDK environment variables.

Host# source /home/tronlong/SDK/environment-setup-aarch64-poky-linux

 

Figure 37

Execute the make command to compile.

Host# make

 

Figure 38

After compiling, an executable file gst_rtsp_dec_display will be generated in the current directory.

 

Figure 39

To learn more about embedded applications, follow Tronlong (Chuanglong Technology).

Source: https://blog.csdn.net/Tronlong/article/details/131438792