Code analysis of GST-VCU-APP part of HDMI VCU coding project

APP overview

GST_VCU_APP is control software built on the GStreamer library. Through the APP, the IP cores on the PL side can be managed: the APP configures the parameters of the PL-side IP cores, controls the encoding and decoding of the VCU, packetizes the VCU-encoded data, and sends the packets over Ethernet using the UDP protocol.
So how does the APP manage the IP cores? Let's expand on this question. From the PS/PL block diagram of the ZCU106, the interaction between PS and PL takes place over the AXI bus.
[Figure: PS/PL block diagram of the ZCU106]
The APP is software running on a Linux system, and the Linux system is stored in the PS-side DDR, which is the APU's main memory. The APU reads the instructions stored in DDR through the CCI, and the processor writes control signals to the PL-side IP cores over the AXI bus.
In the Vivado project, each IP core is assigned an address. The device tree on the PS side describes the corresponding IP core's information (including its address). After the kernel is loaded, the system registers drivers according to the device tree and generates virtual device nodes. The beauty is here: each virtual node in the Linux system corresponds to an IP core, so user space can operate the IP core through its node (this is a standard mechanism on ARM processors, but the data exchange here goes over the AXI protocol, which is more efficient and saves resources).
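To make the node mechanism concrete, here is a minimal user-space sketch, assuming a V4L2 video node created by such a driver; the path /dev/video0 is an assumption, since the actual /dev entry depends on the device tree.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    /* The node name is an assumption; the real entry is created by the
     * driver that the device tree binds to the IP core. */
    int fd = open("/dev/video0", O_RDWR);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* Standard V4L2 ioctl: ask the driver (and thus the IP core behind
     * it) to identify itself. */
    struct v4l2_capability cap;
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0)
        printf("driver: %s, card: %s\n",
               (char *)cap.driver, (char *)cap.card);

    close(fd);
    return 0;
}
```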
Software overall framework (see UG1250, p. 31; note that for engineering questions, the Xilinx website usually has related documents or blog introductions):
[Figure: overall software framework, from UG1250 p. 31]
VCU_GST_APP is a general-purpose application that supports multiple functions (it can control both video and audio codecs; this project uses only video encoding). The following figure shows the calling relationships between the application's software libraries.
[Figure: calling relationships between the application's software libraries]
[Figures: software-library call relationships, and the management framework between drivers and hardware devices]
As mentioned earlier, the system registers a virtual node for each IP core: the IP core can be configured at runtime through the media node, and operated through the video node. HDMI Rx separates the received signal into video and audio (audio is not used here), and the VPSS then converts the video signal; Frame Buffer Write (frmbuf Wr) writes the data into the PS-side DDR. How does frmbuf Wr know which DDR address to write to? At startup the PS side configures the Frmbuf Wr IP core's registers so that it works in the specified mode; the PS-side Linux system then sends the write address to frmbuf Wr over the AXI bus to tell it where the data should go. After a full frame has been written, frmbuf Wr sends an interrupt to the PS-side processor. The VCU hard core then reads the unencoded data from DDR (reads and writes go through DMA), encodes it, and puts it back into DDR; finally the PS side packs the encoded data into UDP packets and sends them out.
In this process, memory must be set aside in DDR to hold the data transferred from the PL side. This allocation is done by the V4L2 driver: it allocates a DMA_BUFFER and hands the resulting handle to the V4L2 plug-in of the GStreamer library above it. The V4L2 plug-in passes the handle to the gst-omx encoder plug-in, and the encoder plug-in passes the file handle to the encoder driver, which can then tell the PL-side VCU where to read the data for encoding. (See section 3.3 on GStreamer for a detailed introduction to plug-ins.) The following figure shows the shared-DMABuffer relationships; DRM/KMS is display-related. (Reference: UG1250, ZCU106 TRD, p. 47.)
The V4L2 driver allocates multiple DMABUFs for ping-pong processing. Suppose, for example, three buffers are allocated: at the start, data is stored in buffer_1 and the VCU reads from it for encoding; while that encoding is in progress, frmbuf Wr stores the next frame in buffer_2; when the VCU has finished encoding buffer_1 and the result has been packed and sent, it moves on to buffer_2, while frmbuf Wr starts filling buffer_3, and so on in a cycle.
[Figure: shared DMABuffer relationships, from UG1250 ZCU106 TRD p. 47]
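In the actual APP this allocation happens inside the V4L2 driver and plug-in; the standalone sketch below only shows the V4L2 mechanism (VIDIOC_REQBUFS plus VIDIOC_EXPBUF) by which a capture buffer becomes a shareable DMABUF handle. The buffer count of three mirrors the ping-pong example above; the buffer type is an assumption.

```c
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Ask the V4L2 capture driver for three buffers and export each one as
 * a DMABUF file descriptor that another driver (e.g. the encoder) can
 * import. 'fd' is an already-opened capture node. */
int export_ring(int fd, int dmabuf_fd[3])
{
    struct v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = 3;                            /* ping-pong ring of 3 */
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
        return -1;

    for (unsigned int i = 0; i < req.count; i++) {
        struct v4l2_exportbuffer exp;
        memset(&exp, 0, sizeof(exp));
        exp.type  = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        exp.index = i;
        if (ioctl(fd, VIDIOC_EXPBUF, &exp) < 0)
            return -1;
        dmabuf_fd[i] = exp.fd;   /* this handle is what gets passed on */
    }
    return 0;
}
```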

APP framework flow chart

The following program flow charts organize the information extracted from the code. If there is any misunderstanding or inaccuracy, please point it out so it can be corrected.
Main function:
[Figure: main-function flow chart]

Read configuration file
[Figure: configuration-file reading flow chart]

Create media device
[Figure: media-device creation flow chart]
Bandwidth monitoring
[Figure: bandwidth-monitoring flow chart]
vgst_config_options
[Figure: vgst_config_options flow chart]

GStreamer library usage flow
[Figure: GStreamer usage flow chart]
Pipeline used in this project
[Figure: pipeline used in this project]
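For orientation, a hedged sketch of building an equivalent pipeline with gst_parse_launch. The element chain follows the project description (V4L2 capture, caps filter, VCU H.265 encoder, MPEG-TS mux, RTP payloader, UDP sink); the device path, caps, host, and port are placeholders, not values from the APP.

```c
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* Device, caps, host, and port are illustrative. Depending on the
     * plug-in versions, a parser (e.g. h265parse) may be needed
     * between the encoder and the muxer. */
    GError *err = NULL;
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 io-mode=dmabuf ! "
        "video/x-raw,width=1920,height=1080,framerate=60/1 ! "
        "omxh265enc ! mpegtsmux ! rtpmp2tpay ! "
        "udpsink host=192.168.1.100 port=5004", &err);
    if (!pipeline) {
        g_printerr("parse error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_main_loop_run(loop);   /* blocks; quit from a bus callback */
    return 0;
}
```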

Development of the omxh265enc plug-in
[Figure: omxh265enc plug-in flow chart]
capsfilter
[Figure: capsfilter flow chart]
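For reference, a capsfilter is typically configured like this in C; the resolution, format, and frame rate here are assumptions, not the project's actual values.

```c
#include <gst/gst.h>

/* Build a capsfilter element that pins the raw-video format between
 * the source and the encoder; the values are illustrative. */
GstElement *make_capsfilter(void)
{
    GstElement *filter = gst_element_factory_make("capsfilter", "filter");
    GstCaps *caps = gst_caps_new_simple("video/x-raw",
                                        "width",  G_TYPE_INT, 1920,
                                        "height", G_TYPE_INT, 1080,
                                        "framerate", GST_TYPE_FRACTION, 60, 1,
                                        NULL);
    g_object_set(filter, "caps", caps, NULL);
    gst_caps_unref(caps);
    return filter;
}
```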
rtpmp2tpay
[Figure: rtpmp2tpay flow chart]
mpegtsmux
[Figure: mpegtsmux flow chart]
DMA_BUFFER
[Figure: DMA_BUFFER flow chart]

Important functions in APP

(1) Necessary concept, the media device framework (https://www.jianshu.com/p/83dcdc679901): it abstracts the hardware into entities and connects them in a certain order to form a pipeline.
Register media device: media_device_register(struct media_device *mdev);
Unregister media device: media_device_unregister(struct media_device *mdev);
Media-related API introduction:
http://www.staroceans.org/myprojects/v4l2-utils/utils/media-ctl/mediactl.h
(2) The vgst_init function instantiates the media devices according to the existing media nodes.
2.1 struct media_device *media_device_new(const char *devnode); instantiates a media device from an existing device node; the devices must be enumerated before they are used.

2.2 void media_device_unref(struct media_device *media); decreases the reference count of an instantiated device by 1; when the count reaches 0, the device is released.
Note: each IP core corresponds to a device node, and the APP interacts with the corresponding IP core through the library.
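A minimal sketch of the libmediactl calls described above; the node path is an assumption, and the header location follows the v4l-utils source tree (utils/media-ctl), so it may differ on a given system.

```c
#include <stdio.h>
#include "mediactl.h"   /* from v4l-utils, utils/media-ctl/mediactl.h */

int main(void)
{
    /* Instantiate a media device from an existing node ... */
    struct media_device *media = media_device_new("/dev/media0");
    if (!media)
        return 1;

    /* ... enumerate its entities, pads, and links before use ... */
    if (media_device_enumerate(media) < 0) {
        media_device_unref(media);
        return 1;
    }

    printf("entities: %u\n", media_get_entities_count(media));

    /* ... and drop the reference; the device is freed at count 0. */
    media_device_unref(media);
    return 0;
}
```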

(3) The GStreamer library is initialized in the vgst_config_options function, and the device is found and configured according to the ip_param of app_data: first the video format of the input source is read, and once obtained, this format information is written into the DRM device (the DRM driver handles DMA, memory management, resource locking, and secure hardware access). Only one encoding path is used here, so the case in the vlib_src_config function that fetches the video format matches HDMI1. In the vcap_hdmi_set_media_ctrl function, the video format is obtained through the V4L2 driver.
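As a sketch of how a video format is typically read through the V4L2 driver, along the lines of what vcap_hdmi_set_media_ctrl is described as doing (the node path is an assumption):

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);   /* node path assumed */
    if (fd < 0)
        return 1;

    /* Ask the capture driver for the currently detected format. */
    struct v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (ioctl(fd, VIDIOC_G_FMT, &fmt) == 0)
        printf("%ux%u, pixelformat 0x%08x\n",
               fmt.fmt.pix.width, fmt.fmt.pix.height,
               fmt.fmt.pix.pixelformat);

    close(fd);
    return 0;
}
```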

(4) Memory-mapping function: void* mmap(void* start, size_t length, int prot, int flags, int fd, off_t offset);
Note: memory mapping, in short, establishes a region of memory shared between user space and kernel space; once the mapping succeeds, the user's modifications to this region are directly visible in kernel space, and modifications made in kernel space are likewise directly visible to user space. This makes kernel-space <----> user-space operations that move large amounts of data very efficient. The mmap system call also lets processes share memory by mapping the same ordinary file. After an ordinary file is mapped into a process's address space, the process can access the file like ordinary memory without calling read(), write(), and so on. mmap does not allocate space; it only maps the file into the calling process's address space (though it does consume virtual memory), after which the file can be written with memcpy and similar operations instead of write().
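A small, self-contained example of that pattern (the file name is arbitrary):

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDWR);
    if (fd < 0)
        return 1;

    struct stat st;
    if (fstat(fd, &st) < 0)
        return 1;

    /* Map the whole file; afterwards the file contents can be read
     * and written through 'p' without read()/write() calls. */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    memcpy(p, "hello", 5);      /* writes through to the file */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```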

(5) The performance monitor (APM) is initialized in the perf_monitor_init() function, which creates a thread that continuously reads the VCU's bandwidth data on the AXI bus (the uPerfMon_getCounterValue function in the thread reads the data at the corresponding bus address, which is the VCU bandwidth data) and stores it in the vcu_apm_counter array; this is later used in the callback function time_cb().

(6) The time_cb() function computes the bandwidth of the codec and of the pipeline, and is called back once per second; whether the result is printed depends on the configuration file.
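The once-per-second callback pattern looks roughly like this with GLib; the bandwidth computation itself is project-specific and omitted here.

```c
#include <glib.h>

/* Called once per second by the GLib main loop; returning TRUE keeps
 * the timer armed. In the APP, this is where time_cb() would read the
 * APM counters and compute codec/pipeline bandwidth. */
static gboolean time_cb(gpointer user_data)
{
    g_print("tick: compute (and optionally print) bandwidth here\n");
    return TRUE;
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);
    g_timeout_add_seconds(1, time_cb, NULL);
    g_main_loop_run(loop);
    return 0;
}
```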

(7) parse_config_file(configuration file path) opens the configuration file read-only, parses the configuration entries in the file, and assigns them to the app_data structure (where app_data is consumed is discussed below).
(8) get_encoder_config(): uses the configuration-file contents that were read in to configure the corresponding encoder parameters.
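Applying the parsed values to the encoder element would look roughly like this; the property names are assumptions based on the Zynq VCU gst-omx encoder and should be checked with gst-inspect-1.0.

```c
#include <gst/gst.h>

/* Apply encoder parameters taken from the parsed configuration file.
 * Property names and units are assumptions; verify with
 * `gst-inspect-1.0 omxh265enc` on the target. */
void apply_encoder_config(GstElement *enc, guint bitrate_kbps,
                          guint gop_length)
{
    g_object_set(enc,
                 "target-bitrate", bitrate_kbps,
                 "gop-length",     gop_length,
                 NULL);
}
```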

(9) The pipeline's callback function, bus_callback, is in the vgst_utils.c file, and the pipeline's elements are all held in the play_ptr structure.
bus_callback: each pipeline has its own bus, and the APP only needs to install its own message handler on that bus (created here with gst_bus_add_watch). While the GLib main loop runs, it polls whether this handler has new messages; when a message is collected, the bus invokes the corresponding callback (here, bus_callback) to handle it. A minimal sketch of this pattern follows the notes below.
Note:
1) bus_callback first inspects the received message to determine whether the pipeline raised an error, the data stream ended, or metadata was found in the stream. When an end-of-stream message arrives, it looks up which element on the bus emitted it.
2) The g_main_loop_run() function starts the GLib main loop. When an event occurs it processes it; if no event occurs it sleeps and waits, i.e. it blocks inside this function.
3) The value of app_data.playback is always false, so the do...while loop in main does not actually loop, and no new pipeline is created inside it; the key information-exchange function for the pipeline therefore remains bus_callback. The VCU encoding parameters are not reconfigured for every transmitted frame; they are configured only once. Each time the VCU finishes encoding a frame it raises an interrupt, and the PS responds to that interrupt.
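A minimal sketch of the bus-watch pattern, with message handling shortened to the cases the notes mention:

```c
#include <gst/gst.h>

/* Message handler installed with gst_bus_add_watch(); the GLib main
 * loop polls the bus and dispatches messages here. */
static gboolean bus_callback(GstBus *bus, GstMessage *msg, gpointer data)
{
    GMainLoop *loop = data;

    switch (GST_MESSAGE_TYPE(msg)) {
    case GST_MESSAGE_ERROR: {
        GError *err; gchar *dbg;
        gst_message_parse_error(msg, &err, &dbg);
        g_printerr("error from %s: %s\n",
                   GST_OBJECT_NAME(msg->src), err->message);
        g_error_free(err);
        g_free(dbg);
        g_main_loop_quit(loop);
        break;
    }
    case GST_MESSAGE_EOS:          /* end of the data stream */
        g_main_loop_quit(loop);
        break;
    case GST_MESSAGE_TAG:          /* metadata found in the stream */
    default:
        break;
    }
    return TRUE;                   /* keep watching the bus */
}

/* Usage sketch:
 *   GstBus *bus = gst_element_get_bus(pipeline);
 *   gst_bus_add_watch(bus, bus_callback, loop);
 *   g_main_loop_run(loop);   // blocks until quit
 */
```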

(10) vgst_create_pipeline(): creates each element and links them into a pipeline. The app structure contains the various elements, the network IP and port, the device type, and so on. Comparing app_data with app, app_data is almost identical to app apart from some variables needed in the main function; what is special about app is its playback array, which stores the elements that make up the pipeline.
[Figures: vgst_create_pipeline flow charts]

GStreamer library overview

GStreamer is a very powerful and versatile framework for creating streaming-media applications. Its basic design comes from the Oregon Graduate Institute's ideas about video pipelines, and the modularity of its framework also draws on DirectShow's design: GStreamer can seamlessly merge in new plug-ins.

GStreamer's development framework makes it possible to write any type of streaming-media application. It makes writing applications that handle audio, video, or both easier, and it is not limited to audio and video: it can handle any type of data stream. The pipeline design adds almost no overhead on top of the filters actually applied, so it can even be used to build high-end audio applications with strict latency requirements.

The most obvious use of GStreamer is to build a player. GStreamer already supports many file formats, including MP3, Ogg/Vorbis, MPEG-1/2, AVI, QuickTime, mod, and so on; from this perspective GStreamer looks like a player. But its main advantage is that its pluggable components can be connected into arbitrary pipelines, which makes it possible to use GStreamer to write versatile, editable audio/video applications.

The GStreamer framework is based on plug-ins: some plug-ins provide codecs for various multimedia formats, while others provide other functions. All plug-ins can be linked into any defined data-flow pipeline. Pipelines can be edited with a GUI editor and saved as XML files, a design that keeps the overhead of the pipeline library very small. The GStreamer core library is a framework for handling plug-ins, data flow, and media operations, and it also exposes an API to programmers, for when they need to use other plug-ins to write the applications they require.

Source: blog.csdn.net/qq_33827052/article/details/113108598