Xilinx Zynq-7000 series FPGA multi-channel video processing: image scaling + video splicing display, providing engineering source code and technical support

1. Introduction

There is a saying from a CSDN veteran that I firmly believe: if you have never worked on image scaling and video splicing, you can hardly claim to have really worked with FPGAs. This article uses a Xilinx Zynq-7000 series FPGA (Zynq7020) to implement HLS image scaling plus video splicing. The input video source is an OV5640 camera module. The on-chip I2C controller of the Zynq PS is used to configure the OV5640 to 1280x720@30Hz; two custom IP cores then capture the OV5640 DVP video and convert it to RGB888 (both inputs come from the same OV5640, i.e. one camera simulates two input channels); two instances of Xilinx's official Video In to AXI4-Stream IP convert the RGB video streams into AXI4-Stream video streams; two custom HLS image scaling IP cores then scale the input video to an arbitrary size, configured from the Zynq SDK software, which in essence writes registers over AXI_Lite; two instances of Xilinx's official VDMA IP buffer the video in the PS-side DDR3, with each VDMA configured by the Zynq as a two-frame frame buffer, again via AXI_Lite register writes; Xilinx's official Video Mixer IP then splices the two buffered video channels, with the display position of each video configured from the Zynq, again via AXI_Lite register writes; Xilinx's official Video Timing Controller and AXI4-Stream to Video Out IPs convert the AXI4-Stream video back into an RGB video stream; finally, a custom HDMI transmit IP converts the RGB video into TMDS differential video and sends it to the monitor for display.

A complete set of Vivado 2019.1 engineering source code and technical support is provided. By changing the SDK configuration, three different scaling and splicing schemes can be realized, as follows:

Scheme 1: input OV5640 at 1280x720; output resolution 960x1080, with the two videos spliced side by side (left and right) on the output screen;
Scheme 2: input OV5640 at 1280x720; output resolution 1920x540, with the two videos spliced top and bottom on the output screen;
Scheme 3: input OV5640 at 1280x720; output resolution 960x540, with the two videos spliced at the top-left and bottom-right corners of the output screen;

For a detailed output demonstration of each scheme, see the "Board debugging, verification and demonstration" chapter later in this article. Switching between the three schemes only requires modifying the SDK software code; the FPGA logic project does not need to be changed;

This blog describes in detail the design of Xilinx Zynq-7000 series FPGA multi-channel video processing: image scaling + video splicing display. The engineering code can be synthesized, implemented and debugged on the board, and can be transplanted directly into your own projects. It is suitable for project development by university and graduate students, for working engineers who want to learn and improve, and for use in high-speed interface or image processing applications in medical, military and other industries. The whole project uses the Zynq PS for IP configuration, and this configuration runs as C code in the SDK, so the project consists of two parts: the FPGA logic design and the SDK software design. It therefore requires combined FPGA and embedded C skills and is not suitable for complete beginners;

Complete, working engineering source code and technical support are provided;
Instructions for obtaining the source code and technical support are at the end of the article; please read to the end;

Disclaimer

This project and its source code include parts written by myself as well as parts obtained from public channels on the Internet (including CSDN, the Xilinx website, the Altera website, etc.). If you feel your rights have been infringed, please send me a private message. Accordingly, this project and its source code are limited to readers' or fans' personal study and research, and commercial use is prohibited. If legal issues arise from commercial use by readers or fans, this blog and the blogger bear no responsibility, so please use it with caution.

2. Recommendation of relevant solutions

FPGA image processing solution

My homepage has an FPGA image processing column, which collects the FPGA image processing solutions I currently have, including image scaling, image recognition, image splicing, image fusion, image defogging, image overlay, image rotation, image enhancement, image character overlay, and so on. The column address is below:
Click to go directly

FPGA image scaling solution

My homepage has an FPGA image scaling column, which collects the FPGA image scaling solutions I currently have. In terms of implementation, there is image scaling based on HLS and image scaling based on pure Verilog code; in terms of application, there is single-channel video image scaling, multi-channel video image scaling, and multi-channel video image scaling plus splicing; in terms of input video, there is OV5640 camera video scaling, SDI video scaling, MIPI video scaling, and so on. The column address is below:
Click to go directly

Recommended FPGA video splicing and overlay fusion solution

My homepage has an FPGA video splicing, overlay and fusion column, which collects the FPGA video splicing, overlay and fusion solutions I currently have. In terms of implementation, there is video splicing based on HLS and video splicing based on pure Verilog code; in terms of application, there is single-channel, 2-channel, 3-channel, 4-channel, 8-channel and 16-channel video splicing, video scaling + splicing, and video fusion/overlay; in terms of input video, there is OV5640 camera video splicing, SDI video splicing, CameraLink video splicing, and so on. The column address is below:
Click to go directly

3. Detailed explanation of design ideas

This design uses a Xilinx Zynq-7000 series FPGA (Zynq7020) to implement HLS image scaling plus video splicing. The input video source is an OV5640 camera module. The on-chip I2C controller of the Zynq PS is used to configure the OV5640 to 1280x720@30Hz; two custom IP cores then capture the OV5640 DVP video and convert it to RGB888 (both inputs come from the same OV5640, i.e. one camera simulates two input channels); two instances of Xilinx's official Video In to AXI4-Stream IP convert the RGB video streams into AXI4-Stream video streams; two custom HLS image scaling IP cores then scale the input video to an arbitrary size, configured from the Zynq SDK software, which in essence writes registers over AXI_Lite; two instances of Xilinx's official VDMA IP buffer the video in the PS-side DDR3, with each VDMA configured as a two-frame frame buffer, again via AXI_Lite register writes; Xilinx's official Video Mixer IP then splices the two buffered video channels, with the display position of each video configured from the Zynq, again via AXI_Lite register writes; Xilinx's official Video Timing Controller and AXI4-Stream to Video Out IPs convert the AXI4-Stream video back into an RGB video stream; finally, a custom HDMI transmit IP converts the RGB video into TMDS differential video and sends it to the monitor for display.
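As an illustration of the first step (configuring the OV5640 over I2C/SCCB from the PS), here is a minimal, hedged sketch using the standard XIicPs driver from the Vivado 2019.1 BSP. The register values shown, the device ID macro and the helper name are placeholders for illustration, not the code shipped with this project:

```c
#include "xiicps.h"
#include "xparameters.h"
#include "xstatus.h"

#define OV5640_I2C_ADDR   0x3C                    /* 7-bit SCCB address of the OV5640 */
#define IIC_DEVICE_ID     XPAR_XIICPS_0_DEVICE_ID
#define IIC_SCLK_HZ       100000

static XIicPs Iic;

/* Write one 8-bit value to a 16-bit OV5640 register (hypothetical helper). */
static int ov5640_write_reg(u16 reg, u8 val)
{
    u8 buf[3] = { reg >> 8, reg & 0xFF, val };
    while (XIicPs_BusIsBusy(&Iic))
        ;                                          /* wait for the previous transfer */
    return XIicPs_MasterSendPolled(&Iic, buf, 3, OV5640_I2C_ADDR);
}

int ov5640_init_720p(void)
{
    XIicPs_Config *cfg = XIicPs_LookupConfig(IIC_DEVICE_ID);
    if (cfg == NULL)
        return XST_FAILURE;
    if (XIicPs_CfgInitialize(&Iic, cfg, cfg->BaseAddress) != XST_SUCCESS)
        return XST_FAILURE;
    XIicPs_SetSClk(&Iic, IIC_SCLK_HZ);

    /* A real bring-up writes the full OV5640 register table for 1280x720@30Hz;
     * only two representative writes are shown here. */
    ov5640_write_reg(0x3008, 0x82);                /* software reset (example value) */
    ov5640_write_reg(0x3103, 0x11);                /* clock source select (example)  */
    /* ... remaining 720p register table ... */
    return XST_SUCCESS;
}
```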

A complete set of Vivado 2019.1 engineering source code and technical support is provided. By changing the SDK configuration, three different scaling and splicing schemes can be realized, as follows:

Scheme 1: input OV5640 at 1280x720; output resolution 960x1080, with the two videos spliced side by side (left and right) on the output screen;
Scheme 2: input OV5640 at 1280x720; output resolution 1920x540, with the two videos spliced top and bottom on the output screen;
Scheme 3: input OV5640 at 1280x720; output resolution 960x540, with the two videos spliced at the top-left and bottom-right corners of the output screen;

For a detailed output demonstration of each scheme, see the "Board debugging, verification and demonstration" chapter later in this article. Switching between the three schemes only requires modifying the SDK software code; the FPGA logic project does not need to be changed. The Vivado project source code design block diagram is as follows:
[Figure: Vivado block design of the video processing pipeline]
Block diagram explanation: the arrows represent the data flow direction, the text inside an arrow represents the data format, and the numbers next to the arrows represent the order of the data flow steps;
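For the VDMA stage in the block diagram (each scaled stream is buffered in PS-side DDR3 as a two-frame frame buffer), a minimal sketch using the standard xaxivdma driver might look like the following. The device ID, frame-buffer base address and resolution are assumptions for illustration; the real values live in the project's SDK code, and the driver field and function names should be checked against the xaxivdma.h shipped with your BSP:

```c
#include <string.h>
#include "xaxivdma.h"
#include "xparameters.h"
#include "xstatus.h"

#define VDMA_DEV_ID      XPAR_AXIVDMA_0_DEVICE_ID  /* assumed instance name     */
#define FRAME_BUF_BASE   0x10000000                /* assumed DDR3 base address */
#define NUM_FRAMES       2                         /* two-frame frame buffer    */
#define H_ACTIVE         960                       /* scaled width  (scheme 1)  */
#define V_ACTIVE         1080                      /* scaled height (scheme 1)  */
#define BYTES_PER_PIXEL  3                         /* RGB888                    */

static XAxiVdma Vdma;

/* Configure and start the S2MM (write, camera -> DDR3) side; the MM2S (read,
 * DDR3 -> mixer) side is configured the same way with XAXIVDMA_READ. */
int vdma_start_s2mm(void)
{
    XAxiVdma_Config *cfg = XAxiVdma_LookupConfig(VDMA_DEV_ID);
    XAxiVdma_DmaSetup wr;
    UINTPTR addr[NUM_FRAMES];
    int i;

    if (cfg == NULL || XAxiVdma_CfgInitialize(&Vdma, cfg, cfg->BaseAddress) != XST_SUCCESS)
        return XST_FAILURE;

    memset(&wr, 0, sizeof(wr));
    wr.VertSizeInput     = V_ACTIVE;
    wr.HoriSizeInput     = H_ACTIVE * BYTES_PER_PIXEL;  /* line size in bytes        */
    wr.Stride            = H_ACTIVE * BYTES_PER_PIXEL;
    wr.EnableCircularBuf = 1;                           /* cycle through both frames */

    if (XAxiVdma_DmaConfig(&Vdma, XAXIVDMA_WRITE, &wr) != XST_SUCCESS)
        return XST_FAILURE;

    for (i = 0; i < NUM_FRAMES; i++)
        addr[i] = FRAME_BUF_BASE + (UINTPTR)i * H_ACTIVE * V_ACTIVE * BYTES_PER_PIXEL;
    if (XAxiVdma_DmaSetBufferAddr(&Vdma, XAXIVDMA_WRITE, addr) != XST_SUCCESS)
        return XST_FAILURE;

    return XAxiVdma_DmaStart(&Vdma, XAXIVDMA_WRITE);
}
```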

Introduction to HLS image scaling

Since most IPs used in the project are common Xilinx IPs, this section focuses on the custom HLS image scaling IP:
Maximum supported resolution: 1920x1080@60Hz; the HLS source code can be modified to support larger resolutions, provided your FPGA has enough logic resources;
Input video format: AXI4-Stream;
Output video format: AXI4-Stream;
Requires SDK software configuration, which is essentially register configuration over AXI_Lite;
Currently targeted at Xilinx Zynq-7000 series FPGAs, but the device type of the HLS project can be changed to suit other devices such as Artix-7, Kintex-7, etc.;
The HLS project source code is provided (HLS version 2019.1) and can be modified as needed;
A configuration API is provided; it can be used simply by calling the library functions, see the SDK code for details;
The FPGA logic resources occupied by the module are shown below; please evaluate your FPGA resources carefully:
[Figure: resource utilization of the HLS image scaling IP]
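To illustrate what "register configuration over AXI_Lite" means for an HLS-generated IP, here is a minimal sketch. The control register at offset 0x00 (ap_start / auto_restart bits) follows the standard Vivado HLS AXI4-Lite convention; the width/height register offsets and the base address macros are hypothetical and must be taken from the *_hw.h header that HLS generates for the actual IP in this project:

```c
#include "xil_io.h"
#include "xparameters.h"

/* Standard Vivado HLS AXI4-Lite control register (offset 0x00). */
#define XSCALER_AP_CTRL        0x00
#define AP_CTRL_START          (1u << 0)
#define AP_CTRL_AUTO_RESTART   (1u << 7)

/* Hypothetical argument offsets -- take the real ones from the HLS-generated *_hw.h. */
#define XSCALER_REG_SRC_WIDTH  0x10
#define XSCALER_REG_SRC_HEIGHT 0x18
#define XSCALER_REG_DST_WIDTH  0x20
#define XSCALER_REG_DST_HEIGHT 0x28

/* Configure one HLS scaler instance and let it free-run frame after frame. */
static void scaler_config(u32 base, u32 src_w, u32 src_h, u32 dst_w, u32 dst_h)
{
    Xil_Out32(base + XSCALER_REG_SRC_WIDTH,  src_w);
    Xil_Out32(base + XSCALER_REG_SRC_HEIGHT, src_h);
    Xil_Out32(base + XSCALER_REG_DST_WIDTH,  dst_w);
    Xil_Out32(base + XSCALER_REG_DST_HEIGHT, dst_h);
    Xil_Out32(base + XSCALER_AP_CTRL, AP_CTRL_START | AP_CTRL_AUTO_RESTART);
}

/* Example: scheme 1 scales both 1280x720 inputs to 960x1080 halves.
 * The XPAR_... base addresses are placeholders for the project's real instances. */
void scalers_config_scheme1(void)
{
    scaler_config(XPAR_HLS_SCALER_0_BASEADDR, 1280, 720, 960, 1080);
    scaler_config(XPAR_HLS_SCALER_1_BASEADDR, 1280, 720, 960, 1080);
}
```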

Introduction to Video Mixer

Since most IPs used in the project are common Xilinx IPs, this section focuses on the Video Mixer IP:
Maximum supported resolution: 8K, i.e. it can process video up to 8K;
Supports up to 16 layers of video splicing and overlay, i.e. up to 16 video channels can be spliced;
Input video format: AXI4-Stream;
Output video format: AXI4-Stream;
Requires SDK software configuration, which is essentially register configuration over AXI_Lite;
A configuration API is provided; it can be used simply by calling the library functions, see the SDK code for details;
The module occupies relatively few FPGA logic resources; compared with a self-written HLS video splicer, the official Video Mixer uses roughly 30% fewer resources and is more efficient;
The Video Mixer logic resources are shown below; please evaluate your FPGA resources carefully:
[Figure: resource utilization of the Video Mixer IP]
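The splicing itself comes down to telling the Video Mixer where to place each layer on the output frame. The sketch below lists the layer geometry implied by the three schemes; the struct, the arrays and the helper prototype are illustrative only (the project configures the mixer through its AXI_Lite registers / driver in the SDK code):

```c
#include "xil_types.h"

/* Per-layer window on the 1920x1080 mixer output: position and size in pixels. */
typedef struct {
    u32 x, y, w, h;
} layer_win_t;

static const layer_win_t scheme1[2] = {   /* 960x1080, spliced left / right       */
    { .x = 0,   .y = 0,   .w = 960,  .h = 1080 },
    { .x = 960, .y = 0,   .w = 960,  .h = 1080 },
};
static const layer_win_t scheme2[2] = {   /* 1920x540, spliced top / bottom       */
    { .x = 0,   .y = 0,   .w = 1920, .h = 540  },
    { .x = 0,   .y = 540, .w = 1920, .h = 540  },
};
static const layer_win_t scheme3[2] = {   /* 960x540, top-left / bottom-right     */
    { .x = 0,   .y = 0,   .w = 960,  .h = 540  },
    { .x = 960, .y = 540, .w = 960,  .h = 540  },
};

/* Hypothetical helper: pushes one window into the mixer, e.g. by writing the
 * layer's X/Y/width/height registers over AXI_Lite or by calling the
 * xv_mix driver from the BSP. */
void mixer_set_layer(int layer_id, const layer_win_t *win);
```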

4. Introduction to vivado project

PL side FPGA logic design

Development board FPGA model: Xilinx Zynq7020 xc7z020clg400-2;
Development environment: Vivado 2019.1;
Input: OV5640 camera, resolution 1280x720;
Output: HDMI; the effective spliced video area is displayed within a 1080P output;
Project function: Xilinx Zynq-7000 series FPGA multi-channel video processing: image scaling + video splicing display;
The project block design (BD) is as follows:
[Figure: Vivado block design]
The project code structure is as follows:
[Figure: project code hierarchy]
The resource consumption and power consumption of the project are as follows:
[Figure: resource utilization and power estimate]

PS side SDK software design

The PS-side SDK software project code structure is as follows:
[Figure: SDK project code structure]
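Based on the block design described earlier, the SDK main flow presumably initializes the chain in data-flow order. The skeleton below is my reading of that flow, using the hypothetical helpers sketched in the previous sections rather than the project's actual main():

```c
/* Sketch of the expected SDK bring-up order (hypothetical helper names). */
int main(void)
{
    ov5640_init_720p();          /* 1. configure the camera over I2C to 1280x720@30Hz */
    scalers_config_scheme1();    /* 2. set the two HLS scalers' input/output sizes    */
    vdma_start_s2mm();           /* 3. start both VDMAs as two-frame frame buffers    */
                                 /*    (the MM2S read side is started the same way)   */
    /* 4. configure the Video Mixer layer windows for the chosen scheme               */
    /* 5. configure the Video Timing Controller for the 1920x1080 output timing       */
    /* 6. video now flows continuously in hardware; the CPU idles afterwards          */
    while (1)
        ;
    return 0;
}
```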
The main function selects one of 3 different image scaling and splicing schemes through the following 3 macro definitions; the code is as follows:
[Figure: scheme-selection macro definitions in the SDK code]
The details of the 3 image scaling and splicing schemes are as follows:

Scheme 1: input OV5640 at 1280x720; output resolution 960x1080, with the two videos spliced side by side (left and right) on the output screen;
Scheme 2: input OV5640 at 1280x720; output resolution 1920x540, with the two videos spliced top and bottom on the output screen;
Scheme 3: input OV5640 at 1280x720; output resolution 960x540, with the two videos spliced at the top-left and bottom-right corners of the output screen;

Based on the selected macro, the main function performs the corresponding image scaling configuration and prints related information; the code is as follows:
[Figure: scheme-dependent configuration in main()]
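As a hedged sketch of what such macro-driven selection typically looks like (the macro names and derived values here are illustrative, not copied from the project's SDK code):

```c
#include "xil_printf.h"

/* Pick exactly one scheme (illustrative macro names, not the project's). */
#define SCHEME_1              /* 960x1080, spliced left/right  */
/* #define SCHEME_2 */        /* 1920x540, spliced top/bottom  */
/* #define SCHEME_3 */        /* 960x540, diagonal corners     */

#if defined(SCHEME_1)
  #define OUT_W 960
  #define OUT_H 1080
  #define LAYER1_X 960        /* second layer to the right of the first  */
  #define LAYER1_Y 0
#elif defined(SCHEME_2)
  #define OUT_W 1920
  #define OUT_H 540
  #define LAYER1_X 0
  #define LAYER1_Y 540        /* second layer below the first            */
#elif defined(SCHEME_3)
  #define OUT_W 960
  #define OUT_H 540
  #define LAYER1_X 960        /* second layer at the bottom-right corner */
  #define LAYER1_Y 540
#endif

/* The scaler and mixer setup then consume these values, e.g.:
 *   scaler_config(base, 1280, 720, OUT_W, OUT_H);
 *   mixer_set_layer(1, &window_at(LAYER1_X, LAYER1_Y, OUT_W, OUT_H));
 */
void print_selected_scheme(void)
{
    xil_printf("spliced output: %dx%d per channel at (%d,%d)\r\n",
               OUT_W, OUT_H, LAYER1_X, LAYER1_Y);
}
```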

5. Project transplantation instructions

Vivado version inconsistency handling

1: If your Vivado version is the same as the Vivado version of this project, open the project directly;
2: If your Vivado version is lower than the Vivado version of this project, you need to open the project and then click File -> Save As; however, this method is not safe. The safest way is to upgrade your Vivado to the version of this project or higher;
[Figure: File -> Save As]
3: If your Vivado version is higher than the Vivado version of this project, the solution is as follows:
[Figure: opening the project with a newer Vivado version]
After opening the project, you will find that the IPs are locked, as follows:
[Figure: locked IP cores]
At this point the IPs need to be upgraded; the operation is as follows:
[Figure: Report IP Status]
[Figure: Upgrade Selected IPs]

FPGA model inconsistency handling

If your FPGA model is different from mine, you need to change the FPGA part; the operation is as follows:
[Figure: Settings -> Project device]
[Figure: selecting the new FPGA part]
[Figure: confirming the part change]
After changing the FPGA part, you also need to upgrade the IP cores; the IP upgrade procedure was described above;

Other things to note

1: Since the DDR on every board is not necessarily identical, the DDR controller configuration (the MIG IP in a pure-FPGA design, or the DDR settings of this project) needs to match your own schematic; you can even delete the original configuration, then re-add and reconfigure it;
2: Modify the pin constraints according to your own schematic; this is done in the XDC file;
3: When porting a pure-FPGA design to Zynq, the Zynq processing system must be added to the project;

6. Board debugging, verification and demonstration

Preparation

Zynq7000 series development board;
OV5640 camera;
HDMI monitor or LCD screen; the LCD I used is a 4.3-inch 800x480 panel;

Output static presentation

OV5640 input resolution 1280x720, HDMI output resolution 960x1080:
[Figure: scheme 1, left/right splicing]
OV5640 input resolution 1280x720, HDMI output resolution 1920x540:
[Figure: scheme 2, top/bottom splicing]
OV5640 input resolution 1280x720, HDMI output resolution 960x540:
[Figure: scheme 3, top-left/bottom-right splicing]

Output dynamic demonstration

A short video was recorded; the dynamic output demonstration is as follows:

[Video: scaling and splicing demonstration]

7. Benefits: Obtain project source code

Benefit: obtaining the engineering code
The code is too large to send by email; it will be shared via a netdisk link.
How to obtain it: send me a private message, or use the WeChat business card at the end of the article.
The netdisk information is as follows:
[Figure: netdisk link information]

Origin blog.csdn.net/qq_41667729/article/details/134624480