The most detailed explanation of Zynq GTX on the web: Aurora 8b/10b codec and OV5640 camera video transmission, with 2 sets of engineering source code and technical support

1. Introduction

"If you have never worked with GT resources, you can hardly claim to have really worked with FPGAs." A well-known author on CSDN said this, and I firmly believe it...
GT resources are an important selling point of Xilinx FPGAs, and they are also the foundation of high-speed interfaces. Whether it is PCIe, SATA, an Ethernet MAC, or anything else, GT resources are required for high-speed serialization and deserialization of the data. Different Xilinx FPGA families carry different GT resource types: the low-end Artix-7 has GTP, Kintex-7 has GTX, Virtex-7 has GTH, and the higher-end UltraScale+ families have GTY, and so on. Their line rates get ever higher, and their application scenarios ever more high-end...

This article uses the GTX resources of Xilinx's Zynq7100 FPGA for a video transmission experiment. There are two types of video source, corresponding to whether or not the developer has a camera: one is a cheap OV5640 camera module; if you do not have a camera, or your development board has no camera interface, you can use the dynamic color bars generated inside the code to simulate camera video. The video source is selected through a `define macro at the top of the code, and the OV5640 is the default. The design calls the GTX IP core, implements the video encoding/decoding module and the data alignment module in Verilog, and uses the two SFP optical ports on the development board to transmit and receive the data. This blog provides two sets of Vivado project source code; the difference between them is whether one SFP optical port or two SFP optical ports are used for sending and receiving. This blog describes the design of FPGA GTX video transmission in detail. The engineering code can be fully synthesized and debugged on the board and can be ported directly; it is suitable for project development by undergraduates and graduate students, for on-the-job engineers looking to improve their skills, and for high-speed interface or image processing applications in medical, military, and other industries. Complete, working engineering source code and technical support are provided; the way to obtain the source code and technical support is at the end of the article, so please be patient and read to the end.

Disclaimer

This project and its source code were partly written by myself and partly obtained from public channels on the Internet (including CSDN, the Xilinx official website, the Altera official website, etc.). The project and its source code are limited to readers' and fans' personal study and research; commercial use is prohibited. Any legal issues caused by readers' or fans' own commercial use have nothing to do with this blog or the blogger, so please use it with caution...

2. The GT high-speed interface solutions I offer

My homepage has an FPGA GT high-speed interface column, which includes video transmission routines and PCIe transmission routines for GT resources such as GTP, GTX, GTH, and GTY. The GTP examples are built on Artix-7 series FPGA development boards, GTX on Kintex-7 or Zynq series boards, GTH on Kintex UltraScale or Virtex-7 boards, and GTY on UltraScale+ boards. The column address is below:
click to go directly

3. The most detailed interpretation of GTX on the web

The most detailed introduction to GTX is surely Xilinx's official "ug476_7Series_Transceivers", so we will use it as the basis for this interpretation.
I have put the PDF of "ug476_7Series_Transceivers" in the data package; the way to obtain it is at the end of the article.
The FPGA on the development board I used is a Xilinx Zynq7100 xc7z100ffg900-2. It has 8 channels of GTX resources, 2 of which are connected to the 2 SFP optical ports, and the transmit/receive rate of each channel ranges from 500 Mb/s to 10.3125 Gb/s. GTX transceivers support various serial transmission interfaces and protocols, such as the PCIe 1.1/2.0 interface, the 10 Gigabit Ethernet XAUI interface, OC-48, the Serial RapidIO interface, the SATA (Serial ATA) interface, the Serial Digital Interface (SDI), and so on.

GTX basic structure

Xilinx groups its serial high-speed transceivers into Quads: four serial high-speed transceivers plus one COMMON (QPLL) block form a Quad, and each serial high-speed transceiver is called a Channel. The figure below shows the schematic diagram of the four GTX transceivers of a Quad in a 7-series FPGA ("ug476_7Series_Transceivers", page 24):
[Figure: GTX Quad structure, ug476 page 24]
The specific internal block diagram of GTX is shown below. It consists of four GTXE2_CHANNEL primitives and one GTXE2_COMMON primitive. Each GTXE2_CHANNEL contains a transmit circuit (TX) and a receive circuit (RX), and the clock of a GTXE2_CHANNEL can come from either the CPLL or the QPLL, selectable in the IP configuration interface ("ug476_7Series_Transceivers", page 25):
[Figure: GTX Quad internal block diagram, ug476 page 25]

The logic circuit of each GTXE2_CHANNEL is shown in the figure below ("ug476_7Series_Transceivers", page 26):
[Figure: GTXE2_CHANNEL logic diagram, ug476 page 26]
The transmit side and receive side of a GTXE2_CHANNEL are functionally independent, and each consists of a PMA (Physical Media Attachment) sublayer and a PCS (Physical Coding Sublayer) sublayer. The PMA sublayer contains the high-speed serializer/deserializer (SerDes), pre-/post-emphasis, receive equalization, clock generation, and clock recovery circuits. The PCS sublayer contains the 8B/10B encoder/decoder, buffers, channel bonding, clock correction, and similar circuits.
There is not much point in saying more here, because without a few large projects behind you, you will not really understand what is inside. For first-time users, or users who want to get going quickly, most of your energy should be focused on calling and using the IP core, and that is also what I will focus on below.

GTX send and receive processing flow

First, the user logic data is encoded by 8B/10B and enters a transmit buffer (Phase Adjust FIFO). This buffer mainly provides clock isolation between the PMA and PCS clock domains and solves the problem of rate matching and phase difference between the two. The data then goes through parallel-to-serial conversion (PISO) in the high-speed SerDes, and, if necessary, pre-emphasis (TX Pre-emphasis) and post-emphasis can be applied. It is worth mentioning that if you accidentally swap the TXP and TXN differential pins during PCB design, you can compensate for this mistake through polarity control (Polarity). The receive side processes data in the opposite order and has many similarities, so I will not go into detail here; note, however, that the elastic buffer on the RX side provides the clock correction and channel bonding functions. Every function point here could fill a paper or even a book, so for now you only need to know the concepts and apply them in a concrete project. The same advice applies: first-time or fast-track users should focus their energy on calling and using the IP core.

Reference clock for GTX

The GTX Quad has two differential reference clock input pin pairs (MGTREFCLK0P/N and MGTREFCLK1P/N), and the user can select either one as the reference clock source of the GTX. On a typical development board a fixed-frequency GTX reference clock (148.5 MHz on some boards; 125 MHz on mine) is connected to MGTREFCLK0. The differential reference clock is converted into a single-ended clock signal by an IBUFDS buffer and fed into the QPLL or CPLL of the GTXE2_COMMON to generate the clocks required by the TX and RX circuits. If the TX and RX line rates are the same, the TX and RX circuits can use the clock generated by the same PLL; if they differ, clocks generated by different PLLs must be used. The reference clock logic in Xilinx's GT example design is already done very well, and we do not need to modify it when we call the IP. The reference clock structure of GTX is shown below ("ug476_7Series_Transceivers", page 31):
[Figure: GTX reference clock structure, ug476 page 31]
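For reference, here is a minimal sketch of how the differential reference clock is typically buffered (IBUFDS_GTE2 is the dedicated 7-series GT reference clock buffer; the wrapper module and signal names are my own):

module gtx_refclk_buf (
    input  wire mgtrefclk0_p,  // differential pair from the board oscillator
    input  wire mgtrefclk0_n,
    output wire gtx_refclk     // single-ended clock to the QPLL/CPLL
);
    IBUFDS_GTE2 u_ibufds_gte2 (
        .O     (gtx_refclk),
        .ODIV2 (),             // divided-by-2 output, unused here
        .CEB   (1'b0),         // active-low clock enable: always on
        .I     (mgtrefclk0_p),
        .IB    (mgtrefclk0_n)
    );
endmodule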

GTX transmit interface

Pages 107 to 165 of "ug476_7Series_Transceivers" introduce the transmit flow in detail. Most of that content can be ignored by users, because the manual mainly explains Xilinx's own design reasoning, and the operable interface left to the user is not large. With that in mind, we focus on the transmit-side ports of the instantiated GTX that the user actually needs:
[Figure: GTX transmit interface, ug476]

Users only need to care about the clock and data signals of the transmit interface. This part of the GTX instantiation looks as follows:
[Figure: GTX TX instantiation ports]
[Figure: GTX TX instantiation ports, continued]
In the code I have already re-bound these signals and brought them up to the top level of the module; the code is as follows:
[Figure: top-level TX port code]
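As a rough illustration (the signal names below are mine, not the exact port names generated by the IP), the user-facing TX side boils down to a parallel data word plus a per-byte K-character flag, synchronous to the TX user clock:

module gtx_tx_user_sketch (
    input  wire        txusrclk2,      // TX user clock from the GTX
    input  wire        send_idle,      // 1 = send an idle/K word
    input  wire [31:0] video_payload,  // packed video data
    output reg  [31:0] txdata,         // parallel data into the GTX
    output reg  [3:0]  txcharisk       // per-byte K-character flags
);
    always @(posedge txusrclk2) begin
        if (send_idle) begin
            txdata    <= 32'h00_00_00_BC;  // K28.5 (0xBC) in the lowest byte
            txcharisk <= 4'b0001;          // lowest byte is a K character
        end else begin
            txdata    <= video_payload;    // ordinary data word
            txcharisk <= 4'b0000;          // no K characters
        end
    end
endmodule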

GTX receiving interface

Pages 167 to 295 of "ug476_7Series_Transceivers" introduce the receive flow in detail. Most of that content can likewise be ignored by users, because the manual mainly explains Xilinx's own design reasoning, and the operable interface left to the user is not large. With that in mind, we focus on the receive-side ports of the instantiated GTX that the user actually needs:
[Figure: GTX receive interface, ug476]
Users only need to care about the clock and data signals of the receive interface. This part of the GTX instantiation looks as follows:
[Figure: GTX RX instantiation ports]
[Figure: GTX RX instantiation ports, continued]
In the code I have already re-bound these signals and brought them up to the top level of the module; the code is as follows:
[Figure: top-level RX port code]
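Mirroring the TX side, here is a rough sketch of the user-facing RX side (again, names are illustrative, not the exact IP port names): the GTX presents a decoded parallel word, per-byte K-character flags, and per-byte error flags, all synchronous to the recovered RX user clock:

module gtx_rx_user_sketch (
    input  wire        rxusrclk2,     // recovered RX user clock
    input  wire [31:0] rxdata,        // decoded parallel data
    input  wire [3:0]  rxcharisk,     // per-byte K-character flags
    input  wire [3:0]  rxdisperr,     // per-byte disparity errors
    input  wire [3:0]  rxnotintable,  // per-byte invalid-code errors
    output reg         rx_word_ok,    // word decoded without 8B/10B errors
    output reg         rx_is_com      // lowest byte is the K28.5 comma
);
    always @(posedge rxusrclk2) begin
        rx_word_ok <= (rxdisperr == 4'b0000) && (rxnotintable == 4'b0000);
        rx_is_com  <= rxcharisk[0] && (rxdata[7:0] == 8'hBC);
    end
endmodule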

Calling and using the GTX IP core

[Figure: the GTX transceiver wizard in the Vivado IP catalog]
Unlike the tutorials of other bloggers on the Internet, I personally like to include the shared logic inside the core, as shown in the figure below:
[Figure: shared logic option in the IP configuration]
This choice has two advantages: one is that it makes DRP rate changes convenient, and the other is that it makes modifying the IP core convenient, because after modifying the IP you can simply recompile. There is no longer any need to open the example project and copy a pile of its files into your own project. Does playing with a GTX really need to be that complicated?
[Figure: IP configuration page with numbered labels]
Here is an explanation of the labels in the figure above:
1: Line rate. Choose it according to your own project requirements; the GTX range is 0.5 to 10.3125 Gb/s. Since my project is video transmission, any rate within the GTX range will work; in this example it is 5 Gb/s.
2: Reference clock. This depends on your schematic; it may be 80 MHz, 125 MHz, 148.5 MHz, 156.25 MHz, etc. On my development board it is 125 MHz.
4: GTX Quad binding. This is very important, and there are two references for the binding: your development board schematic and the official "ug476_7Series_Transceivers". The GTX resources are officially divided into groups (Quads) by bank. Since GT resources are dedicated resources of Xilinx FPGAs and occupy dedicated banks, their pins are also dedicated. So how do the GTX Quads correspond to the pins? "ug476_7Series_Transceivers" explains this.
The schematic diagram of my board is as follows:
[Figure: board schematic, SFP/GTX section]
Select the 8b/10b codec with an external data width of 32 bits, as follows:
[Figure: IP configuration, 8b/10b encoding and 32-bit data width]
Next comes K-code detection:
[Figure: IP configuration, comma detection]
K28.5 is selected here, the so-called COM code, whose hexadecimal value is 0xBC. It has many uses: it can represent an idle or out-of-sequence symbol, and it can serve as a data misalignment mark, which is how it is used here. The K-code definitions of the 8b/10b protocol are as follows:
[Figure: 8b/10b K-code definition table]
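For quick reference, the standard 8b/10b control (K) characters and their unencoded hex values can be written down as Verilog constants (a convenience sketch; this design only relies on K28.5):

// Standard 8b/10b control characters (hex value of the unencoded byte).
localparam [7:0] K28_0 = 8'h1C;
localparam [7:0] K28_1 = 8'h3C;
localparam [7:0] K28_2 = 8'h5C;
localparam [7:0] K28_3 = 8'h7C;
localparam [7:0] K28_4 = 8'h9C;
localparam [7:0] K28_5 = 8'hBC;  // the COM/comma code used for alignment here
localparam [7:0] K28_6 = 8'hDC;
localparam [7:0] K28_7 = 8'hFC;
localparam [7:0] K23_7 = 8'hF7;
localparam [7:0] K27_7 = 8'hFB;
localparam [7:0] K29_7 = 8'hFD;
localparam [7:0] K30_7 = 8'hFE;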
Next comes clock correction, which corresponds to the elastic buffer in the receive part of the GTX:
[Figure: IP configuration, clock correction]
The concept here is clock frequency offset, which matters especially when the transmitter and receiver clocks come from different sources. The frequency offset is set to 100 ppm, and the transmitter is required to send a 4-byte clock-correction sequence every 5000 data packets. Based on this 4-byte sequence and the fill level of the buffer, the receiver's elastic buffer decides whether to delete or insert a byte of the sequence, in order to keep the data stream from transmitter to receiver stable and eliminate the effect of the clock frequency offset. (As a sanity check: a 100 ppm offset means the two clocks drift apart by at most one byte in every 10,000 bytes, so a correction opportunity every 5000 words leaves the elastic buffer ample margin.)

4. Design idea and framework

This blog provides 2 sets of Vivado project source code. The difference between the 2 projects is whether 1 SFP optical port or 2 SFP optical ports are used for sending and receiving. Using 1 SFP optical port means connecting the RX and TX of the same SFP with a fiber; using 2 SFP optical ports means connecting the RX of one SFP to the TX of the other with a fiber. The design framework is as follows.
The block diagram using 2 SFP optical ports:
[Figure: block diagram, 2 SFP optical ports]
The block diagram using 1 SFP optical port:
[Figure: block diagram, 1 SFP optical port]

Video source selection

There are two types of video source, corresponding to whether or not the developer has a camera. If you have a camera, or your development board has a camera interface, use the camera as the video input source; what I use here is a cheap OV5640 camera module. If you do not have a camera, or your board has no camera interface, you can use the dynamic color bars generated inside the code to simulate camera video; the dynamic color bars are a moving picture and can fully stand in for real video. By default the OV5640 is the video source. The video source is selected through the `define COLOR_IN macro at the top of the code:
[Figure: `define COLOR_IN macro in the code]
When `define COLOR_IN is left uncommented, the input video source is the OV5640 camera.
[Figure: video source selection code]
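A minimal sketch of how such a macro switch typically looks (the signal names are my own illustration; only the COLOR_IN macro name comes from the project):

// Select the video source at compile time.
`define COLOR_IN  // defined: OV5640 camera; commented out: dynamic color bars

`ifdef COLOR_IN
    // camera path: pixels captured from the OV5640 DVP interface
    assign video_de   = cam_de;
    assign video_data = cam_data;
`else
    // test path: internally generated dynamic color bars
    assign video_de   = bar_de;
    assign video_data = bar_data;
`endif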


OV5640 camera configuration and acquisition

The OV5640 camera must be configured over I2C before it can be used, and the video data from its DVP interface must be captured into RGB565 or RGB888 format. Both parts are implemented as Verilog code modules; the code location is as follows:
[Figure: code location]
The camera is configured for a resolution of 1280x720, as follows:
[Figure: OV5640 1280x720 configuration]
The camera acquisition module supports video output in both RGB565 and RGB888 formats, selectable by a parameter, as follows:
[Figure: RGB_TYPE parameter]
RGB_TYPE = 0 outputs the original RGB565 format;
RGB_TYPE = 1 outputs the RGB888 format;
this design uses the RGB565 format.
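Where RGB888 output is selected (RGB_TYPE = 1), a common way to widen RGB565 is the bit-replication trick below (a sketch of the usual approach, not necessarily the project's exact code):

module rgb565_to_888 (
    input  wire [15:0] rgb565,
    output wire [23:0] rgb888
);
    // Replicate the MSBs into the vacated low bits so full-scale 5/6-bit
    // values map to full-scale 8-bit values.
    assign rgb888 = { rgb565[15:11], rgb565[15:13],   // R: 5 -> 8 bits
                      rgb565[10:5],  rgb565[10:9],    // G: 6 -> 8 bits
                      rgb565[4:0],   rgb565[4:2] };   // B: 5 -> 8 bits
endmodule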

dynamic color bar

The dynamic color bar module can be configured for different video resolutions; the border width of the video, the size of the moving square, and its moving speed can all be set through parameters. Here I configure the resolution as 1280x720. The code location of the dynamic color bar module, its top-level interface, and its instantiation are as follows:
[Figure: code location]
[Figure: module top-level interface]
[Figure: module instantiation]
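To give a feel for the parameterization, a hypothetical interface for this kind of generator might look like the following (all parameter and port names below are my own invention, not the project's):

module color_bar_gen #(
    parameter H_ACTIVE  = 1280,  // active pixels per line
    parameter V_ACTIVE  = 720,   // active lines per frame
    parameter BORDER_W  = 8,     // border width in pixels
    parameter SQUARE_W  = 100,   // moving square size in pixels
    parameter MOVE_STEP = 2      // pixels the square moves per frame
) (
    input  wire        pix_clk,
    input  wire        rst_n,
    output wire        de,       // data enable (active video)
    output wire        hsync,
    output wire        vsync,
    output wire [15:0] rgb565    // pixel data, RGB565
);
    // ... counters, bar pattern, and square-position logic ...
endmodule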

video packet

Since the video must be sent and received through the Aurora 8b/10b protocol over the GTX, the data has to be packetized to fit the Aurora 8b/10b protocol standard. The code location of the video packetization module is as follows:
[Figure: code location]
First, the 16-bit video is written into a FIFO; when a full line has been buffered, it is read out of the FIFO and sent to the GTX for transmission. Before that, each frame of video must be marked with numbered commands (also called instructions): when packing, the transmitter frames the data with fixed commands, and the receiver uses these commands to restore the video field sync signal and video valid signal. When the rising edge of a frame's field sync signal arrives, frame-start command 0 is sent; when its falling edge arrives, frame-start command 1 is sent; during the video blanking period, invalid-data command 0 and invalid-data command 1 are sent. When the video valid signal arrives, each video line is numbered: a line-start command is sent first, then the current line number, and after the line has been sent, a line-end command. After a whole frame has been sent, frame-end command 0 is sent first, then frame-end command 1. At that point one frame of video has been transmitted. This module is not easy to understand, so I added detailed Chinese comments in the code; note that, to prevent the Chinese comments from turning into garbled characters, please open the code with the Notepad++ editor. The command definitions are as follows:
[Figure: command definitions]
The commands can be changed arbitrarily, but the lowest byte must be 0xBC.
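As an illustration of the scheme (the upper bytes below are made up; only the 0xBC low byte is mandated by the design, and the real values live in the project source):

// Hypothetical command words: the low byte is always K28.5 (0xBC) so the
// receiver can detect commands via rxcharisk; upper bytes identify them.
localparam [31:0] CMD_FRAME_START0 = 32'h01_00_00_BC;
localparam [31:0] CMD_FRAME_START1 = 32'h02_00_00_BC;
localparam [31:0] CMD_LINE_START   = 32'h03_00_00_BC;
localparam [31:0] CMD_LINE_END     = 32'h04_00_00_BC;
localparam [31:0] CMD_FRAME_END0   = 32'h05_00_00_BC;
localparam [31:0] CMD_FRAME_END1   = 32'h06_00_00_BC;
localparam [31:0] CMD_INVALID0     = 32'h07_00_00_BC;
localparam [31:0] CMD_INVALID1     = 32'h08_00_00_BC;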

GTX aurora 8b/10b

This module calls the GTX to perform the Aurora 8b/10b encoding and decoding of the data. I have already given a detailed overview of the GTX above, so I will not repeat it here; the code location is as follows:
[Figure: code location]

data alignment

Since Aurora 8b/10b transmission and reception over GT resources inherently suffers from data misalignment, the received, decoded data must be realigned. The code location of the data alignment module is as follows:
[Figure: code location]
The K-code control word format I defined is XX_XX_XX_BC, so a rx_ctrl signal is used to indicate which byte of the data is the K-code COM symbol:
rx_ctrl = 4'b0000 means the 4-byte data contains no COM code;
rx_ctrl = 4'b0001 means bits [7:0] of the 4-byte data are the COM code;
rx_ctrl = 4'b0010 means bits [15:8] of the 4-byte data are the COM code;
rx_ctrl = 4'b0100 means bits [23:16] of the 4-byte data are the COM code;
rx_ctrl = 4'b1000 means bits [31:24] of the 4-byte data are the COM code;
Based on this, the data is aligned whenever a K code is received: the data is delayed by one beat and recombined with the newly arriving data. This is a basic FPGA operation, so I will not elaborate further here.
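A minimal sketch of this delay-and-recombine alignment, assuming the COM byte should land in bits [7:0] of the aligned word (an illustration of the technique, not the project's exact code):

module rx_align_sketch (
    input  wire        rx_clk,
    input  wire [31:0] rx_data,      // decoded data from the GTX
    input  wire [3:0]  rx_ctrl,      // per-byte COM-code flags
    output reg  [31:0] data_aligned  // realigned data, valid one beat later
);
    reg [31:0] data_d;     // one-beat delayed word
    reg [1:0]  shift_sel;  // latched byte offset of the COM code

    always @(posedge rx_clk) begin
        data_d <= rx_data;

        case (rx_ctrl)  // latch the offset each time a COM code is seen
            4'b0001: shift_sel <= 2'd0;
            4'b0010: shift_sel <= 2'd1;
            4'b0100: shift_sel <= 2'd2;
            4'b1000: shift_sel <= 2'd3;
            default: ;  // no COM this beat: keep the previous offset
        endcase

        case (shift_sel)  // recombine the old word with the new one
            2'd0: data_aligned <= data_d;  // already aligned
            2'd1: data_aligned <= {rx_data[ 7:0], data_d[31: 8]};
            2'd2: data_aligned <= {rx_data[15:0], data_d[31:16]};
            2'd3: data_aligned <= {rx_data[23:0], data_d[31:24]};
        endcase
    end
endmodule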

Video data unpacking

Data unpacking is the reverse process of packetization; the code location is as follows:
[Figure: code location]
When unpacking, the receiver restores the video field sync signal and video valid signal according to the fixed commands; these signals are important for the subsequent image buffering.
At this point the data path into and out of the GTX is complete. The block diagram of the whole flow, as implemented in the code, is as follows:
[Figure: overall data path block diagram]

image cache

Here I used a Zynq7100 development board, which makes this part different from the routines of other bloggers on the whole network: they basically all use an image-buffer architecture built around VDMA. That certainly works, but you cannot see the source code of that pile of IPs, and they are very annoying to use: a single wrong pin can stop an IP from working, and these IPs must be configured through the SDK, which is a headache for brothers without an embedded C programming background. I look up at the starry sky and ask: we just want to have some fun, why does playing with FPGAs have to be this hard?
As things stand, VDMA has the following inconveniences:

1: The video must be converted to AXI4-Stream. Whether you use a FIFO or the official Video In to AXI4-Stream IP, this undoubtedly increases resource consumption, which makes it unsuitable for resource-constrained FPGAs, and it also raises the difficulty of FPGA development, which is daunting for newcomers. On top of that, the Video In to AXI4-Stream IP is a black box, so troubleshooting it is far too cumbersome.
2: SDK configuration is required. Just to run a VDMA you have to open the SDK and call official library functions to do a pile of configuration, which is undoubtedly annoying. Besides, some brothers who do hardware have C skills at about my level and simply cannot handle embedded C; I just want to do my FPGA work in peace, is that so hard? Haha...
3: The VDMA output side also needs the Video Timing Controller and AXI4-Stream to Video Out IPs to convert the AXI4-Stream video back to VGA timing, which is again black-box operation, and troubleshooting is too cumbersome.

Because of this, when I do image buffering on Zynq I do not use VDMA but FDMA. Replacing VDMA with FDMA on Zynq has the following advantages:

1: There is no need to convert the input video to AXI4-Stream, which saves resources and lowers the development difficulty;
2: No SDK configuration is needed and no embedded C knowledge is required, which is good news for pure FPGA developers;
3: The source code is fully visible, so there is no black-box problem.

For how to use FDMA on Zynq, please refer to my earlier blog; blog address: click to go directly

video output

After the video is read out of FDMA, it passes through the VGA timing module and the HDMI transmit module and is then output to the display. The code location is as follows:
[Figure: code location]
The VGA timing is configured as 1280x720; the HDMI transmit module is handwritten Verilog code and can be used in FPGA HDMI transmit applications. For this module, please refer to my earlier blog; blog address: click to go directly
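For reference, the standard CEA-861 timing for 1280x720@60Hz that such a VGA timing module generates looks like this when written as Verilog parameters (the values are the standard ones; the parameter names are mine):

// 1280x720@60Hz: 74.25 MHz pixel clock, 1650 x 750 total.
localparam H_ACTIVE = 1280, H_FP = 110, H_SYNC = 40, H_BP = 220;  // 1650 total
localparam V_ACTIVE = 720,  V_FP = 5,   V_SYNC = 5,  V_BP = 20;   // 750 total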

5. Vivado project 1 –> 2-way SFP transmission

PL (FPGA) side design

Development board FPGA model: Xilinx Zynq7100 xc7z100ffg900-2;
Development environment: Vivado 2019.1;
Input: OV5640 camera or dynamic color bars, resolution 1280x720@60Hz;
Output: HDMI display;
Application: 2-way SFP optical port GTX Aurora 8b/10b codec video transmission;
The project Block Design is as follows:
[Figure: Block Design]
The project code structure is as follows:
[Figure: project code structure]
The FPGA resource consumption and power estimate after synthesis and implementation are as follows:
[Figure: resource utilization and power report]

PS side SDK design

Since there is no VDMA IP to configure, the SDK only needs to perform the I2C software configuration of the OV5640. The SDK code structure is as follows:
[Figure: SDK code structure]
As a result, the SDK main function becomes particularly streamlined, as follows:
[Figure: SDK main function]

6. Vivado project 2 –> 1-way SFP transmission

PL (FPGA) side design

Development board FPGA model: Xilinx Zynq7100 xc7z100ffg900-2;
Development environment: Vivado 2019.1;
Input: OV5640 camera or dynamic color bars, resolution 1280x720@60Hz;
Output: HDMI display;
Application: 1-way SFP optical port GTX Aurora 8b/10b codec video transmission;
The project Block Design is as follows:
[Figure: Block Design]
The project code structure is as follows:
[Figure: project code structure]
The FPGA resource consumption and power estimate after synthesis and implementation are as follows:
[Figure: resource utilization and power report]

PS side SDK design

Since there is no VDMA IP to configure, the SDK only needs to perform the I2C software configuration of the OV5640. The SDK code structure is as follows:
[Figure: SDK code structure]
As a result, the SDK main function becomes particularly streamlined, as follows:
[Figure: SDK main function]

7. Board debugging and verification

fiber optic connection

Project 1: the fiber connection for 2-way SFP transmission is as follows:
[Figure: 2-way SFP fiber connection]
Project 2: the fiber connection for 1-way SFP transmission is as follows:
[Figure: 1-way SFP fiber connection]

static demo

The following takes project 1 (2-way SFP transmission) as an example to show the output of the OV5640 camera.
With the GTX running at a 5 Gb/s line rate, the output is as follows:
[Figure: OV5640 camera output at 5 Gb/s line rate]
The following takes project 1 (2-way SFP transmission) as an example to show the output of the dynamic color bars.
With the GTX running at a 5 Gb/s line rate, the output is as follows:
[Figure: dynamic color bar output at 5 Gb/s line rate]

dynamic demo

The following takes project 1 (2-way SFP transmission) as an example to show a demonstration video of the OV5640 camera output:

Zynq7100-GTX-OV5640


The following takes project 1 (2-way SFP transmission) as an example to show a demonstration video of the dynamic color bar output:

Zynq7100-GTX-COLOR

8. Benefits: obtaining the engineering code

The code is too large to send by email; it is delivered via a Baidu netdisk link.
How to obtain it: send me a private message, or use the WeChat business card at the end of the article.
The netdisk information is as follows:
[Figure: netdisk information]


Origin: blog.csdn.net/qq_41667729/article/details/132516283