The most detailed explanation on the web of FPGA GTX, the Aurora 8B/10B protocol, and HDMI video transmission, with 2 sets of engineering source code and technical support

1. Introduction

"I would be ashamed to say I have worked with FPGAs if I have never worked with GT resources." That is a line from a well-known blogger on CSDN, and I firmly believe it...
GT resources are an important selling point of Xilinx FPGAs and the foundation of their high-speed interfaces. Whether it is PCIe, SATA, the Ethernet MAC, or similar, high-speed serialization and deserialization of data requires GT resources. Different Xilinx FPGA families carry different GT resource types: the low-end Artix-7 has GTP, Kintex-7 has GTX, Virtex-7 has GTH, and the higher-end UltraScale+ families have GTY, and so on. Their speeds keep increasing and their application scenarios keep moving up-market...

This article uses the GTX resources of a Xilinx Kintex-7 FPGA for video transmission experiments. There are two kinds of video source, depending on whether the developer has a video input available. One option is to use a laptop to generate HDMI video, with an IT6802 decoding the incoming HDMI into RGB for the FPGA. If you have no HDMI source, or your development board has no HDMI input interface, you can use the dynamic color bar generated inside the code to simulate camera video. The video source is selected through a `define macro at the top of the code; by default the HDMI input is used as the video source. The design calls the GTX IP core, uses Verilog to implement the video data packing/unpacking modules and the data alignment module, and uses the two SFP optical ports on the development board to transmit and receive the data. This blog provides two sets of Vivado project source code; the difference between them is whether one SFP optical port or two SFP optical ports are used for transmission and reception. This blog describes the design of FPGA GTX video transmission in detail. The projects can be ported directly, are suitable for undergraduate and postgraduate project development as well as for working engineers who want to improve their skills, and can be applied to high-speed interfaces or image processing in the medical, military, and other industries. Complete and working engineering source code and technical support are provided;
the way to obtain the project source code and technical support is at the end of the article, so please be patient and read to the end.

Disclaimer

Parts of this project and its source code were written by myself, and parts were obtained from public channels on the Internet (including CSDN, the Xilinx website, the Altera website, etc.). The project and its source code are limited to the personal study and research of readers or fans; commercial use is prohibited. If legal issues arise because a reader or fan uses it commercially, that has nothing to do with this blog or the blogger, so please use it with caution...

2. The GT high-speed interface solutions I already have

My homepage has an FPGA GT high-speed interface column, which includes video transmission routines and PCIe transmission routines for GT resources such as GTP, GTX, GTH, and GTY. The GTP examples are built on Artix-7 series FPGA development boards, the GTX examples on Kintex-7 or ZYNQ series boards, the GTH examples on Kintex UltraScale or Virtex-7 boards, and the GTY examples on UltraScale+ boards. The column address is below:
click to go directly

3. The most detailed interpretation of GTX on the web

The most detailed introduction to GTX is, without question, Xilinx's official "ug476_7Series_Transceivers", so we will use it as the basis for this interpretation.
I have put the PDF of "ug476_7Series_Transceivers" in the data package; the way to obtain it is at the end of the article.
The FPGA on my development board is a Xilinx Kintex-7 xc7k325tffg676-2. It has 8 lanes of GTX resources, 2 of which are connected to 2 SFP optical ports, and each lane can transmit and receive at line rates from 500 Mb/s to 10.3125 Gb/s. GTX transceivers support many serial transmission interfaces and protocols, such as PCIe 1.1/2.0, the 10-Gigabit Ethernet XAUI interface, OC-48, Serial RapidIO, SATA (Serial ATA), and the Serial Digital Interface (SDI).

GTX basic structure

Xilinx groups its serial high-speed transceivers into Quads: four transceivers plus one COMMON block (the QPLL) form a Quad, and each transceiver is called a Channel. The figure below is the schematic of four GTX transceivers in a Kintex-7 FPGA ("ug476_7Series_Transceivers", page 24):
(figure)
The internal block diagram of GTX is shown below. It consists of four GTXE2_CHANNEL primitives and one GTXE2_COMMON primitive. Each GTXE2_CHANNEL contains a transmit circuit (TX) and a receive circuit (RX), and its clock can come from either the CPLL or the QPLL, which is selectable in the IP configuration interface ("ug476_7Series_Transceivers", page 25):
(figure)

The logic of each GTXE2_CHANNEL is shown in the figure below ("ug476_7Series_Transceivers", page 26):
(figure)
The transmit side and receive side of a GTXE2_CHANNEL are functionally independent, and each consists of two sublayers: the PMA (Physical Media Attachment) and the PCS (Physical Coding Sublayer). The PMA sublayer contains the high-speed serializer/deserializer (SerDes), pre-/post-emphasis, receive equalization, the clock generator, and the clock recovery circuit. The PCS sublayer contains the 8B/10B encoder/decoder, buffers, channel bonding, and clock correction circuits.
There is not much point in going deeper here, because without having done a few sizable projects you will not really understand what is inside. First-time or fast-track users should focus their energy on calling and using the IP core, which is also what I will concentrate on later.

GTX send and receive processing flow

First, the user logic data is 8B/10B encoded and then enters a transmit buffer (Phase Adjust FIFO). This buffer mainly provides clock isolation between the PMA and PCS clock domains and solves the rate matching and phase difference problems between the two. The data then goes through parallel-to-serial conversion (PISO) in the high-speed SerDes, and if necessary pre-emphasis (TX Pre-emphasis) and post-emphasis can be applied. It is worth mentioning that if the TXP and TXN differential pins were accidentally swapped during PCB design, this can be compensated for through polarity control (Polarity). The receive path is the reverse of the transmit path and has many similarities, so I will not go into detail here; note, however, that the elastic buffer on the RX side provides the clock correction and channel bonding functions. Every function point here could fill a paper or even a book, so for now you only need to know the concepts and apply them in a concrete project. As said before: first-time or fast-track users should focus their energy on calling and using the IP core.

Reference clock for GTX

The GTX module has two differential reference clock input pairs (MGTREFCLK0P/N and MGTREFCLK1P/N), and the user can select either as the reference clock source of the GTX module. On the development board used here, a 148.5 MHz GTX reference clock is connected to MGTREFCLK0 as the GTX reference clock. The differential reference clock is converted into a single-ended clock by an IBUFDS module and fed to the QPLL or CPLL of GTXE2_COMMON to generate the clocks required by the TX and RX circuits. If the TX and RX line rates are the same, the TX and RX circuits can use the clock generated by the same PLL; if they are different, clocks generated by different PLLs must be used. The reference clock path in the GT example design provided by Xilinx is already done very well, and we do not need to modify it when we call it. The reference clock structure of GTX is as follows ("ug476_7Series_Transceivers", page 31):
(figure)
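As a minimal sketch (not the project's exact code, but the standard 7-series primitive documented by Xilinx), the differential-to-single-ended conversion mentioned above boils down to one IBUFDS_GTE2 instance; the module and signal names here are illustrative:

```verilog
// Minimal sketch: buffer the differential MGTREFCLK0 pins into a single-ended
// reference clock for the QPLL/CPLL.
module gtx_refclk_buf (
    input  wire mgtrefclk0_p,   // 148.5 MHz differential reference clock, P side
    input  wire mgtrefclk0_n,   // 148.5 MHz differential reference clock, N side
    output wire gtrefclk0       // feeds the GTXE2_COMMON / GTXE2_CHANNEL refclk input
);
    IBUFDS_GTE2 u_ibufds_gte2 (
        .O     (gtrefclk0),
        .ODIV2 (),              // optional divide-by-2 output, unused here
        .CEB   (1'b0),          // active-low clock enable, tied on
        .I     (mgtrefclk0_p),
        .IB    (mgtrefclk0_n)
    );
endmodule
```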

GTX transmit interface

Pages 107 to 165 of "ug476_7Series_Transceivers" describe the transmit path in detail. Most of that content can be ignored by users, because the manual mainly explains Xilinx's own design ideas and leaves the user only a small set of operable interfaces. With that in mind, we focus on the transmit-side interfaces the user actually needs when instantiating GTX:
(figure)

Users only need to care about the clock and data of the transmit interface. This part of the GTX instantiation module looks like this:
(figure)
(figure)
In the code I have re-bound these ports and brought them up to the top level of the module. The code looks like this:
(figure)
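As a hedged illustration of the point above (the signal names are mine, not copied from the provided projects): on the user side the transmit interface is nothing more than a 32-bit data word plus a 4-bit K-character flag, clocked by the parallel clock the GTX wrapper gives back to you:

```verilog
// Minimal sketch of the user-side TX interface: drive TXDATA/TXCHARISK on
// every txusrclk2 cycle; send payload when available, otherwise a K28.5 idle.
module gtx_tx_user #(
    parameter [31:0] IDLE_WORD = 32'h50_50_50_BC   // filler word, low byte = K28.5 (0xBC)
)(
    input  wire        txusrclk2,   // parallel TX clock from the GTX wrapper
    input  wire        tx_valid,    // high when a user payload word is available
    input  wire [31:0] tx_payload,  // user payload word
    output reg  [31:0] txdata,      // to the GTX TXDATA port
    output reg  [3:0]  txcharisk    // to the GTX TXCHARISK port, 1 bit per byte
);
    always @(posedge txusrclk2) begin
        if (tx_valid) begin
            txdata    <= tx_payload;
            txcharisk <= 4'b0000;    // all four bytes are ordinary data
        end else begin
            txdata    <= IDLE_WORD;
            txcharisk <= 4'b0001;    // only byte [7:0] (0xBC) is a K character
        end
    end
endmodule
```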

GTX receiving interface

Pages 167 to 295 of "ug476_7Series_Transceivers" describe the receive path in detail. Again, most of that content can be ignored by users, because the manual mainly explains Xilinx's own design ideas and leaves the user only a small set of operable interfaces. With that in mind, we focus on the receive-side interfaces the user actually needs when instantiating GTX:
(figure)
Users only need to care about the clock and data of the receive interface. This part of the GTX instantiation module looks like this:
(figure)
(figure)
In the code I have re-bound these ports and brought them up to the top level of the module. The code looks like this:
(figure)
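Mirroring the transmit side, here is a minimal sketch of the receive-side interface (illustrative names, not the project's exact code): the wrapper hands back decoded 32-bit data with per-byte K flags on rxusrclk2, and the user simply samples them; this sketch also flags any byte decoded as the K28.5 comma:

```verilog
// Minimal sketch of the user-side RX interface: sample RXDATA/RXCHARISK on
// rxusrclk2 and flag the presence of the K28.5 comma (0xBC) in any byte lane.
module gtx_rx_user (
    input  wire        rxusrclk2,   // parallel RX clock recovered by the GTX
    input  wire [31:0] rxdata,      // from the GTX RXDATA port
    input  wire [3:0]  rxcharisk,   // from the GTX RXCHARISK port, 1 bit per byte
    output reg  [31:0] rx_word,
    output reg  [3:0]  rx_ctrl,
    output reg         rx_is_comma  // any byte lane carries a K28.5 comma
);
    integer i;
    always @(posedge rxusrclk2) begin
        rx_word     <= rxdata;
        rx_ctrl     <= rxcharisk;
        rx_is_comma <= 1'b0;
        for (i = 0; i < 4; i = i + 1)
            if (rxcharisk[i] && rxdata[8*i +: 8] == 8'hBC)
                rx_is_comma <= 1'b1;
    end
endmodule
```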

Calling and using the GTX IP core

(figures: GTX IP core configuration pages)
Here is an explanation of the labels in the figures above:
1: Line rate. Set this according to your own project requirements; the GTX range is 0.5 to 10.3125 Gb/s. Since my project is video transmission, any rate within the GTX range will do; in this example the line rate is 5.94 Gb/s.
2: Reference clock. This depends on your schematic; it can be 80 MHz, 125 MHz, 148.5 MHz, 156.25 MHz, and so on. On my development board it is 148.5 MHz.
4: GTX Quad and channel binding. This is very important. There are two references for this binding: your development board schematic and the official "ug476_7Series_Transceivers". The GTX resources are divided into groups by bank; because GT resources are dedicated resources of Xilinx FPGAs, they occupy dedicated banks, and the pins are dedicated as well. So how do these GTX groups correspond to pins? "ug476_7Series_Transceivers" describes it as follows; the red box marks the FPGA pins corresponding to my development board schematic:
(figure)
The schematic of my board is as follows:
(figure)
(figure)

Select the 8B/10B encoder/decoder with an external data width of 32 bits, as follows:
(figure)
The next setting is K-code detection:
(figure)
Here K28.5 is chosen, the so-called COM code, whose hexadecimal value is 0xBC. It has many uses: it can mark an idle or out-of-band symbol, and it can also act as a data misalignment flag used to indicate and correct misaligned data. The 8B/10B protocol defines the K codes as follows:
(figure)
The next setting is clock correction, which corresponds to the elastic buffer in the GTX receive path:
(figure)
This is where the concept of clock frequency offset comes in, especially when the transmit and receive clocks come from different sources. The frequency offset is set to 100 ppm here, and the transmitter is required to send a 4-byte sequence every 5000 data packets. Based on this 4-byte sequence and the position of the data in the buffer, the receiver's elastic buffer decides whether to delete or insert a byte within a 4-byte sequence, in order to keep the data stable from transmitter to receiver and cancel out the effect of the clock frequency offset.
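To make the transmit-side half of this concrete, here is a minimal sketch (assumed names and behavior, not the project's exact code) of user logic that inserts one all-K28.5 clock-correction word every 5000 words, which the receiver's elastic buffer can then drop or repeat as needed:

```verilog
// Minimal sketch: periodically insert a 4-byte K28.5 clock-correction sequence
// into the transmit stream so the RX elastic buffer can absorb +/-100 ppm of
// clock offset by deleting or inserting that word.
module cc_inserter (
    input  wire        txusrclk2,
    input  wire        rst,
    input  wire [31:0] tx_payload,   // word to send when no correction is due
    output reg  [31:0] txdata,
    output reg  [3:0]  txcharisk
);
    localparam [31:0] CC_SEQ = {4{8'hBC}};   // four K28.5 bytes
    reg [12:0] cnt;                          // counts 0 .. 4999

    always @(posedge txusrclk2) begin
        if (rst || cnt == 13'd4999) begin
            cnt       <= 13'd0;
            txdata    <= CC_SEQ;             // clock-correction word
            txcharisk <= 4'b1111;            // all four bytes are K characters
        end else begin
            cnt       <= cnt + 13'd1;
            txdata    <= tx_payload;
            txcharisk <= 4'b0000;
        end
    end
endmodule
```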

4. Design framework

This blog provides 2 sets of Vivado project source code. The difference between the two projects is whether 1 SFP optical port or 2 SFP optical ports are used for transmission and reception. Using 1 SFP optical port means connecting the RX and TX of the same SFP with a fiber; using 2 SFP optical ports means connecting the RX of one SFP to the TX of the other SFP with a fiber. The design framework is as follows.
The block diagram using 2 SFP optical ports:
(figure)
The block diagram using 1 SFP optical port:
(figure)

Video source selection

There are two kinds of video source, depending on whether the developer has a video source to hand. If you have an HDMI source, or your development board has an HDMI input interface, use the HDMI input as the video source; here I use a laptop to generate HDMI video, and the IT6802 decoder chip decodes the HDMI. If you have no HDMI source, or your development board has no HDMI input interface, you can use the dynamic color bar generated inside the code to simulate a camera; the dynamic color bar is a moving picture, so it can fully stand in for video. The default is to use the HDMI input as the video source.
(figure)
The video source is selected through the `define macro at the top level of the code:
(figure)
When `define USE_SENSOR is commented out, the input video source is the dynamic color bar; when `define USE_SENSOR is not commented out, the input video source is the HDMI input.
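A minimal sketch of how such a `define switch typically looks (the macro name USE_SENSOR comes from the text; the module and signal names here are illustrative, not the project's):

```verilog
// Minimal sketch of the video-source selection: comment out the `define to
// switch from the HDMI input to the internally generated dynamic color bar.
`define USE_SENSOR              // defined: HDMI input; commented out: color bar

module video_src_sel (
    input  wire        pclk,
    input  wire [15:0] hdmi_data,   // pixel data from the IT6802 decode path
    input  wire        hdmi_de,
    input  wire [15:0] cb_data,     // pixel data from the dynamic color-bar generator
    input  wire        cb_de,
    output reg  [15:0] vin_data,
    output reg         vin_de
);
    always @(posedge pclk) begin
`ifdef USE_SENSOR
        vin_data <= hdmi_data;      // HDMI input selected
        vin_de   <= hdmi_de;
`else
        vin_data <= cb_data;        // dynamic color bar selected
        vin_de   <= cb_de;
`endif
    end
endmodule
```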

IT6802 decoder chip configuration and acquisition

The IT6802 decoder chip must be configured over I2C before it can be used. For the configuration and use of the IT6802, please refer to my earlier blog; blog address: click to go directly. Both the IT6802 configuration and the video acquisition are implemented as Verilog code modules, located here:
(figure)
The resolution configured in the code is 1920x1080.

dynamic color bar

The dynamic color bar can be configured for different resolutions; the border width of the video, the size of the moving square, and its moving speed can all be set through parameters. Here I configure the resolution as 1920x1080. The code location of the dynamic color bar module and its top-level interface and instantiation are as follows:
(figure)
(figure)
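For readers without the source package, here is a minimal, self-contained sketch of the same idea (my own illustrative module, not the project's generator): resolution, border width, square size, and moving speed are exposed as parameters, and the square advances one step per frame:

```verilog
// Minimal sketch of a parameterized dynamic color-bar pattern: 8 vertical bars,
// a white border, and a square that moves a little every frame.
module colorbar_pattern #(
    parameter H_ACTIVE = 1920,
    parameter V_ACTIVE = 1080,
    parameter BORDER   = 8,     // white border width in pixels
    parameter SQ_SIZE  = 64,    // moving square size in pixels
    parameter SQ_STEP  = 4      // pixels the square moves per frame
)(
    input  wire        pclk,
    input  wire        de,      // active-video enable from the timing generator
    input  wire        vs_rise, // one-cycle pulse at the start of each frame
    input  wire [11:0] x,       // current column, 0 .. H_ACTIVE-1
    input  wire [11:0] y,       // current row,    0 .. V_ACTIVE-1
    output reg  [23:0] rgb
);
    localparam BAR_W = H_ACTIVE / 8;
    reg  [11:0] sq_x = 12'd0;            // left edge of the moving square
    wire [2:0]  bar  = x / BAR_W;        // which of the 8 bars this pixel is in

    always @(posedge pclk)               // advance the square once per frame
        if (vs_rise)
            sq_x <= (sq_x + SQ_STEP >= H_ACTIVE - SQ_SIZE) ? 12'd0 : sq_x + SQ_STEP;

    always @(posedge pclk) begin
        if (!de)
            rgb <= 24'h000000;
        else if (x < BORDER || y < BORDER ||
                 x >= H_ACTIVE - BORDER || y >= V_ACTIVE - BORDER)
            rgb <= 24'hFFFFFF;           // white border
        else if (x >= sq_x && x < sq_x + SQ_SIZE &&
                 y >= (V_ACTIVE - SQ_SIZE) / 2 && y < (V_ACTIVE + SQ_SIZE) / 2)
            rgb <= 24'hFF00FF;           // moving square
        else
            case (bar)                   // classic 8-bar pattern
                3'd0: rgb <= 24'hFFFFFF; 3'd1: rgb <= 24'hFFFF00;
                3'd2: rgb <= 24'h00FFFF; 3'd3: rgb <= 24'h00FF00;
                3'd4: rgb <= 24'hFF00FF; 3'd5: rgb <= 24'hFF0000;
                3'd6: rgb <= 24'h0000FF; 3'd7: rgb <= 24'h000000;
            endcase
    end
endmodule
```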

Video data packing

Since the video needs to be transmitted and received over GTX using the Aurora 8B/10B protocol, the data must be packed to fit the Aurora 8B/10B protocol standard. The code location of the video data packing module is as follows:
(figure)
First, the 16-bit video is written into a FIFO; once a full line has been buffered, it is read out of the FIFO and handed to GTX for transmission. Before that, each frame of video must be tagged with numbered commands (also called instructions): when packing, GTX sends the data with fixed commands, and the receiver later uses these commands to restore the video field sync signal and the video valid signal. When the rising edge of the frame field sync arrives, frame-start command 0 is sent; when the falling edge of the frame field sync arrives, frame-start command 1 is sent; during the video blanking period, invalid-data 0 and invalid-data 1 are sent. When the video valid signal arrives, each video line is numbered: a line-start command is sent first, then the current line number, and once the line has been sent, a line-end command follows. After a whole frame has been sent, frame-end command 0 is sent first and then frame-end command 1; at that point one frame of video has been transmitted. This module is not easy to understand, so I have added detailed Chinese comments in the code. Note that to keep the Chinese comments from being displayed as garbled text, please open the code with the Notepad++ editor. The commands are defined as follows; they can be changed arbitrarily, but the lowest byte must be 0xBC:
(figure)
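As a hedged sketch of what such command words could look like inside the packing module (the values here are my own placeholders; the projects may use different ones, and the only constraint stated above is that the lowest byte must be the K28.5 comma 0xBC):

```verilog
// Minimal sketch: command (instruction) words used by the packing/unpacking
// modules. Only byte [7:0] is sent as a K character (txcharisk = 4'b0001).
localparam [31:0] CMD_FRAME_START0 = 32'h01_00_00_BC;  // rising  edge of field sync
localparam [31:0] CMD_FRAME_START1 = 32'h02_00_00_BC;  // falling edge of field sync
localparam [31:0] CMD_IDLE0        = 32'h03_00_00_BC;  // blanking filler 0
localparam [31:0] CMD_IDLE1        = 32'h04_00_00_BC;  // blanking filler 1
localparam [31:0] CMD_LINE_START   = 32'h05_00_00_BC;  // followed by the line number
localparam [31:0] CMD_LINE_END     = 32'h06_00_00_BC;  // end of one video line
localparam [31:0] CMD_FRAME_END0   = 32'h07_00_00_BC;  // end of frame, word 0
localparam [31:0] CMD_FRAME_END1   = 32'h08_00_00_BC;  // end of frame, word 1
```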

GTX aurora 8b/10b

This module calls GTX to perform the Aurora 8B/10B encoding and decoding of the data. I have already given a detailed overview of GTX above, so I will not repeat it here; the code location is as follows:
(figure)

data alignment

Since Aurora 8B/10B transmission and reception over GT resources inherently suffers from data misalignment, the received, decoded data must be realigned. The code location of the data alignment module is as follows:
(figure)
The K-code control character format I defined is XX_XX_XX_BC, so a single rx_ctrl signal is used to indicate whether each byte of the data is a K-code COM symbol:
rx_ctrl = 4'b0000 means the 4-byte data contains no COM code;
rx_ctrl = 4'b0001 means bits [7:0] of the 4-byte data are the COM code;
rx_ctrl = 4'b0010 means bits [15:8] of the 4-byte data are the COM code;
rx_ctrl = 4'b0100 means bits [23:16] of the 4-byte data are the COM code;
rx_ctrl = 4'b1000 means bits [31:24] of the 4-byte data are the COM code;
Based on this, when the K code is received the data is aligned: the data is delayed by one beat and the new and old words are spliced together at the right offset. This is a basic FPGA operation, so I will not go into more detail here.
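A minimal sketch of that splice (illustrative code, not the project's exact module): the previous word is registered, the comma position from rx_ctrl is latched, and the output word is stitched from the old and new words so the comma always ends up in byte [7:0]. In this sketch the new offset takes effect from the word after the comma is seen, which is fine because realignment normally only happens once at link-up:

```verilog
// Minimal sketch of byte realignment driven by the COM-code position.
module rx_align (
    input  wire        rxusrclk2,
    input  wire [31:0] rx_data,      // decoded data from GTX
    input  wire [3:0]  rx_ctrl,      // per-byte K flags (COM position)
    output reg  [31:0] aligned_data  // realigned data, comma in byte [7:0]
);
    reg [31:0] data_d;               // previous word (one beat of delay)
    reg [1:0]  shift;                // latched byte offset of the comma

    always @(posedge rxusrclk2) begin
        data_d <= rx_data;

        case (rx_ctrl)               // latch the comma position when seen
            4'b0001: shift <= 2'd0;
            4'b0010: shift <= 2'd1;
            4'b0100: shift <= 2'd2;
            4'b1000: shift <= 2'd3;
            default: ;               // no comma this cycle: keep the old offset
        endcase

        case (shift)                 // splice old and new words at that offset
            2'd0: aligned_data <= data_d;
            2'd1: aligned_data <= {rx_data[ 7:0], data_d[31: 8]};
            2'd2: aligned_data <= {rx_data[15:0], data_d[31:16]};
            2'd3: aligned_data <= {rx_data[23:0], data_d[31:24]};
        endcase
    end
endmodule
```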

Video data unpacking

Data unpacking is the reverse of data packing; the code location is as follows:
(figure)
When unpacking, GTX restores the video field sync signal and the video valid signal according to the fixed commands; these signals are important for the subsequent image buffering.
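A minimal sketch of that restoration (reusing the illustrative CMD_* placeholders from the packing sketch above, so the values are assumptions, not the project's): received K-code command words toggle the field sync, and ordinary payload words are passed on as pixels with the valid flag set:

```verilog
// Minimal sketch: rebuild field sync (vs) and data valid (de) from the
// received command words; payload handling is simplified to the low 16 bits.
module video_unpack (
    input  wire        rxclk,
    input  wire [31:0] rx_word,     // aligned, decoded word from GTX
    input  wire        rx_is_cmd,   // high when rx_word is a K-code command
    output reg         vs,          // restored field sync
    output reg         de,          // restored video valid
    output reg  [15:0] pix          // restored 16-bit pixel
);
    localparam [31:0] CMD_FRAME_START0 = 32'h01_00_00_BC;  // field sync rising edge
    localparam [31:0] CMD_FRAME_START1 = 32'h02_00_00_BC;  // field sync falling edge

    always @(posedge rxclk) begin
        de <= 1'b0;
        if (rx_is_cmd) begin
            case (rx_word)
                CMD_FRAME_START0: vs <= 1'b1;   // sync goes high
                CMD_FRAME_START1: vs <= 1'b0;   // sync goes low
                default:          ;             // line/idle commands ignored here
            endcase
        end else begin
            pix <= rx_word[15:0];               // payload word: emit a pixel
            de  <= 1'b1;
        end
    end
endmodule
```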
At this point the data path into and out of GTX has been covered; the block diagram of the whole process, as described in the code, is as follows:
(figure)

image cache

Regular readers of my blog will know that my usual routine for image buffering is FDMA. Its job is to send the image into DDR for a 3-frame buffer and then read it back out for display, in order to absorb the clock difference between input and output and improve the quality of the output video. For more on FDMA, please refer to my earlier blog; blog address: click to go directly

video output

After the video is read out of FDMA, it passes through the VGA timing module and the HDMI transmit module and is then output to the display. The code location is as follows:
(figure)
The VGA timing is configured as 1920x1080, and the HDMI transmit module is handwritten in Verilog and can be reused in other FPGA HDMI transmit applications. For more on this module, please refer to my earlier blog; blog address: click to go directly

5. Vivado project 1 --> 2-channel SFP transmission

Development board FPGA model: Xilinx Kintex7 xc7k325tffg676-2;
Development environment: Vivado 2019.1;
Input: HDMI or dynamic color bar, resolution 1920x1080@60Hz;
Output: HDMI display;
Application: 2-channel SFP optical port GTX Aurora 8B/10B encoded/decoded video transmission;
The project Block Design is as follows:
(figure)
The project code structure is as follows:
(figure)
The FPGA resource consumption and power estimate after synthesis and compilation are as follows:
(figure)

6. Vivado project 2 --> 1-channel SFP transmission

Development board FPGA model: Xilinx Kintex7 xc7k325tffg676-2;
Development environment: Vivado 2019.1;
Input: HDMI or dynamic color bar, resolution 1920x1080@60Hz;
Output: HDMI display;
Application: 1-channel SFP optical port GTX Aurora 8B/10B encoded/decoded video transmission;
The project Block Design is as follows:
(figure)
The project code structure is as follows:
(figure)
The FPGA resource consumption and power estimate after synthesis and compilation are as follows:
(figure)

7. Board debugging and verification

fiber optic connection

Project 1: The fiber connection method for 2-way SFP transmission is as follows:
(figure)
Project 2: The fiber connection method for 1-way SFP transmission is as follows:
(figure)

static demo

The following takes project 1 (2-way SFP transmission) as an example to show the output after HDMI input:
When GTX runs at a 5.94 Gb/s line rate, the output looks like this:
(figure)
The following takes project 1 (2-way SFP transmission) as an example to show the output of the dynamic color bar:
When GTX runs at a 5.94 Gb/s line rate, the output looks like this:
(figure)

dynamic demo

The following takes project 1 (2-way SFP transmission) as an example to show a demonstration video of the output after HDMI input:

K7-GTX-HDMI-LOOP


The following takes project 1 (2-way SFP transmission) as an example to show a demonstration video of the dynamic color bar output:

K7-GTX-HDMI-COLOR

8. Benefits: how to obtain the engineering code

The code is too large to send by email, so it is delivered via a netdisk link.
How to obtain it: send a private message, or use the V business card at the end of the article.
The netdisk information is as follows:
(figure)
