Foreword
Although I have implemented the entire graphics rendering pipeline in C, some of the theory was still unfamiliar, so this is a summary of the theory behind common interview questions.
1. GPU rendering process
All rendering ultimately comes down to the CPU preparing data and handing it over to the GPU.
From the GPU's perspective, rendering is fairly straightforward. The overall flow of a program roughly breaks down into the following steps:
1. The application calls a graphics API (e.g., OpenGL or DirectX 12).
2. The graphics API calls into the GPU driver.
3. The GPU driver translates the API calls into commands the GPU can execute.
4. The CPU transfers the data in system memory to GPU memory.
5. With both the data and the program code, the GPU can execute them and render the image to the screen.
2. Graphics rendering pipeline
2.1 Overview of the rendering pipeline
- Application stage: runs on the CPU; generally handles input, animation, events, and so on.
- Geometry stage: responsible for per-vertex and per-primitive operations.
- Rasterization stage: draws pixel by pixel from the transformed, projected vertices and their shading information, converting 2D points in screen space into pixels on the screen.
2.2 Overall process
Application stage --> geometry stage --> rasterization stage --> per-fragment operations stage --> post-processing
2.3 Application stage
Set up the scene's basic data: model sizes, positions and rotations, light source types and lighting parameters, and camera parameters.
Coarse-grained culling and acceleration algorithms: cull objects that do not need to be sent to the geometry stage, e.g., frustum culling and occlusion culling.
Set the render state and prepare rendering parameters: e.g., whether objects are drawn back to front or front to back.
Issue the draw call: the CPU notifies the GPU to start rendering.
2.4 Geometry stage
1. Input of vertex data
2. Vertex shader (applies the transformation matrices)
Vertex transformation: model space --> world space --> view space --> clip space, via what is commonly called the MVP (model-view-projection) matrix.
Vertex shading: compute per-vertex data such as lighting and set the vertex color.
3. Tessellation (optional; subdivides a triangle into multiple triangles)
4. Geometry shader (optional; can expand points into lines or polygons)
5. Primitive assembly
Projection: convert to normalized device coordinates according to the camera type (orthographic or perspective).
Clipping: parts of primitives that fall outside the view volume are not rendered.
Culling: back-facing primitives are not rendered.
Screen mapping: map the normalized device coordinates in [-1, 1] to actual pixel positions on the screen.
2.5 Rasterization stage
6. Rasterization
Compute each fragment's attributes, such as color, by interpolating the vertex values across the primitive.
2.6 Per-fragment operations stage
7. Fragment shader: determines the final color of each fragment on screen and performs the per-fragment shading.
8. Blending and testing
- Alpha test
- Stencil test and depth test
- Fragments that pass the tests are blended with the color already in the color buffer.
9. Output to the target buffer: either the frame buffer (FrameBuffer) or a render texture (RenderTexture).
2.7 Post-processing
- Various screen-space effects: depth of field, bloom, Gaussian blur, etc.