Unity Shader rendering pipeline

What is a rendering pipeline?

The task of the rendering pipeline is to generate (render) a 2D image from a 3D scene.
This process is divided into three stages: the application stage, the geometry stage, and the rasterization stage.

  1. Application stage:
    (1) Output: rendering primitives - points, lines, triangles, etc.
    (2) The developer prepares the scene data: the camera position, the viewing frustum, the models contained in the scene, and which light sources are used
    (3) The developer performs coarse-grained culling, discarding invisible objects so they are not handed to the geometry stage for processing
    (4) Set the rendering state of each model: the material used (diffuse reflection color, specular reflection color), the texture used, and the shader used
  2. Geometry stage:
    (1) Output: the screen-space vertex information
    (2) Process all the geometry-related work for what we want to draw (determine which primitives need to be drawn, how to draw them, and where to draw them)
  3. Rasterization stage:
    (1) Use the data from the previous stage to produce the pixels on the screen and render the final image.

Communication between CPU and GPU

  1. Load data into video memory
    (1) Data is first loaded from the hard disk into main memory
    (2) Data such as meshes and textures is then loaded into the storage space on the graphics card (video memory)

  2. Set the rendering state
    (1) Defines how the meshes in the scene are to be rendered

  3. Issue a Draw Call
    (1) A Draw Call is a command whose initiator is the CPU and whose receiver is the GPU. It merely points to a list of primitives that need to be rendered and contains no material information
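A minimal sketch of this idea: the draw command is recorded with only a handle to the primitive list, while the render state (material, shader, textures) was bound separately beforehand. All class and method names here are illustrative, not part of any real graphics API.

```python
# Illustrative sketch: a Draw Call is just a command the CPU records for
# the GPU. It references a primitive list by handle; material and render
# state are set separately, before the call.

class CommandBuffer:
    def __init__(self):
        self.commands = []

    def set_render_state(self, material):
        # Render state (shader, textures, blend mode, ...) is bound once,
        # then shared by all subsequent draw calls.
        self.commands.append(("SetRenderState", material))

    def draw(self, mesh_handle, primitive_count):
        # The draw call itself carries no material data: only which
        # primitives to render.
        self.commands.append(("DrawCall", mesh_handle, primitive_count))

buf = CommandBuffer()
buf.set_render_state("stone_material")   # hypothetical material name
buf.draw(mesh_handle=42, primitive_count=1024)
print(buf.commands[-1])  # ('DrawCall', 42, 1024)
```

Because each state change is shared by the draw calls that follow it, batching draws that use the same material reduces the CPU cost per frame.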

GPU pipeline

  1. Overview
    (1) The geometry stage and the rasterization stage can each be divided into several smaller pipeline stages, implemented by the GPU; for each stage the GPU offers a different degree of configurability and programmability
    (2) Vertex data - input
    (3) Vertex shader - performs the space transformation of vertices, per-vertex shading, and other functions
    (4) Tessellation shader - used to subdivide primitives
    (5) Geometry shader - used to perform per-primitive shading operations, or to produce more primitives
    (6) Clipping - culls vertices that are not in the camera's field of view and trims primitives that cross its boundary
    (7) Screen mapping - responsible for converting the coordinates of each primitive to the screen coordinate system
    (8) Triangle setup - fixed-function stage
    (9) Triangle traversal - fixed-function stage
    (10) Fragment shader - performs per-fragment shading operations
    (11) Per-fragment operations - modify the color buffer and depth buffer, and perform blending
    (12) Screen image

  2. Vertex shader
    (1) The vertex shader is invoked once for each input vertex
    (2) Its work: coordinate transformation and per-vertex lighting
    (3) It transforms vertex coordinates from model space into homogeneous clip space
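The transformation in step (3) is a 4x4 matrix multiply. A sketch in plain Python, assuming the column-vector convention; here a simple translation matrix stands in for the full model-view-projection (MVP) product.

```python
# Sketch: the core job of a vertex shader is multiplying the model-space
# position by the MVP matrix to reach homogeneous clip space.

def mat_vec(m, v):
    # 4x4 matrix times 4-component column vector.
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Example "MVP": a pure translation by (1, 2, 3), standing in for the
# real model * view * projection product.
mvp = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]

model_pos = [0.0, 0.0, 0.0, 1.0]   # homogeneous model-space position (w = 1)
clip_pos = mat_vec(mvp, model_pos)
print(clip_pos)  # [1.0, 2.0, 3.0, 1.0]
```

In a real shader this is one instruction-level operation (e.g. multiplying by a combined MVP matrix uniform); the sketch only shows the arithmetic.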

  3. Clipping
    (1) Deals with primitives that are not entirely within the camera's view
    (2) Three cases: ① completely in the field of view - pass the primitive on to the next pipeline stage; ② partly in the field of view - clip it, replacing the vertices outside the view with new vertices on the view boundary; ③ completely outside the field of view - do not pass the primitive on
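The three cases can be sketched with the standard clip-space containment test (a point is inside the frustum when -w ≤ x, y, z ≤ w). This is a simplified illustration in Python, not a real clipper: it counts vertices per triangle and, among other things, ignores the rare case where all three vertices are outside but the triangle's area still crosses the frustum.

```python
def inside_frustum(v):
    # Clip-space containment: a point is in view when -w <= x, y, z <= w.
    x, y, z, w = v
    return -w <= x <= w and -w <= y <= w and -w <= z <= w

def classify(triangle):
    inside = sum(inside_frustum(v) for v in triangle)
    if inside == 3:
        return "pass"   # case 1: fully in view, hand the primitive on unchanged
    if inside == 0:
        return "cull"   # case 3 (simplified): fully outside, do not pass it on
    return "clip"       # case 2: partly in view, new vertices are generated
                        # on the view boundary

print(classify([(0.0, 0.0, 0.0, 1.0), (0.5, 0.0, 0.0, 1.0), (0.0, 0.5, 0.0, 1.0)]))  # pass
print(classify([(5.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 1.0), (0.0, 0.5, 0.0, 1.0)]))  # clip
```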

  4. Screen mapping
    (1) Converts the x and y coordinates of each primitive into the screen coordinate system
    (2) The screen coordinates together with the z coordinate form what is called the window coordinate system
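The x/y remap can be sketched as a linear map from normalized device coordinates to pixels. Assumptions in this sketch: NDC x and y lie in [-1, 1], and any API-specific y-axis flip or half-pixel offset is ignored.

```python
def screen_map(ndc_x, ndc_y, width, height):
    # Remap NDC [-1, 1] to pixel coordinates [0, width] x [0, height].
    # The z coordinate is carried along unchanged, giving window coordinates.
    sx = (ndc_x + 1.0) * 0.5 * width
    sy = (ndc_y + 1.0) * 0.5 * height
    return sx, sy

print(screen_map(0.0, 0.0, 1920, 1080))    # (960.0, 540.0)  - NDC origin -> screen center
print(screen_map(-1.0, -1.0, 1920, 1080))  # (0.0, 0.0)      - NDC corner -> screen corner
```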

  5. Triangle setup - calculates the information needed to rasterize a triangle mesh

  6. Triangle traversal - checks whether each pixel is covered by the triangle mesh, and if so, generates a fragment
    (figure: a pixel grid in which the covered pixels, shaded red, become fragments)

The pixels form a grid, and the red-shaded cells represent the generated fragments. If the covered part of a pixel's cell is too small, no fragment is generated for that pixel.
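Triangle traversal can be sketched with the classic edge-function coverage test, sampling each pixel at its center. This is an illustrative pure-Python version, not how hardware rasterizers are actually implemented.

```python
def edge(ax, ay, bx, by, px, py):
    # Signed area test: which side of edge A->B the point P lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def covered(tri, px, py):
    # P is inside the triangle when all three edge tests agree in sign.
    (ax, ay), (bx, by), (cx, cy) = tri
    e0 = edge(ax, ay, bx, by, px, py)
    e1 = edge(bx, by, cx, cy, px, py)
    e2 = edge(cx, cy, ax, ay, px, py)
    return (e0 >= 0 and e1 >= 0 and e2 >= 0) or (e0 <= 0 and e1 <= 0 and e2 <= 0)

def traverse(tri, width, height):
    # Visit every pixel; emit a fragment when the pixel center is covered.
    frags = []
    for y in range(height):
        for x in range(width):
            if covered(tri, x + 0.5, y + 0.5):
                frags.append((x, y))
    return frags

tri = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
frags = traverse(tri, 5, 5)
print(len(frags))  # 10
```

This also shows why a pixel that is only slightly touched produces no fragment: its center sample falls outside the triangle.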
  7. Per-fragment operations
    (1) Determine the visibility of each fragment - stencil test, depth test
    (2) Stencil test: compare the fragment's value in the stencil buffer with a reference value; this can be used to limit the rendering area
        Depth test: compare the fragment's depth value with the depth value already in the depth buffer
    (3) Blending: when enabled, the fragment's color is merged with the color already in the color buffer; when disabled, the fragment's color overwrites it directly
    (4) Double buffering: ① the contents of the back buffer and the front buffer are swapped; ② rasterization results are written to the back buffer; ③ the front buffer is what is displayed on screen
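The depth-test and blending steps can be sketched with a tiny single-row "framebuffer" in plain Python. Assumptions: smaller depth means closer, and a simple 50/50 average stands in for whatever blend function is actually configured; the function name is illustrative.

```python
def per_fragment(frag_depth, frag_color, depth_buf, color_buf, x, blending):
    # Depth test: the fragment survives only if it is closer than the
    # value already stored in the depth buffer.
    if frag_depth >= depth_buf[x]:
        return False  # fragment discarded
    depth_buf[x] = frag_depth
    if blending:
        # Blending on: merge fragment color with the existing buffer color
        # (here a simple 50/50 average as a stand-in blend function).
        color_buf[x] = tuple((f + c) / 2 for f, c in zip(frag_color, color_buf[x]))
    else:
        # Blending off: overwrite the buffer color directly.
        color_buf[x] = frag_color
    return True

depth_buf = [1.0]               # cleared to the far plane
color_buf = [(0.0, 0.0, 0.0)]   # cleared to black
per_fragment(0.5, (1.0, 0.0, 0.0), depth_buf, color_buf, 0, blending=False)
print(color_buf[0])  # (1.0, 0.0, 0.0)
```

A second fragment at depth 0.8 would now fail the depth test (0.8 >= 0.5) and be discarded, while a closer one at 0.4 with blending enabled would be averaged with the stored red.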

Origin blog.csdn.net/weixin_50617270/article/details/123360514