Unity Shader study notes - rendering pipeline

        The rendering pipeline can be divided into three conceptual stages: application stage, geometry stage, and rasterization stage.

Application stage: usually implemented on the CPU and mainly controlled by the developer. Its three main tasks are:

        1. Prepare the scene data, such as the camera position and which light sources are used.

        2. Perform coarse-grained culling, removing objects that are not visible so that they do not need to be passed to the geometry stage.

        3. Set the rendering state of each model and output the geometric information required for rendering (i.e., the rendering primitives).

Geometry stage: usually runs on the GPU and processes all of the geometry to be drawn, performing per-vertex and per-polygon operations. An important task of the geometry stage is to transform vertex coordinates into screen space and then hand them over to the rasterizer.

Rasterization stage: also usually runs on the GPU; it uses the data passed down from the previous stage to produce pixels on the screen and render the final image.

        The pipeline described here is a conceptual pipeline; it is only a basic functional division of the rendering process. The GPU pipeline introduced below is the pipeline the hardware uses to implement these concepts.

        The starting point of the rendering pipeline is the CPU, that is, the application stage, which can be roughly divided into the following three steps:

        1. Load the data into video memory (hard disk → system memory → video memory, so that the GPU can access the data quickly during rendering).

        2. Set the rendering state (define how the meshes in the scene are rendered, including the vertex shader, fragment shader, light source properties, and materials); see the ShaderLab sketch after this list.

        3. Issue the draw call (a draw call is essentially a command from the CPU to the GPU, and reducing draw calls is one of the key points of performance optimization).
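
        In Unity, much of this render state is declared in the shader's ShaderLab Pass rather than in C# code. A minimal sketch of such a state-only pass is shown below (the shader name and the exact state values are illustrative assumptions, not part of the original notes):

        Shader "Custom/RenderStateExample"
        {
            SubShader
            {
                Pass
                {
                    // Render state handed to the GPU before the draw call:
                    Cull Back      // cull back-facing triangles
                    ZWrite On      // write to the depth buffer
                    ZTest LEqual   // standard depth test
                    Blend Off      // opaque: overwrite the color buffer, no blending
                }
            }
        }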


        After the draw call is issued, we come to the GPU rendering pipeline. Although we cannot fully control the implementation details of this part, the GPU still exposes a great deal of configurability to us:

Geometry stage:

        Vertex shader: fully programmable; it computes per-vertex data such as color and transforms vertex coordinates. (For example, a water surface can be simulated by displacing vertex positions.)

        o.pos = mul(UNITY_MATRIX_MVP, v.position);

        This line of code transforms the vertex coordinates into homogeneous clip space.
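
        A minimal vertex shader showing this transform in context might look like the sketch below (assuming Unity's built-in render pipeline and the standard appdata_base input struct; this is an illustration, not the original author's code):

        #include "UnityCG.cginc"

        struct v2f
        {
            float4 pos : SV_POSITION;   // homogeneous clip-space position
        };

        v2f vert(appdata_base v)
        {
            v2f o;
            // Transform the object-space vertex position into homogeneous clip space
            o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
            // In newer Unity versions the equivalent helper is:
            // o.pos = UnityObjectToClipPos(v.vertex);
            return o;
        }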

        Tessellation shader: an optional shader used to subdivide primitives.

        Geometry shader: an optional shader that performs per-primitive shading operations or generates additional primitives; a pass-through sketch is shown below.
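
        As a rough illustration (not from the original notes), a pass-through geometry shader in Cg/HLSL could look like this; the v2f struct is assumed to be the one output by the vertex shader, and the function would be hooked up with #pragma geometry geom:

        [maxvertexcount(3)]
        void geom(triangle v2f input[3], inout TriangleStream<v2f> stream)
        {
            // Re-emit the incoming triangle unchanged; a real geometry shader
            // could modify these vertices or append extra primitives here.
            for (int i = 0; i < 3; i++)
            {
                stream.Append(input[i]);
            }
            stream.RestartStrip();
        }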

        Clipping: a configurable stage. It clips away vertices that are not inside the camera's view volume and culls the faces (e.g., back faces) of some triangle primitives.

        A primitive can relate to the camera's view volume in one of three ways: completely inside, partially inside, or completely outside. If it is completely inside, it is passed on to the next pipeline stage; if it is completely outside, it is not passed on; if it is partially inside, it must be clipped, and new vertices generated at the boundary of the view volume replace the vertices that lie outside it.
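
        In homogeneous clip space this visibility test can be written very compactly. The helper below is an illustrative sketch (not from the original notes), assuming a clip-space position pos produced by the vertex shader:

        // A vertex is inside the view volume when each of x, y, z lies in [-w, w]
        // (under the Direct3D convention the z range is [0, w] instead).
        bool IsInsideClipVolume(float4 pos)
        {
            return abs(pos.x) <= pos.w &&
                   abs(pos.y) <= pos.w &&
                   abs(pos.z) <= pos.w;
        }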

       Screen mapping: converts the x and y coordinates of each primitive into the screen coordinate system; this is essentially a scaling conversion.

        The screen coordinates obtained from screen mapping determine which pixel on the screen a vertex corresponds to and how far the vertex is from that pixel.

         If the resulting image appears upside down, it may be because of the difference between the screen coordinate systems of OpenGL and DirectX (OpenGL places the origin at the bottom-left corner of the screen, DirectX at the top-left).
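
        The scaling involved can be made concrete with a small sketch (illustrative only; it assumes normalized device coordinates that have already been divided by w and a target resolution of pixelWidth by pixelHeight):

        // Map NDC x, y in [-1, 1] to screen coordinates.
        float2 ScreenMap(float2 ndc, float pixelWidth, float pixelHeight)
        {
            float2 screen;
            screen.x = (ndc.x * 0.5 + 0.5) * pixelWidth;
            screen.y = (ndc.y * 0.5 + 0.5) * pixelHeight; // OpenGL-style: origin at bottom-left
            // Under the DirectX convention the y axis is flipped (origin at top-left),
            // which is why an image can come out inverted if the difference is ignored.
            return screen;
        }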

Rasterization stage:

        Both triangle setup and triangle traversal are fixed-function stages.

        Fragment shader: fully programmable; it performs shading operations on each visible fragment.

        Texture sampling: reads texels (texture pixels) and maps them onto screen pixels.
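
        A minimal fragment shader that samples a texture might look like the sketch below (assuming the built-in pipeline, a _MainTex texture property, and a v2f struct that also carries a UV coordinate; again, this is an illustration rather than the original author's code):

        sampler2D _MainTex;

        struct v2f
        {
            float4 pos : SV_POSITION;
            float2 uv  : TEXCOORD0;
        };

        fixed4 frag(v2f i) : SV_Target
        {
            // Sample the texture at this fragment's UV coordinate;
            // the sampled texel becomes the fragment's color.
            fixed4 texColor = tex2D(_MainTex, i.uv);
            return texColor;
        }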

        Per-fragment operations: not programmable, but highly configurable. They are responsible for modifying colors, the depth buffer, blending, and so on (these are the three tests and the Blend process described in the earlier notes on the rendering flow).

        1. Perform tests such as the alpha test, the stencil test (which can be used for shadow rendering and outline rendering), and the depth test, and discard fragments that fail.

        2. If a fragment passes all the tests, merge (blend) its color with the color already stored in the color buffer. (Opaque objects can simply turn blending off, so the fragment directly overwrites the value in the color buffer; for semi-transparent objects we need the blending operation to make the object look transparent; see the ShaderLab sketch after this list.)
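
        These per-fragment operations map onto ShaderLab render-state commands. The sketch below shows a typical setup for a semi-transparent pass (the blend factors and stencil values are illustrative assumptions):

        Pass
        {
            // Standard alpha blending for a semi-transparent object:
            // result = src.a * srcColor + (1 - src.a) * dstColor
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off        // transparent objects usually do not write depth
            ZTest LEqual      // the depth test still runs

            // Optional stencil test, e.g. for outline or shadow techniques
            Stencil
            {
                Ref 1
                Comp Equal
            }

            // An opaque pass would instead use:
            // Blend Off
            // ZWrite On
        }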

        Logically, the tests above are performed after the fragment shader, but on most GPUs, to avoid computing a fragment's color only to have it discarded by a failed test, the tests are performed as early as possible, before the fragment shader runs. This technique of moving the depth test forward is called Early-Z.

        After a fragment passes the tests, it is written to the screen. To prevent us from seeing primitives that are still being rasterized, the GPU uses a double-buffering strategy: the scene is rendered behind the scenes into the back buffer. Once the scene has been rendered into the back buffer, the GPU swaps the contents of the back buffer with the front buffer, which holds the image previously displayed on the screen.
