Graphics pipeline basics (1)

Foreword


This article summarizes my earlier study of the graphics pipeline.


1. Basic concepts of OpenGL

OpenGL (Open Graphics Library) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. OpenGL originated at the Silicon Valley company Silicon Graphics Inc. (SGI) and its IRIS GL; "GL" stands for "Graphics Library".
The OpenGL specification is maintained by the OpenGL Architecture Review Board (ARB), established in 1992. The ARB is made up of companies with a particular interest in creating a unified, generally available API. According to the OpenGL official website, the ARB voting members in June 2002 included 3Dlabs, Apple Computer, ATI Technologies, Dell Computer, Evans & Sutherland, Hewlett-Packard, IBM, Intel, Matrox, NVIDIA, SGI, and Sun Microsystems. Microsoft, one of the founding members, withdrew in March 2003.
The purpose of OpenGL is to provide an abstraction layer between the application and the underlying graphics subsystem, which is usually a hardware accelerator made up of one or more custom, high-performance processors with dedicated memory, display outputs, and so on. This abstraction layer saves programs from having to know who made the graphics processor. OpenGL's design principle is to strike a balance between too high and too low a level of abstraction.
Modern GPUs are composed of large numbers of small programmable processors called shader cores; the mini-programs they run are called shaders. A simplified schematic of the graphics pipeline is shown below.
(Figure: simplified schematic of the OpenGL graphics pipeline)
The classics never go out of date: this picture covers the OpenGL graphics pipeline in detail. The blue blocks indicate the various buffers that feed the OpenGL pipeline, the green blocks indicate fixed-function (non-programmable) stages, and the yellow blocks indicate programmable stages. The T and B symbols represent texture bindings and buffer bindings respectively; textures and buffers are the two main data storage forms in OpenGL and will be covered in detail later. This chapter mainly covers the middle part of the diagram; the upper-right corner is the compute shader, which will get a chapter of its own later.
In OpenGL, the basic unit of rendering is called a primitive. OpenGL supports a variety of primitives, but the three renderable primitive types are points, lines, and triangles. Everything we see rendered on the screen is a collection of points, lines, and triangles. Applications generally decompose complex surfaces into many triangles and send them to OpenGL for rendering via a hardware accelerator called the rasterizer. The rasterizer is specialized hardware that converts the three-dimensional representation of a triangle into a series of pixels to be rendered on the screen. The graphics pipeline is split into two main parts. The first part is the front end, which processes vertices and primitives and finally assembles them into points, lines, and triangles to hand to the rasterizer; this process is called primitive assembly. After the rasterizer has run, the geometry has been transformed from its vector nature into a large number of individual pixels. These are handled by the back end, which includes depth testing, stencil testing, fragment shading, blending, and updating the output image.

2. OpenGL pipeline process

1. Vertex shader

The vertex shader is the first programmable stage in the OpenGL pipeline and the only mandatory stage in the graphics pipeline. Before the vertex shader starts running, a fixed-function stage called vertex fetching (also known as vertex pulling) runs, which automatically provides input to the vertex shader. The data involved includes the element array buffer, vertex buffer objects, and the draw indirect buffer.
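
As a minimal sketch in the Cg/HLSL style used later in this article (the struct and function names are illustrative; UnityObjectToClipPos comes from Unity's UnityCG.cginc), a vertex shader consumes the per-vertex attributes that vertex fetching supplies and must at least produce a clip-space position:

#include "UnityCG.cginc"

// Inputs filled by the fixed-function vertex-fetch stage from the bound
// vertex buffers.
struct appdata
{
    float4 vertex : POSITION;
    float2 uv     : TEXCOORD0;
};

struct v2f
{
    float4 pos : SV_POSITION; // mandatory output: clip-space position
    float2 uv  : TEXCOORD0;
};

v2f vert(appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex); // object space -> clip space
    o.uv  = v.uv;                           // pass the attribute downstream
    return o;
}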

2. Tessellation

Tessellation is the process of decomposing high-order primitives (known as patches in OpenGL) into many smaller, simpler primitives for rendering, for example splitting them into multiple triangles. OpenGL includes a fixed-function but configurable tessellation engine that can decompose quads, triangles, and lines into a large number of smaller points, lines, or triangles that the conventional rasterization hardware further down the pipeline can consume directly. Logically, the tessellation stage sits immediately after the vertex shading stage in the OpenGL pipeline and consists of three parts: the tessellation control shader, the tessellation primitive generator, and the tessellation evaluation shader.

2-1. Tessellation control shader

The first of the three tessellation stages is the tessellation control shader. This shader takes its input from the vertex shader and is mainly responsible for two tasks: determining the tessellation levels to send to the tessellation engine, and generating the data to send to the tessellation evaluation shader, which runs once tessellation has taken place.
Tessellation in OpenGL works by decomposing high-order surfaces, the so-called patches, into points, lines, and triangles. Each patch is made up of a number of control points.
When tessellation is active, the vertex shader runs once per control point, while the tessellation control shader runs in batches on groups of control points whose size equals the number of control points per patch. That is, the vertices serve as control points, and the results of the vertex shader are passed in batches to the tessellation control shader as its input. The number of control points per patch can be changed (the default is 3), and the tessellation control shader can output a different number of control points than it consumes.
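
In the HLSL used elsewhere in this article, the tessellation control shader corresponds to a hull shader. A minimal hedged sketch for triangle patches with three control points follows; the TessFactors struct, the patchConstant function, and the reuse of the v2g struct from the later example are illustrative assumptions:

struct TessFactors
{
    float edge[3] : SV_TessFactor;       // tessellation level of each patch edge
    float inside  : SV_InsideTessFactor; // tessellation level of the interior
};

// Patch-constant function: runs once per patch and decides the levels
// handed to the fixed-function tessellation engine.
TessFactors patchConstant(InputPatch<v2g, 3> patch)
{
    TessFactors f;
    f.edge[0] = f.edge[1] = f.edge[2] = 4.0; // constant level, for illustration
    f.inside = 4.0;
    return f;
}

[domain("tri")]
[partitioning("integer")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(3)]
[patchconstantfunc("patchConstant")]
v2g hull(InputPatch<v2g, 3> patch, uint id : SV_OutputControlPointID)
{
    return patch[id]; // pass each control point through unchanged
}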

2-2. Tessellation engine

The tessellation engine is a fixed-function part of the OpenGL pipeline that receives high-order surfaces represented as patches and decomposes them into simpler primitives. Before the engine receives a patch, the tessellation control shader processes the incoming control points and sets the tessellation factors used to decompose the patch. After the engine produces its output primitives, the tessellation evaluation shader receives the vertices representing those primitives. The tessellation engine is responsible for generating the parameters with which the tessellation evaluation shader is invoked; that shader then uses them to transform the resulting primitives and ready them for rasterization.

2-3. Tessellation evaluation shader

Once the fixed-function tessellation engine has run, it produces a large number of output vertices representing the primitives it has generated, and these are passed to the tessellation evaluation shader. The tessellation evaluation shader runs one invocation for each vertex produced by the tessellator, so at high tessellation levels it runs many times. Its job is to assign values to the generated vertices, much as a vertex shader does.
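
Its HLSL counterpart is the domain shader, which the tessellator invokes once per generated vertex with that vertex's barycentric coordinates. A minimal sketch, reusing the illustrative structs from the hull shader sketch above:

[domain("tri")]
v2g domain(TessFactors factors,
           OutputPatch<v2g, 3> patch,
           float3 bary : SV_DomainLocation) // barycentric position of the new vertex
{
    v2g o;
    // Interpolate the control-point attributes at the generated vertex,
    // the same kind of per-vertex work a vertex shader would do.
    o.vertex = patch[0].vertex * bary.x + patch[1].vertex * bary.y + patch[2].vertex * bary.z;
    o.uv     = patch[0].uv * bary.x + patch[1].uv * bary.y + patch[2].uv * bary.z;
    return o;
}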

2-4. Application in Unity Shader

A Unity Shader gains tessellation support by including the "Tessellation.cginc" file.
Tessellation.cginc provides three tessellation functions:
UnityDistanceBasedTess(): generates tessellation factors based on distance from the camera
UnityEdgeLengthBasedTess(): generates tessellation factors based on edge length
UnityEdgeLengthBasedTessCull(): generates tessellation factors based on edge length, with culling
For the underlying principles and hands-on usage, see
https://catlikecoding.com/unity/tutorials/advanced-rendering/tessellation/
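
As a hedged sketch of how one of these helpers is wired into a surface shader (only the CGPROGRAM body is shown; the _Tess, _MinDist, _MaxDist, and _MainTex properties are assumed to be declared in the shader's Properties block):

#pragma surface surf Standard tessellate:tessDistance
#include "Tessellation.cginc"

float _Tess;     // maximum tessellation factor
float _MinDist;  // distance at which tessellation is highest
float _MaxDist;  // distance beyond which tessellation falls off to 1

// Called once per triangle patch; the returned factors drive the tessellator.
float4 tessDistance(appdata_full v0, appdata_full v1, appdata_full v2)
{
    return UnityDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex,
                                  _MinDist, _MaxDist, _Tess);
}

sampler2D _MainTex;
struct Input { float2 uv_MainTex; };

void surf(Input IN, inout SurfaceOutputStandard o)
{
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
}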

3. Geometry shader

Logically, the geometry shader is the last shader stage in the front end, running after the vertex and tessellation stages but before rasterization. The geometry shader runs once per primitive and has access to all of the input vertex data for the primitive being processed. It is also unique among the shader stages in that it can programmatically increase or decrease the amount of data flowing through the pipeline. Tessellation shaders can also increase or decrease the pipeline workload, but only implicitly, by setting the tessellation level of a patch; geometry shaders, by contrast, explicitly generate vertices to send on to primitive assembly and rasterization.
The geometry shader takes a complete primitive as input and outputs the primitives we have processed. We can create or destroy vertices inside the geometry shader, giving full control over the number and type of primitives output. Both the input and output primitives of a geometry shader can be points, lines, or triangles.

3-1. Application in Unity Shader

The code is as follows (example):

#pragma geometry geom

[maxvertexcount(1)]
void geom(point v2g input[1], inout PointStream<g2f> outstream)
{
    g2f o = (g2f)0;
    o.vertex = input[0].vertex;
    o.uv = input[0].uv;
    outstream.Append(o);
}

#pragma geometry geo
specifies the function name of the geometry shader.
[maxvertexcount(num)]
must be placed before the geometry shader function. It defines the maximum number of vertices the geometry shader may output; the actual number can vary from invocation to invocation but cannot exceed this value.
void geo(triangle v2g p[3], inout LineStream<g2f> stream) { }
is the geometry shader function; its return type is void.
triangle v2g p[3] is the input primitive; triangle indicates that the input primitive type is a triangle. The possible input primitive types are as follows.
point: a point list (1 vertex per input primitive)
line: a line list or line strip (2 vertices per input primitive)
triangle: a triangle list or triangle strip (3 vertices per input primitive)
lineadj: a line with adjacency (4 vertices per input primitive)
triangleadj: a triangle with adjacency (6 vertices per input primitive)
inout LineStream<g2f> stream is the output stream. inout is a keyword; LineStream indicates that the output primitive type is a line, and g2f is our custom struct passed from the geometry shader to the fragment shader. The available output stream types are as follows.
PointStream<T>: a stream of point primitives
LineStream<T>: a stream of line primitives
TriangleStream<T>: a stream of triangle primitives
For details, see
https://docs.microsoft.com/en-us/windows/win32/direct3dhlsl/dx-graphics-hlsl-geometry-shader#return-value

For example (a wireframe grid effect):

Shader "Unlit/Gemo3Shader"
{
    
    
    Properties
    {
    
    
        _MainTex ("Texture", 2D) = "white" {
    
    }
        _Color ("Color",Color) = (1,1,1,1)       
    }
    SubShader
    {
    
    
        Tags {
    
     "RenderType"="Opaque" }

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma geometry geom

            #include "UnityCG.cginc"

            struct a2v
            {
    
    
                float4 vertex : POSITION;               
                float2 uv : TEXCOORD0;
            };

            struct v2g
            {
    
    
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct g2f
            {
    
    
                float2 uv : TEXCOORD0;              
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed4 _Color;         

            v2g vert (a2v v)
            {
    
    
                 v2g o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            [maxvertexcount(3)]//只有顶点输出,那就定为1
            void geom(triangle v2g input[3],inout LineStream<g2f> output)
            {
    
    
               g2f o = (g2f)0;
                o.vertex = input[0].vertex;
                o.uv = input[0].uv;
                output.Append(o);

                o.vertex = input[1].vertex;
                o.uv = input[1].uv;
                output.Append(o);

                o.vertex = input[2].vertex;
                o.uv = input[2].uv;
                output.Append(o);
                output.RestartStrip();

            }

            fixed4 frag (g2f i) : SV_Target
            {
    
    
                // sample the texture
                fixed4 col = tex2D(_MainTex, i.uv);
                col *= _Color;
                return col;
            }
            ENDCG
        }
    }
}

4. Primitive assembly

After the front end of the pipeline has run (vertex shading, tessellation, and geometry shading), a fixed-function part of the pipeline performs a series of tasks that take the scene represented as vertices and convert it into a series of pixels to be colored and written to the screen. The first step in this process is primitive assembly, which groups vertices into lines and triangles. Points undergo primitive assembly too, but in their case it is trivial. Finally, the parts of each primitive determined to be potentially visible are sent to a fixed-function subsystem called the rasterizer. The rasterizer determines which pixels are covered by each primitive (point, line, or triangle) and sends those pixels to the next stage, fragment shading.

5. Clipping

Once vertices have been assembled into primitives, the primitives are clipped against the displayable region, which is usually a window or the screen.
While the output of the front end is in four-component homogeneous coordinates, clipping takes place in Cartesian coordinates. To convert from homogeneous to Cartesian coordinates, OpenGL performs the perspective division, dividing all four components of the position by the w component. This projects the vertices from homogeneous space into Cartesian space while leaving w at 1.0. The positions that result from the perspective division are in normalized device space. The visible region of normalized device space in OpenGL is the volume from -1.0 to 1.0 on the x and y axes and from 0.0 to 1.0 on the z axis. Any geometry inside this region can be visible to the user; everything outside it should be discarded. The six faces of this volume are formed by three-dimensional planes. Because a plane divides a coordinate space in two, the volumes on either side of a plane are called half-spaces.
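As a quick worked example (illustrative numbers): a clip-space vertex at $(x, y, z, w) = (2, 1, 4, 4)$ divides through by $w = 4$ to give the normalized device coordinates $(0.5, 0.25, 1.0)$, which sits inside the visible volume in x and y and exactly on its far boundary in z.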
OpenGL performs clipping by determining, for each vertex of a primitive, which side of each plane it lies on before passing the primitive to the next stage. Each plane has an inside and an outside. If all of a primitive's vertices lie outside a plane, the primitive is discarded; if they all lie inside every plane, the primitive is passed on unchanged. Primitives that are only partially visible need special handling. As for clipping algorithms, the two-dimensional case of screen clipping will be introduced later, and three-dimensional clipping can be extended from it, so it is not expanded on here.

6. Viewport transformation

After clipping, the x and y coordinates of every vertex lie in the range -1.0 to 1.0 and the z coordinate lies in the range 0.0 to 1.0; these are known as normalized device coordinates. The window we draw into, however, usually spans (0, 0) to (w - 1, h - 1), where w and h are the window's width and height in pixels. The process of mapping normalized device coordinates to window coordinates is called the viewport transformation.
The transformation takes the following form:

$$
\begin{pmatrix} x_w \\ y_w \\ z_w \end{pmatrix} =
\begin{pmatrix} \frac{p_x}{2} x_d + o_x \\ \frac{p_y}{2} y_d + o_y \\ \frac{f - n}{2} z_d + o_z \end{pmatrix}
$$
where $x_w$, $y_w$, $z_w$ are the resulting window-space coordinates of the vertex; $x_d$, $y_d$, $z_d$ are its incoming normalized device coordinates; $p_x$ and $p_y$ are the viewport width and height in pixels; $n$ and $f$ are the near- and far-plane distances on the z axis; and $o_x$, $o_y$, $o_z$ represent the origin of the viewport.
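For instance (an illustrative calculation), with an 800 x 600 pixel viewport whose origin sits at its center, $(o_x, o_y) = (400, 300)$, a vertex at normalized device coordinates $(x_d, y_d) = (0.5, 0)$ lands at window coordinates $x_w = \frac{800}{2} \cdot 0.5 + 400 = 600$ and $y_w = \frac{600}{2} \cdot 0 + 300 = 300$.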

7. Culling

Triangles can optionally pass through a stage called culling before further processing. This stage determines whether a triangle faces toward or away from the viewer and, based on that, decides whether to actually draw it. A triangle facing the viewer is considered front-facing; otherwise it is back-facing. Back-facing triangles are usually discarded because, when an object is closed, every back-facing triangle is hidden by other triangles.
To determine whether a triangle is front- or back-facing, OpenGL computes its signed area in window space. One way to determine the facing of a triangle is to take the cross product of two of its edges: if the result is positive the triangle is considered front-facing, otherwise it is back-facing.
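
OpenGL does this in fixed-function hardware, but the computation itself is easy to express. A hedged sketch in the HLSL style used above (the function name is illustrative):

// Signed (doubled) area of a triangle from its window-space positions;
// the sign encodes the winding order as seen by the viewer.
float signedArea(float2 p0, float2 p1, float2 p2)
{
    // z component of the cross product of edges (p1 - p0) and (p2 - p0)
    return (p1.x - p0.x) * (p2.y - p0.y) - (p2.x - p0.x) * (p1.y - p0.y);
}

With counter-clockwise winding treated as front-facing (OpenGL's default), a positive result means the triangle faces the viewer and a negative one means it can be culled.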

8. Rasterization

Rasterization is the process of determining which fragments might be covered by a primitive such as a line or a triangle. OpenGL computes a bounding box for the triangle in window coordinates and tests every fragment inside it to see whether it lies inside or outside the triangle. To do this, OpenGL treats each of the triangle's three edges as a half-space dividing the window in two. Fragments that lie on the interior side of all three edges are inside the triangle; fragments that lie outside any edge are outside it.
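
Again this runs in fixed-function hardware, but the half-space test can be sketched as follows (illustrative function names; counter-clockwise winding assumed):

// Edge function: positive when p lies on the interior side of the edge a->b.
float edgeFunction(float2 a, float2 b, float2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// A fragment at p is covered when it lies inside all three half-spaces.
bool insideTriangle(float2 p, float2 v0, float2 v1, float2 v2)
{
    return edgeFunction(v0, v1, p) >= 0 &&
           edgeFunction(v1, v2, p) >= 0 &&
           edgeFunction(v2, v0, p) >= 0;
}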

9. Fragment shader

The fragment shader is the last programmable stage in the OpenGL graphics pipeline. It is responsible for determining the color of each fragment; the fragments are then sent on to the framebuffer for composition into the window. After the rasterizer processes a primitive, it produces a list of fragments that need coloring and passes that list to the fragment shader. At this point the pipeline's workload explodes: each triangle may generate hundreds, thousands, or even millions of fragments.
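
A minimal sketch of this stage in the style of the frag function from the grid-effect example above (the g2f struct is the one defined there); this version simply visualizes the interpolated UVs:

// Runs once per fragment produced by the rasterizer and returns its color.
fixed4 frag(g2f i) : SV_Target
{
    // Map the interpolated texture coordinates to a red/green gradient.
    return fixed4(i.uv, 0, 1);
}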

Summary

The above is the first part of this overview of graphics pipeline concepts. The second article will cover the framebuffer operations that follow (that is, the per-fragment operations); that material involves a number of algorithms and runs fairly long, which is why I chose to split it into two parts.

References

OpenGL SuperBible
Unity Shader Essentials (《Unity Shader入门精要》)
