ShaderLab development practice - Shader basic concepts

The first chapter of the book introduces the basic concepts of Shader and its implementation languages. The content is brief and can be summarized in the following points.

1. A Shader runs on the GPU and performs shading, light and shadow calculations, and texture color rendering on three-dimensional objects.

2. Shader programming has developed from the early fixed pipeline to the programmable pipeline. In the programmable pipeline, Shaders are divided into vertex shaders and fragment shaders. The vertex shader transforms the vertices of mesh objects; after the hardware rasterizes the mesh, the fragment shader performs various tests on each fragment (a candidate pixel) and finally renders it into visible pixels.
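As a rough illustration of this division of labor (a plain-Python sketch, not real shader code): the vertex shader's core job is a matrix transform per vertex, and the fragment shader's core job is a per-fragment color computation. Both function names here are invented for illustration.

```python
# Conceptual sketch of the two shader stages, in plain Python.

def transform_vertex(mvp, v):
    """Vertex-shader-style work: multiply a 4x4 matrix by a homogeneous vertex (x, y, z, w)."""
    return tuple(sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4))

def shade_fragment(base_color, intensity):
    """Fragment-shader-style work: scale a base RGB color by a light intensity, clamped to [0, 1]."""
    return tuple(min(1.0, max(0.0, ch * intensity)) for ch in base_color)

# A simple translation matrix that moves a vertex by (1, 2, 3).
translate = [
    [1, 0, 0, 1],
    [0, 1, 0, 2],
    [0, 0, 1, 3],
    [0, 0, 0, 1],
]
print(transform_vertex(translate, (0.0, 0.0, 0.0, 1.0)))  # (1.0, 2.0, 3.0, 1.0)
print(shade_fragment((1.0, 0.5, 0.0), 0.5))               # (0.5, 0.25, 0.0)
```

In a real pipeline the matrix would be a full model-view-projection transform and the intensity would come from a lighting calculation, but the per-vertex / per-fragment split is the same.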

3. There are currently three Shader implementation languages: HLSL (High Level Shading Language) provided by Microsoft, GLSL (OpenGL Shading Language) provided by OpenGL, and Cg (C for graphics) provided by NVIDIA. Unity's shader programming support focuses on Cg.

The above is the entire content of the first chapter. There are several important knowledge points worth thinking about and expanding:

  1. Why is Shader aimed at 3D objects?
  2. DirectX and OpenGL
  3. The OpenGL rendering process
  4. What is the difference between the fixed pipeline and the programmable pipeline?
  5. Where to learn the Cg language?

The following analyzes and answers the five questions above one by one; the answers are based on information collected from the Internet and my own understanding.

1. Why is Shader aimed at 3D objects?

    The book's definition of Shader specifically mentions that a Shader performs coloring, light and shadow, and texture rendering for three-dimensional objects. There are actually two good explanations for this.

    First, rather than "three-dimensional objects", it is more accurate to say "objects in three-dimensional space". Such an object can be a 2D plane or a 3D model. In Unity3D, 3D objects, 2D objects, and UI are all processed and rendered through Shaders. This is not to say that 2D planes or UI cannot use Shaders; rather, all objects in 3D space are rendered and displayed through Shaders.

    Second, the main work of a Shader is to render objects with coloring, light and shadow, and texture. Compare two-dimensional and three-dimensional space: in two-dimensional space an object can be understood simply as a plane, like a sheet of paper. Compared with three-dimensional space, two-dimensional space obviously lacks variation in light and shadow, and light-and-shadow processing is the most critical part of a Shader. Presenting light and shadow requires a relative orientation between the light source and the object; if the object and the light source are always parallel, no change in texture or color can be seen, and the object itself may not even be visible, let alone processed by a Shader.

    So Shader mainly runs on the GPU to process three-dimensional objects.

2. DirectX and OpenGL?

     OpenGL is a professional 3D programming interface: a powerful, easy-to-call, low-level 3D graphics library. OpenGL is a hardware-independent software interface and can be ported across platforms such as Windows 95, Windows NT, Unix, Linux, MacOS, and OS/2. Software built on OpenGL therefore has good portability and can be used widely. In the field of professional high-end graphics, OpenGL is irreplaceable.

    OpenGL remains the only API capable of breaking Microsoft's full control of 3D graphics technology. Game developers are an independent-minded group, and many important developers still use OpenGL today, so hardware vendors are working to strengthen their support for it. Direct3D currently cannot support high-end graphics devices and professional applications; OpenGL dominates these areas.

    DirectX is an application programming interface (API). By function it can be divided into four parts: display, sound, input, and network.

    The display part contains DirectDraw (DDraw) and Direct3D (D3D). The former is mainly responsible for 2D image acceleration and covers many scenarios: playing MPG and DVD movies, viewing pictures, playing small games, and so on, all use DDraw. The latter is mainly responsible for displaying 3D effects, such as scenes and characters in CS or players in FIFA, all of which use DirectX's Direct3D.

    On the surface, D3D seems to support more functions than OpenGL. In fact, because D3D does not support hardware extensions, new techniques such as hardware panoramic shadows and hardware order-independent translucency cannot be used at all. Moreover, of the functions D3D (specifically D3D8) itself provides, only a small part can be emulated through the HAL when the hardware does not support them, so you have to write a lot of code to probe hardware capabilities and adopt different strategies.

3. OpenGL rendering process

    Before introducing the OpenGL rendering process in detail, it is recommended to first look up some GPU-related knowledge, which will help in understanding the OpenGL workflow.

    Here are several key stages in the GPU graphics processing process:

    Tessellation process

    The Tessellation process is how the GPU determines the structure of the geometric model. It can also change that structure, for example by moving vertex positions or adding new vertices; this is the basis of subdivision surfaces. A subdivision surface can be understood this way: add one vertex inside a triangle and connect it to the triangle's original vertices, turning one triangle into three, then adjust the position of the new vertex appropriately so the resulting surface becomes smoother. The following figure shows how Tessellation constructs the geometric model.

    (Figure: Tessellation constructing the geometric model, from the GPU Encyclopedia, episode 1.)
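The centroid-insertion idea described above can be sketched in a few lines of Python. This is a conceptual illustration only: real tessellation hardware operates on patches with tessellation factors, and the function name here is made up.

```python
def subdivide(tri):
    """Split one triangle into three by inserting its centroid as a new vertex."""
    a, b, c = tri
    centroid = tuple(sum(coords) / 3.0 for coords in zip(a, b, c))
    return [(a, b, centroid), (b, c, centroid), (c, a, centroid)]

tris = subdivide(((0.0, 0.0), (6.0, 0.0), (0.0, 6.0)))
print(len(tris))  # 3
```

Repeating this kind of split (and smoothing the new vertex positions) is what makes a coarse mesh look progressively rounder.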

    TMU unit

    In the GPU, the TMU unit is responsible for addressing and fetching textures from the texture library and video memory. Programmers pre-bake materials with basic color and shape characteristics; the TMU unit locates and fetches the appropriate material according to the needs of the geometric surface, forming the basic appearance of the object's surface. This lets the subsequent Shader units calculate colors correctly and reduces the load on the rest of the rendering process.

    (Figures: basic material operations and the material-grabbing process, from the GPU Encyclopedia, episode 1.)
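The TMU's texture addressing can be approximated in Python as a nearest-neighbor lookup with repeat (wrap) addressing. This is a simplified sketch of the concept, not actual hardware behavior, and the function name is invented.

```python
def sample_nearest(texture, u, v):
    """Fetch the texel nearest to (u, v); coordinates wrap around (repeat addressing)."""
    h, w = len(texture), len(texture[0])
    x = int(u % 1.0 * w) % w
    y = int(v % 1.0 * h) % h
    return texture[y][x]

# A 2x2 checkerboard texture: 0 = black, 255 = white.
checker = [[0, 255],
           [255, 0]]
print(sample_nearest(checker, 0.25, 0.25))  # 0
print(sample_nearest(checker, 0.75, 0.25))  # 255
print(sample_nearest(checker, 1.25, 0.25))  # 0  (u wraps past 1.0)
```

Real TMUs also perform bilinear/trilinear filtering between neighboring texels and mipmap levels; nearest-neighbor is the simplest case.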

    US unit

    The US (Unified Shader) unit of the GPU calculates the RGB value of each pixel according to the program's needs, so that light, shadow, and color are all expressed correctly in the end. After the ALUs in the US unit finish computing the RGB values, the results are passed to the ROP unit to be blended with the texture.

    (Figure: the ALU, from the GPU Encyclopedia, episode 1.)
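A minimal sketch of the kind of per-pixel RGB arithmetic an ALU performs: simple Lambert diffuse shading, chosen here purely as an illustrative example (the original text does not name a specific lighting model).

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert_rgb(albedo, normal, light_dir):
    """Per-pixel diffuse shading: RGB = albedo * max(0, N . L)."""
    intensity = max(0.0, dot(normal, light_dir))
    return tuple(c * intensity for c in albedo)

# Surface facing straight up, light shining straight down onto it.
print(lambert_rgb((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, 1)))   # (1.0, 0.0, 0.0)
# Light coming from behind the surface: intensity clamps to zero.
print(lambert_rgb((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, -1)))  # (0.0, 0.0, 0.0)
```

This also illustrates the point made in question 1: without a relative orientation between surface normal and light direction, there is no light-and-shadow variation to compute.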

    Deferred shading

    During GPU rendering, operations such as lighting can be moved from the traditional Pixel Shader front end to MRT (multiple render targets) at the back end of the pipeline, according to the needs of the program. This deferral changes the order of shader operations, postponing shading until just before blending.

    (Figure: the deferred shading process, from the GPU Encyclopedia, episode 1.)
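The two-pass idea behind deferred shading can be sketched in Python: a geometry pass stores per-pixel surface attributes in a G-buffer, then a lighting pass shades each pixel exactly once, just before blending. The data layout (tuples of pixel, depth, albedo, N.L) is invented for illustration.

```python
def geometry_pass(fragments):
    """Pass 1: keep only the nearest fragment per pixel; store its attributes, no lighting yet."""
    gbuffer = {}
    for pixel, depth, albedo, n_dot_l in fragments:
        if pixel not in gbuffer or depth < gbuffer[pixel][0]:
            gbuffer[pixel] = (depth, albedo, n_dot_l)
    return gbuffer

def lighting_pass(gbuffer):
    """Pass 2: shade each surviving pixel exactly once from the stored attributes."""
    return {pixel: tuple(c * max(0.0, ndl) for c in albedo)
            for pixel, (depth, albedo, ndl) in gbuffer.items()}

frags = [
    ((0, 0), 0.8, (1.0, 1.0, 1.0), 0.5),  # farther fragment
    ((0, 0), 0.2, (1.0, 0.0, 0.0), 1.0),  # nearer fragment wins the pixel
]
print(lighting_pass(geometry_pass(frags)))  # {(0, 0): (1.0, 0.0, 0.0)}
```

The benefit is that lighting cost depends on visible pixels, not on how many overlapping triangles were drawn.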

    ROP unit

    The ROP unit performs the last step of GPU image processing: all pixels are filled into the render target, which finally takes on its correct appearance. Concretely, the ROP blends the RGB values calculated by the ALUs with the texels fetched by the TMU unit and outputs the result.

    (Figure: the shader and texture mixing process, from the GPU Encyclopedia, episode 1.)
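The ROP's final mix can be approximated as modulating the shader result by the texel and then alpha-blending over the framebuffer. The exact operations are configurable in real hardware, so treat this as one representative combination, with an invented function name.

```python
def rop_blend(shaded_rgb, texel_rgb, dst_rgb, alpha):
    """Modulate the shader result by the texel, then alpha-blend over the framebuffer color."""
    src = tuple(s * t for s, t in zip(shaded_rgb, texel_rgb))
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst_rgb))

# Full white light on a red texel, drawn at 50% opacity over a black background.
print(rop_blend((1.0, 1.0, 1.0), (1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 0.5))
# (0.5, 0.0, 0.0)
```

The second term, weighted by (1 - alpha), is what makes translucent objects show the background through them.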

    I will not go further into the GPU's graphics processing here; if you are interested, you can look it up yourself in the GPU Encyclopedia article mentioned above.

    Returning to the topic, let us introduce the rendering pipeline in OpenGL.

    The term "pipeline" describes the entire OpenGL rendering process. OpenGL adopts the client-server (CS) model: the client is the CPU and the server is the GPU. The client sends vertex information and texture information to the server, and the server's output is the image shown on the display.

    (Two figures in the original article describe the rendering pipeline in OpenGL; its stages are as follows.)

    Vertex Processing is OpenGL's vertex processing stage, responsible for the vertex pipeline. It mainly converts the vertices of 3D objects into screen-space coordinates, through vertex data processing, model-view transformation, projection transformation, clipping, perspective division, and the viewport transformation, finally outputting screen coordinates.

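The last two steps of that chain, perspective division and the viewport transformation, can be written out concretely (a sketch assuming an 800x600 viewport; the z coordinate is ignored here for brevity).

```python
def to_screen(clip_pos, width, height):
    """Perspective division followed by the viewport transformation."""
    x, y, z, w = clip_pos
    ndc_x, ndc_y = x / w, y / w          # normalized device coordinates in [-1, 1]
    screen_x = (ndc_x + 1.0) * 0.5 * width
    screen_y = (ndc_y + 1.0) * 0.5 * height
    return screen_x, screen_y

# A clip-space point at the center of view maps to the screen center.
print(to_screen((0.0, 0.0, 0.0, 1.0), 800, 600))  # (400.0, 300.0)
```

Dividing by w is what produces perspective foreshortening: distant points have larger w and so shrink toward the center.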

    Rasterization. After the vertex processing stage completes, the primitive becomes the basic unit, with position coordinates and attributes generated per primitive. Rasterization then uses the primitive's vertices to compute, by interpolation, the attributes of every pixel the primitive covers; this is OpenGL's rasterization process. It has two steps: first, determine which integer grid cells in window coordinates the primitive occupies; second, assign a color value and a depth value to each cell.
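The interpolation step can be sketched with barycentric coordinates, the standard way rasterizers weight per-vertex attributes across a triangle. This minimal Python version uses edge functions; the names are invented for illustration.

```python
def edge(a, b, p):
    """Twice the signed area of triangle (a, b, p); positive if p is left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def interpolate(tri, values, p):
    """Barycentric interpolation of per-vertex values at pixel p (None if p is outside)."""
    a, b, c = tri
    area = edge(a, b, c)
    w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
    if min(w0, w1, w2) < 0:            # pixel lies outside the triangle
        return None
    w0, w1, w2 = w0 / area, w1 / area, w2 / area
    return w0 * values[0] + w1 * values[1] + w2 * values[2]

tri = ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0))
# Per-vertex "brightness" values 0, 1, 1 blend smoothly inside the triangle.
print(interpolate(tri, (0.0, 1.0, 1.0), (1.0, 1.0)))  # 0.5
```

The same weights are reused for every attribute (color, texture coordinates, depth), which is why interpolation is cheap per pixel.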

    Fragment Processing. After rasterization maps primitives to screen pixels, the required fragments are generated. Fragment processing clips the fragments and computes each pixel's final color in combination with light, shadow, and texture color. Afterwards, the depth test and blending stages determine whether a pixel should be discarded; the surviving pixels are finally displayed on the screen.
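The depth test mentioned above can be sketched as the standard "LESS" comparison against a depth buffer (Python dictionaries stand in for GPU buffers here; the function name is invented).

```python
def depth_test_write(depth_buffer, color_buffer, pixel, depth, color):
    """Standard LESS depth test: keep the fragment only if it is nearer than what is stored."""
    if depth < depth_buffer.get(pixel, 1.0):   # 1.0 = far plane (cleared value)
        depth_buffer[pixel] = depth
        color_buffer[pixel] = color
        return True
    return False                               # fragment discarded

zbuf, cbuf = {}, {}
print(depth_test_write(zbuf, cbuf, (0, 0), 0.6, "blue"))  # True  (buffer was empty)
print(depth_test_write(zbuf, cbuf, (0, 0), 0.9, "red"))   # False (farther, discarded)
print(cbuf[(0, 0)])                                       # blue
```

This is why opaque geometry can be drawn in any order: farther fragments simply fail the test.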

Finally, a link is attached that introduces the principles of OpenGL in detail.

4. The difference between fixed pipeline and programmable pipeline

    The OpenGL rendering pipeline has evolved from a fixed pipeline to a programmable pipeline.

    The biggest difference between the fixed pipeline and the programmable pipeline is whether Shaders participate in OpenGL drawing.

    The programmable pipeline uses a vertex shader and a fragment shader to complete the OpenGL rendering process. The vertex shader reads vertex information from the CPU, including vertex positions, colors, and texture coordinates. The output vertices then enter the primitive assembly stage (Primitive Assembly), where they are assembled into primitives (triangles, lines, point sprites, and so on). Rasterization turns these into two-dimensional pixel fragments that can be drawn on the screen; the fragment shader then computes each pixel's final color in combination with lighting and other processing. After per-fragment operations, the resulting fragments are written into the framebuffer, finally producing the pixels on the screen.

    The fixed pipeline completes OpenGL drawing without using Shaders. The specific process can be seen in the following figure and will not be detailed here.

    

5. Reference learning of Cg language

    An introduction to the Cg language will be given in the next article.

    

    
