Study notes (3): graphics rendering

Graphics rendering notes:

Rendering is generally divided into offline rendering and real-time rendering. We use real-time rendering in games.

1. Real-time rendering can be divided into 2D rendering and 3D rendering.
In early 2D rendering, images were drawn by copying one block of memory (the image's pixel data) into another (the display buffer).

Today, however, much 2D rendering is done through the 3D pipeline. The reason is that current graphics-card architectures favor parallel processing: drawing is faster, various 3D effects can be added easily, and image resources and memory are saved.

In brief, 3D rendering takes the data of all the points of a model in a space, transforms them onto the two-dimensional screen, and then draws them on screen according to various parameters and data.

Almost all games now use the 3D rendering process.

2. The basic unit in 3D rendering is the vertex. Vertex data includes the following:

  • Position (relative to the model)
  • Color (optional; it can also be processed later)
  • Normal (mainly used in lighting calculations, e.g. to know which side faces away from the light)
  • Bone weights (a vertex may follow the movement of multiple bones)
  • Texture UV

3. Mipmaps are generally generated automatically, and distant characters automatically use the smaller textures, which avoids unnecessary overhead. When mipmaps are generated, colors can bleed together at the edges, so leave a few pixels of padding wherever one texture region differs sharply from the next.
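
As an illustration of automatic mipmap generation, here is a minimal sketch that builds a mip chain by averaging 2x2 blocks of texels (a box filter). It assumes a square, power-of-two RGBA8 texture; real engines and drivers do this for you.

```cpp
#include <vector>

struct Image {
    int size;                          // width == height, assumed power of two
    std::vector<unsigned char> rgba;   // size * size * 4 bytes (RGBA8)
};

// Average each 2x2 block of source texels into one texel of the next level.
Image downsample(const Image& src) {
    Image dst;
    dst.size = src.size / 2;
    dst.rgba.resize(dst.size * dst.size * 4);
    for (int y = 0; y < dst.size; ++y)
        for (int x = 0; x < dst.size; ++x)
            for (int c = 0; c < 4; ++c) {
                int sum = 0;
                for (int dy = 0; dy < 2; ++dy)
                    for (int dx = 0; dx < 2; ++dx)
                        sum += src.rgba[((2 * y + dy) * src.size + (2 * x + dx)) * 4 + c];
                dst.rgba[(y * dst.size + x) * 4 + c] = static_cast<unsigned char>(sum / 4);
            }
    return dst;
}

// Level 0 is the original image; each further level halves the side length.
std::vector<Image> buildMipChain(Image level0) {
    std::vector<Image> chain;
    chain.push_back(std::move(level0));
    while (chain.back().size > 1)
        chain.push_back(downsample(chain.back()));
    return chain;
}
```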

4. DirectX (DX) is Microsoft's graphics programming API; its shading language is the High Level Shading Language (HLSL), and it can only be used on Windows.
OpenGL (Open Graphics Library) is cross-language and cross-platform; its shading language is the OpenGL Shading Language (GLSL).

5. Both DX8 and OpenGL 2.0 support the programmable rendering pipeline, i.e. the vertex shader and the pixel shader. Later versions also support the geometry shader.

6. ES 2.0 pipeline process
Vertex buffer data → Vertex Shader → Primitive Assembly → [Geometry Shader, if any] → Rasterization → Fragment Shader (i.e. Pixel Shader) → Per-Fragment Operations → FrameBuffer

7. A brief description of the ES 2.0 pipeline
Vertex Shader
Vertex transformations: translation, rotation, scaling, coordinate-system changes, projection;
lighting calculations, normal transformation and normalization;
texture-coordinate transformations: UV modification, offset, scaling, etc.

Primitive Assembly
Three primitives are supported: points, lines, and triangles.
Triangles are assembled into triangle faces from groups of three vertices.
Clipping and culling are performed here: you can choose to cull both faces, only front faces, or only back faces (some transparent objects are rendered without culling).

Geometry Shader
Processes a set of vertices that already form a primitive, and can change the type and number of primitives.

Rasterization
Turns a vector triangle into a bitmap image (filled pixels); the per-vertex data, such as color, is automatically interpolated across those pixels.
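
A minimal sketch of that interpolation, assuming 2D screen-space vertices that each carry an RGB color: pixels whose centers fall inside the triangle get a color blended by barycentric weights. This only illustrates the idea; it is not how a GPU actually implements rasterization.

```cpp
#include <cstdio>

struct Vert { float x, y, r, g, b; };   // screen position + vertex color

// Signed edge function: positive on one side of edge (a, b), negative on the other.
float edge(const Vert& a, const Vert& b, float px, float py) {
    return (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
}

void rasterize(const Vert& v0, const Vert& v1, const Vert& v2, int w, int h) {
    float area = edge(v0, v1, v2.x, v2.y);           // twice the signed triangle area
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float px = x + 0.5f, py = y + 0.5f;      // sample at the pixel center
            float w0 = edge(v1, v2, px, py) / area;  // barycentric weights
            float w1 = edge(v2, v0, px, py) / area;
            float w2 = edge(v0, v1, px, py) / area;
            if (w0 < 0 || w1 < 0 || w2 < 0) continue;  // pixel is outside the triangle
            float r = w0 * v0.r + w1 * v1.r + w2 * v2.r;  // interpolated color
            float g = w0 * v0.g + w1 * v1.g + w2 * v2.g;
            float b = w0 * v0.b + w1 * v1.b + w2 * v2.b;
            std::printf("pixel (%d,%d) = (%.2f, %.2f, %.2f)\n", x, y, r, g, b);
        }
}
```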

Fragment (Pixel) Shader
Receives the fragment information, i.e. each pixel's depth, color, and so on. The color can be modified, or the pixel's depth (its z-buffer value) can be changed.
A pixel shader alone cannot produce very complex effects, because it operates on a single pixel without knowing the geometry of the scene.

Per-Fragment Operations
Fragment tests that filter out fragments which are not needed. The specific tests are as follows (a combined depth-test-and-blend sketch follows this list):

  • Pixel ownership test: checks whether the pixel is visible to the user, i.e. not covered by another window
  • Scissor test: checks whether the fragment lies inside the defined scissor (clipping) rectangle
  • Stencil test: decides whether the pixel's color value should be written to the render target. It is a bit like a mask in Photoshop: like covering part of a table with a pot, the covered part is not rendered
  • Depth test: compares the distance from the camera's near clipping plane and discards fragments that are deeper (hidden behind closer ones)
  • Blending: blends colors for special materials (such as translucent objects like glass)
  • Dithering: uses a small number of colors to suggest a wider range of colors; see https://en.wikipedia.org/wiki/Dither
  • FrameBuffer: receives the final rendered data
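
A minimal sketch of the depth test followed by alpha blending for a single fragment, assuming a plain float depth buffer and an RGBA color buffer; the "closer wins" test and the source-alpha blend are just common defaults used for illustration.

```cpp
#include <cstddef>

struct Color { float r, g, b, a; };

// Writes the fragment only if it is closer than what is already stored,
// then blends it over the existing color: out = a*src + (1-a)*dst.
bool shadeFragment(std::size_t index, float fragDepth, Color src,
                   float* depthBuffer, Color* colorBuffer) {
    if (fragDepth >= depthBuffer[index])
        return false;                        // depth test failed: fragment discarded
    depthBuffer[index] = fragDepth;          // depth write
    Color dst = colorBuffer[index];
    Color out;
    out.r = src.a * src.r + (1.0f - src.a) * dst.r;
    out.g = src.a * src.g + (1.0f - src.a) * dst.g;
    out.b = src.a * src.b + (1.0f - src.a) * dst.b;
    out.a = 1.0f;
    colorBuffer[index] = out;
    return true;
}
```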

8. The basic principle of HDR
High-dynamic-range (HDR) images provide greater color depth, a wider dynamic range, and stronger color expression, and are often used to adjust exposure.

**Basic principle:** In reality the ratio between the brightness of the brightest object and that of the darkest is about 10^8, while the naked eye can only distinguish a brightness range of roughly 10^5, and a display shows only 256 brightness levels.
So the problem is that the brightness span of a real scene is far larger than what our display devices can reproduce (and the human eye's sensitivity also varies with brightness; see gamma correction, https://en.wikipedia.org/wiki/Gamma_correction), so a set of corrections is needed to map the wide range onto the display. That correction system can be loosely understood as HDR.
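
As one way to picture that mapping, here is a minimal sketch that converts an HDR luminance value into a displayable 0-255 level using Reinhard tone mapping plus gamma correction; the exposure parameter and the gamma of 2.2 are illustrative assumptions, not something fixed by these notes.

```cpp
#include <algorithm>
#include <cmath>

unsigned char hdrToDisplay(float hdrLuminance, float exposure = 1.0f) {
    float v = hdrLuminance * exposure;
    float mapped = v / (1.0f + v);                         // Reinhard: [0, inf) -> [0, 1)
    float encoded = std::pow(mapped, 1.0f / 2.2f);         // gamma-encode for the display
    return static_cast<unsigned char>(std::clamp(encoded * 255.0f, 0.0f, 255.0f));
}
```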

9. Why is a character's clothing rendered in several passes?
Because the material may differ from one part to another, and the vertex-shader processing in the pipeline may differ as well, so those parts have to be drawn separately.

10. Common rendering effects

  • **Global illumination:** the core problem is how to represent the mutual reflection between objects well; the most direct issue is finding a more reasonable replacement for the ambient (ambient light) term in local lighting. There are multiple implementations, such as radiosity, ray tracing, ambient occlusion, light probes, etc.
  • **Shadow:** the most popular techniques are shadow mapping and shadow volumes.
  • The basic principle of shadow mapping is to render the scene's depth into a depth buffer; from it we can tell which parts of the scene are in shadow and which are lit, and this depth map is then used during rendering (see the sketch after this list).
  • The basic principle of shadow volumes is to compute the shadow volume in the scene from the positional relationship between the light source and the occluder, and then test every object to determine whether it falls inside the shadow.
  • **Distortion:** distort the UVs of a range of pixels.
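
A minimal sketch of the shadow-map comparison mentioned above: a point already transformed into the light's clip space is compared against the depth stored in the light's depth map. The map layout, the bias value, and the helper names are illustrative assumptions rather than any particular engine's API.

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// lightSpacePos is assumed to be the fragment's position already transformed by
// the light's view-projection matrix and divided by w (x, y, z in [-1, 1]).
bool inShadow(Vec3 lightSpacePos,
              const float* shadowMap, int mapSize,
              float bias = 0.002f) {
    // Map from normalized device coordinates [-1, 1] to texture space [0, 1].
    float u = lightSpacePos.x * 0.5f + 0.5f;
    float v = lightSpacePos.y * 0.5f + 0.5f;
    float depth = lightSpacePos.z * 0.5f + 0.5f;
    int x = std::clamp(int(u * mapSize), 0, mapSize - 1);
    int y = std::clamp(int(v * mapSize), 0, mapSize - 1);
    float closest = shadowMap[y * mapSize + x];   // depth of the nearest occluder
    // If something closer to the light was recorded there, this point is shadowed.
    return depth - bias > closest;
}
```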

11. Post-processing
Post-processing is the treatment of the image after rendering is complete. It operates on the finished pixels and is not part of the rendering pipeline itself.

  • **AO:** ambient occlusion, which depicts how surrounding diffuse light is blocked where objects intersect or come close together. The basic principle is that where the intersecting surfaces differ in depth, the deeper positions are darkened. SSAO (screen-space ambient occlusion) is what games commonly use now.
  • **Blur:** Gaussian blur, radial blur, etc. The basic principle is to average (blend) the colors of the pixels within a certain range (see the sketch after this list).
  • **Depth of field:** blur the pixels whose Z value exceeds a certain threshold.
  • **Glow (light bleeding):** the color at a point spreads into the nearby screen space. This can be done by blurring the image first and then alpha-blending the blurred image with the original.
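
A minimal sketch of the blur idea for one image row, assuming a single-channel float image: a normalized Gaussian kernel weights the neighbors and the results are summed. A real post-process blur would run a pass like this horizontally and then vertically over the whole frame.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel of width 2 * radius + 1.
std::vector<float> gaussianKernel(int radius, float sigma) {
    std::vector<float> k(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        k[i + radius] = std::exp(-(i * i) / (2.0f * sigma * sigma));
        sum += k[i + radius];
    }
    for (float& w : k) w /= sum;              // weights now sum to 1
    return k;
}

// Blur one row of pixels; edges are handled by clamping the sample index.
std::vector<float> blurRow(const std::vector<float>& row, int radius, float sigma) {
    std::vector<float> kernel = gaussianKernel(radius, sigma);
    std::vector<float> out(row.size(), 0.0f);
    for (int x = 0; x < int(row.size()); ++x)
        for (int i = -radius; i <= radius; ++i) {
            int sx = std::min(std::max(x + i, 0), int(row.size()) - 1);
            out[x] += row[sx] * kernel[i + radius];
        }
    return out;
}
```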

12. What is the difference between a material, a map, and a texture?


Notes from the shader lessons:

The first lesson:
1. Shaders come in two types: the vertex shader and the pixel shader.

2. In 3D rendering, drawing a textured image is actually similar to ordinary 3D rendering, except that the geometry consists of four fixed points on a plane, and the texture information of the map is fetched for rendering.
Background: in traditional 2D games, the picture is drawn by copying one block of memory (the image) into another (the display buffer), and rendering is accelerated by the CPU's multimedia instructions. For example, a 2D character animation is made by preparing multiple sequence-frame images and playing them in a loop, so many image resources have to be kept, which means long loading times and high memory usage.
Direct2D in DX is an extra layer of encapsulation on top of Direct3D. Through Direct3D it gains direct access to the underlying hardware, and its interface simplifies the complex code needed to achieve 2D effects directly with Direct3D, so its efficiency can be considered the same as 3D while being simpler to use.
Q: Does the rendering of our common UI use the 3D rendering pipeline? Yes.

3. Q: Will the VS function be executed if no shader is written? What is the relationship between the vertex processing in the default pipeline and the shader?
A: The shader can be omitted; the rendering pipeline then produces output normally, just like the traditional fixed pipeline. Since DX8 and OpenGL 2.0 there is a programmable rendering pipeline, which adds the VS and PS stages to the pipeline.

4. Q: What operations does the default vertex shader perform?
A: The VS handles vertex transformation by default. It can be thought of simply as converting vertex coordinates in the space into vertex coordinates in the camera's screen space, which involves multiple transformations.

5. Shader functions are executed by the GPU, and the function runs for every vertex in parallel.

6. Q: Vertices need to be assembled into triangles. How should primitive assembly be understood, and at which step does it happen?
A: Primitive assembly builds the basic primitives that the rendering pipeline can process out of a certain number of vertices. It happens after vertex processing.

7. The triangle is the smallest rasterization unit. Rasterization can be understood simply as turning vertex data into a bitmap, i.e. converting from vertices to pixels and determining how many pixels a triangle covers on the screen. During rasterization the pixels have no color yet; the color information has to be copied from the texture.

8. The PS function processes pixels and is likewise executed in parallel.

9. Note the two kinds of projection: perspective projection and orthographic projection.
Perspective projection can be understood as continuously squeezing a view frustum, which gives the near-large, far-small effect. With orthographic projection, an object appears the same size no matter where it is viewed from.
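
A minimal sketch of building a perspective projection matrix, assuming OpenGL-style conventions (column-major storage, right-handed view space, NDC depth in [-1, 1]); this follows the classic gluPerspective formulation and is shown only for illustration.

```cpp
#include <cmath>

struct Mat4 { float m[16]; };   // column-major 4x4 matrix

Mat4 perspective(float fovY, float aspect, float zNear, float zFar) {
    float f = 1.0f / std::tan(fovY * 0.5f);   // cotangent of half the vertical FOV (radians)
    Mat4 p = {};                              // all zeros
    p.m[0]  = f / aspect;                     // x scale
    p.m[5]  = f;                              // y scale
    p.m[10] = (zFar + zNear) / (zNear - zFar);          // remap depth into NDC
    p.m[11] = -1.0f;                          // copies -z into w for the perspective divide
    p.m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
    return p;
}
```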

10. About the depth map
The depth map is made up of Z values, and outlines can be seen clearly in it. The reason is that positions with a small Z value are close (shallow), while at an object's edge the neighboring objects have much larger Z values. Z values lie in the range (0-1).

11. Applications of the Z (depth) value

  • a. Depth-of-field effect: when the Z value exceeds a certain threshold, blur the pixel
  • b. SSAO, screen-space ambient occlusion: where the depth of the current position differs from the depth of nearby pixels, the deeper position is darkened

The second lesson:
1. Render texture (off-screen rendering)
The rendering result does not have to go to the screen; it can also be rendered to a texture. Unreal has a RenderTarget, and the principle is the same.

2. Vertex processing: MVP

  • Modeling transformation (the model matrix converts the model's local coordinates into world coordinates: translation, rotation, scaling)
  • View transformation (converts a point from world coordinates into the camera's coordinate system)
  • Projection transformation (the "squeezing" step; more complicated)

3. The simplest transformation in a shader is MVP: it takes the input vertices in the model's local coordinates and, after the MVP transformation, outputs the corresponding two-dimensional screen-space coordinates (the raw output has more components, but apart from the two screen dimensions the others can be treated as auxiliary).
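
A minimal sketch of that path, assuming column-major 4x4 matrices and that the combined MVP matrix is built elsewhere: the vertex goes from model space to clip space, then through the perspective divide to normalized device coordinates, then to pixel coordinates.

```cpp
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };     // column-major

Vec4 mul(const Mat4& a, const Vec4& v) {
    return {
        a.m[0] * v.x + a.m[4] * v.y + a.m[8]  * v.z + a.m[12] * v.w,
        a.m[1] * v.x + a.m[5] * v.y + a.m[9]  * v.z + a.m[13] * v.w,
        a.m[2] * v.x + a.m[6] * v.y + a.m[10] * v.z + a.m[14] * v.w,
        a.m[3] * v.x + a.m[7] * v.y + a.m[11] * v.z + a.m[15] * v.w,
    };
}

// mvp = projection * view * model, already combined by the caller.
void toScreen(const Mat4& mvp, Vec4 modelPos, int width, int height,
              float& outX, float& outY) {
    Vec4 clip = mul(mvp, modelPos);                 // what a vertex shader outputs
    float ndcX = clip.x / clip.w;                   // perspective divide -> [-1, 1]
    float ndcY = clip.y / clip.w;
    outX = (ndcX * 0.5f + 0.5f) * width;            // viewport transform -> pixels
    outY = (1.0f - (ndcY * 0.5f + 0.5f)) * height;  // y flipped: screen origin at top-left
}
```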

4. Before DX9, the rendering pipeline was almost entirely fixed, and we could not use shaders to take part in the intermediate steps of rendering.

5. For some vertices we can directly return fixed screen coordinates, which achieves a UI-like effect.

6. The VS function can be used to process skinned (bone) animation, transforming vertices for rendering according to their bone weights.
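
A minimal sketch of that weighting, in the form of linear blend skinning as a vertex shader would do it; the four-influences-per-vertex limit and the bone-matrix layout are common conventions assumed here for illustration.

```cpp
struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };    // column-major; translation in m[12..14]

// Transform a point by a 4x4 matrix (w assumed to be 1).
Vec3 transformPoint(const Mat4& a, Vec3 p) {
    return {
        a.m[0] * p.x + a.m[4] * p.y + a.m[8]  * p.z + a.m[12],
        a.m[1] * p.x + a.m[5] * p.y + a.m[9]  * p.z + a.m[13],
        a.m[2] * p.x + a.m[6] * p.y + a.m[10] * p.z + a.m[14],
    };
}

// boneMatrices[i] is assumed to already combine the bone's current pose with
// its inverse bind pose; the four weights are expected to sum to 1.
Vec3 skinVertex(Vec3 position, const int boneIndex[4], const float weight[4],
                const Mat4* boneMatrices) {
    Vec3 result{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < 4; ++i) {
        Vec3 p = transformPoint(boneMatrices[boneIndex[i]], position);
        result.x += weight[i] * p.x;    // blend the per-bone results by weight
        result.y += weight[i] * p.y;
        result.z += weight[i] * p.z;
    }
    return result;
}
```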

7. The CPU and GPU have different architectures, and neither can directly access the other's memory (system memory versus video memory).

8. At the start the CPU gets the model's vertices and other data, which need to be uploaded to the GPU before the GPU can process them. This upload cannot be performed frequently, because there is a lot of vertex data; that is why, although we see the model change during rendering, its uploaded vertex data has generally not changed (the movement comes from transformations).

The third lesson:
1. The principle of the billboard:
The effect is that an object always faces the camera. The principle is to do the MV processing first, forcing the billboard to lie flat along the camera's view plane before projection so that the image always faces the camera, and then do the projection (P) step.
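
A minimal sketch of one common way to realize this, assuming column-major matrices: after the model-view step, the rotation part of the matrix is replaced with the identity so the quad stays parallel to the camera plane, and the projection is applied as usual. This illustrates the idea in the note and is not any specific engine's API.

```cpp
struct Mat4 { float m[16]; };    // column-major

// Keep the translation (column 3) but reset the 3x3 rotation/scale block so the
// billboard is axis-aligned in camera space, i.e. always facing the camera.
Mat4 billboardModelView(Mat4 modelView) {
    for (int col = 0; col < 3; ++col)
        for (int row = 0; row < 3; ++row)
            modelView.m[col * 4 + row] = (col == row) ? 1.0f : 0.0f;
    return modelView;
}
```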

2. The parameters passed into a shader can be specified by yourself.

3. Primitive assembly details
A VertexBuffer is used as the vertex buffer and an IndexBuffer as the index buffer. In the index buffer, every three indices correspond to one triangle (the winding order of the vertices cannot be reversed).
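
A minimal sketch of the two buffers for a textured quad, with an illustrative vertex layout and values: four vertices are shared by two triangles whose corners are listed in the index buffer, keeping a consistent (here counter-clockwise) winding order.

```cpp
struct Vertex { float x, y, z; float u, v; };   // position + texture UV

// Four corners of a unit quad in the XY plane.
const Vertex vertexBuffer[4] = {
    {-0.5f, -0.5f, 0.0f, 0.0f, 0.0f},   // bottom-left
    { 0.5f, -0.5f, 0.0f, 1.0f, 0.0f},   // bottom-right
    { 0.5f,  0.5f, 0.0f, 1.0f, 1.0f},   // top-right
    {-0.5f,  0.5f, 0.0f, 0.0f, 1.0f},   // top-left
};

// Every three indices form one triangle; the quad needs two of them.
const unsigned short indexBuffer[6] = {
    0, 1, 2,    // first triangle
    0, 2, 3,    // second triangle (shares two vertices with the first)
};
```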

4. Rasterization is a lossy process.
It turns a vector triangle into a bitmap image, and the vertex attributes, such as color, are automatically interpolated across the pixels.

5. UV
UV coordinates give, for each vertex we render, the corresponding position in the image. When rendering, we look up the color information in the image according to the current vertex's UV data.

6. Sampling
Sampling is a step of rasterization: fetching the color information from the corresponding position in the image according to the UV described above.

7. The output of the VS function is the input of the PS function, and the output of the PS function is a color value.

8. Since the number of vertices is limited, the UV information received by the PS has to be interpolated from the VS outputs.

9. FilterMode

  • Point sampling (samples only the single nearest texel)
  • Bilinear sampling (samples the 4 nearby texels and blends them; see the sketch after this list)
  • Trilinear sampling (bilinear twice: once on 4 texels of the larger mip level and once on 4 texels of the smaller one, then the two results are blended)
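
A minimal sketch of bilinear sampling on a single-channel float texture: the four texels around the UV position are fetched and blended by the fractional distances. Clamped addressing and texel-center alignment are assumptions made for illustration.

```cpp
#include <algorithm>
#include <cmath>

// Fetch one texel with clamp-to-edge addressing.
float texel(const float* tex, int w, int h, int x, int y) {
    x = std::clamp(x, 0, w - 1);
    y = std::clamp(y, 0, h - 1);
    return tex[y * w + x];
}

float sampleBilinear(const float* tex, int w, int h, float u, float v) {
    // Map UV in [0,1] to texel space, centering samples on texel centers.
    float x = u * w - 0.5f;
    float y = v * h - 0.5f;
    int x0 = int(std::floor(x)), y0 = int(std::floor(y));
    float fx = x - x0, fy = y - y0;            // fractional position inside the cell
    float c00 = texel(tex, w, h, x0,     y0);
    float c10 = texel(tex, w, h, x0 + 1, y0);
    float c01 = texel(tex, w, h, x0,     y0 + 1);
    float c11 = texel(tex, w, h, x0 + 1, y0 + 1);
    float top    = c00 + (c10 - c00) * fx;     // blend along x
    float bottom = c01 + (c11 - c01) * fx;
    return top + (bottom - top) * fy;          // blend along y
}
```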

10. Shader applications

  • Cropping an image
  • Mosaic (scale the UVs up, e.g. by 100, drop the fractional part, then scale back down, so a whole block of pixels repeats the same sample; see the sketch after this list)
  • Shadows (add a camera in the direction of the light source, render once first to record depth, and use that information in the final render)
  • Full-screen glow (can affect pixels outside the model; the current pixel adjusts the color of the other pixels around it)
  • UI processing: rendering can simply be discarded at certain positions, for example to implement a jigsaw-puzzle effect
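
A minimal sketch of the mosaic idea above: the UVs are quantized so every pixel inside a block samples the same texture position. The block count of 100 matches the note's example and is otherwise arbitrary.

```cpp
#include <cmath>

struct UV { float u, v; };

// Scale up, drop the fractional part, scale back down: all UVs inside a block
// collapse to that block's corner, which gives the pixelated (mosaic) look.
UV mosaicUV(UV uv, float blocks = 100.0f) {
    return { std::floor(uv.u * blocks) / blocks,
             std::floor(uv.v * blocks) / blocks };
}
```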

11. Reducing the amount of sampling helps improve performance.

Original link (please credit when reposting): http://blog.csdn.net/u012999985/article/details/79090524
