Unity Shader learning record (11) - how to achieve transparency effects

1 Two methods of transparency effect

Transparency is an effect often used in games. To achieve it in real-time rendering, we usually control the model's alpha channel (Alpha Channel) while rendering it. When transparency blending is enabled, each fragment carries, in addition to its color value and depth value, one more attribute: transparency.

When the transparency is 1, the pixel is completely opaque; when it is 0, the pixel is not displayed at all. In Unity, we usually use one of two methods to achieve transparency. The first is transparency test (Alpha Test); this method cannot actually produce a true translucent effect. The other is transparency blending (Alpha Blending).

Depth buffer:
The depth buffer (also called the z-buffer) exists in real-time rendering to solve the visibility problem: it decides which parts of which objects are rendered in front, and which parts are occluded by other objects. Its basic idea is to judge a fragment's distance from the camera using the values stored in the depth buffer. When rendering a fragment, we compare its depth value with the value already in the depth buffer (if depth testing is enabled). If the fragment is farther from the camera, it should not be drawn to the screen (some object occludes it); otherwise, the fragment should overwrite the pixel value currently in the color buffer, and its depth value should be written into the depth buffer (if depth writing is enabled).
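In Unity's ShaderLab, depth testing and depth writing are controlled per Pass with the `ZTest` and `ZWrite` render states. A minimal sketch of the default opaque setup (the shader name is illustrative; the vertex/fragment program is omitted):

```ShaderLab
Shader "Custom/OpaqueDepthExample" {
    SubShader {
        Pass {
            ZTest LEqual   // pass the depth test when the fragment is at least as close as the stored depth
            ZWrite On      // write the surviving fragment's depth into the depth buffer
            // ... vertex/fragment program of an ordinary opaque object ...
        }
    }
}
```

These are also the implicit defaults, which is why ordinary opaque shaders rarely state them explicitly.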

Transparency test: it uses an "all-or-nothing" mechanism. If a fragment's transparency does not meet the condition (usually, it is less than a certain threshold), the fragment is simply discarded. A discarded fragment undergoes no further processing and has no effect on the color buffer; otherwise, it is handled exactly like the fragment of an ordinary opaque object, i.e., it goes through depth testing, depth writing, and so on. In other words, transparency testing does not require turning off depth writing; its biggest difference from rendering opaque objects is that it discards some fragments based on transparency. Although simple, the effect it produces is also extreme: a fragment is either completely transparent, i.e., invisible, or completely opaque, just like an opaque object.
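In a Unity shader this discard is typically done with the CG `clip` function, which throws away the fragment when its argument is negative. A sketch of the fragment program only (the `_MainTex`/`_Cutoff` property names and the `v2f` struct are the usual convention, assumed declared elsewhere):

```ShaderLab
// Fragment program of an alpha-test shader.
// clip(x) discards the fragment when x < 0, i.e. when texColor.a < _Cutoff.
fixed4 frag(v2f i) : SV_Target {
    fixed4 texColor = tex2D(_MainTex, i.uv);
    clip(texColor.a - _Cutoff);   // fragments below the threshold are discarded
    return texColor;              // surviving fragments are shaded like opaque ones
}
```

Such shaders are normally placed in the "AlphaTest" render queue so they are drawn after ordinary opaque geometry.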

Transparency blending: this method can produce a truly translucent effect. It uses the transparency of the current fragment as a blend factor and blends it with the color value already stored in the color buffer to obtain a new color. However, transparency blending requires turning off depth writing (we will discuss why below), which forces us to be very careful about the order in which objects are rendered. Note that transparency blending only turns off depth writing, not depth testing. This means that when a fragment is rendered with transparency blending, its depth value is still compared with the depth value currently in the depth buffer; if the fragment is farther from the camera, no blending operation is performed at all. This guarantees that when an opaque object is in front of a transparent object, and we render the opaque object first, it can still occlude the transparent object correctly. In other words, for transparency blending, the depth buffer is read-only.
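In ShaderLab, the read-only depth buffer and the blend factors are configured with the `ZWrite` and `Blend` render states. A hedged sketch of the typical setup (the tag values are Unity's standard ones; the shader program is omitted):

```ShaderLab
SubShader {
    Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
    Pass {
        ZWrite Off                        // depth buffer becomes read-only for this pass
        Blend SrcAlpha OneMinusSrcAlpha   // new = src.a * src.rgb + (1 - src.a) * dst.rgb
        // ... vertex/fragment program outputting a color with an alpha value ...
    }
}
```

`Blend SrcAlpha OneMinusSrcAlpha` is the standard "over" operator: the fragment's alpha weights its own color against the color already in the buffer.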

2 The importance of rendering order

As mentioned above, transparency blending requires depth writing to be turned off, and we then have to handle the rendering order of transparent objects very carefully. So why turn off depth writing? If we did not, a translucent surface would write its depth into the depth buffer; any surface behind it, being farther from the camera, would then fail the depth test and be culled, so we could not see the objects behind it through the translucent surface. By turning off depth writing, however, we break the normal operation of the depth buffer, which is a very, very, very (important things are said three times) bad thing, even though we have no choice. Turning off depth writing makes the rendering order extremely important.

[Figure: a translucent object A in front of an opaque object B, rendered in two different orders]

In the first case, we render B first and then A. Since the opaque object has depth testing and depth writing enabled, and the depth buffer contains no valid data yet, B writes to both the color buffer and the depth buffer. Then we render A. The transparent object still performs the depth test, which finds that A is closer to the camera than B, so we blend A's color with B's color in the color buffer using A's transparency and obtain the correct translucent effect.

In the second case, we render A first and then B. When A is rendered, the depth buffer contains no valid data, so A writes directly to the color buffer; but since depth writing is turned off for translucent objects, A does not modify the depth buffer. When B is rendered, it performs a depth test and finds, "Hey, nothing has been written to the depth buffer yet, so I can safely write to the color buffer!" As a result, B directly overwrites A's color. Visually, B appears in front of A, which is wrong.

[Figure: two translucent objects, A in front of B, rendered in two different orders]

We again consider the results of the two rendering orders. In the first case, we render B first and then A: B is written to the color buffer normally, and then A is blended with B's color in the color buffer, giving the correct translucent effect. In the second case, we render A first and then B: A is written to the color buffer first, and then B is blended with A's color in the color buffer. The blending is now exactly reversed, so it looks as if B were in front of A, producing an incorrect translucent result.
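The order dependence is easy to verify numerically. Assume (illustrative values) a black background, a red A and a blue B, both with alpha 0.5, blended with the standard rule $c_{\text{new}} = \alpha_s c_s + (1-\alpha_s) c_d$:

```latex
% Render B first, then A (correct order, A is in front):
c_1 = 0.5\,(0,0,1) + 0.5\,(0,0,0) = (0,\,0,\,0.5)
c_2 = 0.5\,(1,0,0) + 0.5\,(0,0,0.5) = (0.5,\,0,\,0.25) \quad \text{(reddish: A appears in front)}

% Render A first, then B (wrong order):
c_1' = 0.5\,(1,0,0) + 0.5\,(0,0,0) = (0.5,\,0,\,0)
c_2' = 0.5\,(0,0,1) + 0.5\,(0.5,0,0) = (0.25,\,0,\,0.5) \quad \text{(bluish: B appears in front)}
```

The two orders give different final colors, which is exactly the reversed-looking result described above.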

Now consider the situation where objects partially overlap.
[Figure: translucent objects partially overlapping one another]

Because of these problems, rendering engines generally sort objects first and then render them:
(1) First render all opaque objects, with depth testing and depth writing enabled.
(2) Sort the translucent objects by their distance from the camera, then render them in order from back to front, with depth testing enabled but depth writing turned off.
So, is the problem solved? Unfortunately, not yet. In some cases translucent objects will still show through one another. If we think about it carefully, step (2) above is still ambiguous: it says "sort them by their distance from the camera", but how is that distance determined? A reader might immediately blurt out, "By the depth value, the distance from the camera!" However, the values in the depth buffer are per pixel, i.e., each pixel has its own depth value, whereas here we are sorting at the level of whole objects. The sort result is that either object A is rendered entirely before B, or A is rendered entirely after B. But if the objects overlap in a circular fashion, this method can never produce the right result.

[Figure 8.4: two overlapping objects A and B for which no single per-object depth value gives the correct sort]
The depth value of each point on this mesh may be different. Which depth value should we choose as the depth of the whole object when sorting it against other objects? The mesh's center point? Its farthest point? Its nearest point? Unfortunately, for the situation in Figure 8.4, whichever value we choose gives the wrong result: our sort always places A in front of B, while in fact A is partially occluded by B.

This also means that once a sorting criterion is chosen, there will always be cases where incorrect occlusion occurs between semi-transparent objects. The usual solution to this problem is, again, to split the mesh.

Although the conclusion is that some situations will always catch us out, the method above is effective enough and easy to implement, so most game engines use it. To reduce incorrect sorting, we can make models as convex as possible, and consider splitting a complex model into multiple sub-models that can be sorted independently. In fact, even when the order is wrong, the result is sometimes not too bad. If we do not want to split the mesh, we can try making the alpha channel softer (more gradual), so that the incorrect interleaving is less noticeable. We can also use a translucency effect with depth writing turned on to approximate the object's translucency.
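The last trick mentioned above, keeping depth writing on to approximate translucency, is usually implemented with an extra pass that writes only depth. A sketch under that assumption (standard ShaderLab render states; the shader programs are omitted):

```ShaderLab
SubShader {
    Tags { "Queue" = "Transparent" }
    // Pass 1: write the object's depth only, so fragments of the same object
    // that lie behind its own front surface are culled later.
    Pass {
        ZWrite On
        ColorMask 0   // write nothing to the color buffer
    }
    // Pass 2: normal transparency blending, now against a correct per-object depth.
    Pass {
        ZWrite Off
        Blend SrcAlpha OneMinusSrcAlpha
        // ... vertex/fragment program ...
    }
}
```

The cost is an extra pass, and the object can no longer show its own back faces through its front faces, which is what makes this only an approximation.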

3 Unity’s rendering queue order

Unity provides a solution called render queues to address the rendering-order problem. We can use a SubShader's Queue tag to decide which render queue our model belongs to. Internally, Unity uses a series of integer indexes to represent the render queues; the smaller the index, the earlier the queue is rendered.
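A SubShader selects its queue with the Queue tag; a named queue can also be offset by an integer (the "+1" below is illustrative):

```ShaderLab
SubShader {
    // Render after all opaque geometry; "Transparent" corresponds to index 3000,
    // and "Transparent+1" would mean index 3001.
    Tags { "Queue" = "Transparent" }
    // ... passes ...
}
```

Objects in the Transparent queue are sorted by Unity from back to front before rendering, which is exactly the sorting step discussed in the previous section.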
Unity's predefined render queues and their indexes are:

Background (1000): rendered before everything else; used for backgrounds such as skyboxes.
Geometry (2000): the default queue; most opaque objects use it.
AlphaTest (2450): objects using transparency test; rendering them after all opaque objects is more efficient.
Transparent (3000): rendered after Geometry and AlphaTest, in back-to-front order; any object using transparency blending (i.e., with depth writing off) should go here.
Overlay (4000): rendered last; used for overlay effects such as lens flares.

Origin blog.csdn.net/weixin_45810196/article/details/129814308