Study Notes 22

——————————————————————————————————————————————————————————————————————————————————————————————————————————————

Rendering 12

First of all, the transparency effect from the previous section did not deal with shadows at all. If we simply let the transparent object cast shadows as-is, we get the result shown above.

In fact, we can analyze this phenomenon with the frame debugger:

This is the first step of rendering. You can see that when the floor is drawn, its shadow is already there, but a piece of it is missing. The reason comes down to how the shadow works, namely the shadow map.

The patch that should be in shadow but isn't presumably failed the shadow map's depth comparison.

In fact, when we generate the shadow map, the transparent object on top is written as if it were completely opaque.

So in the gap region, what gets written is the actual depth of the transparent object in front. But when it comes to the final render, this analysis doesn't quite seem to hold up.

The problem seems to be either that something goes wrong when the shadow map is written directly, or that the gap region never gets written into the buffer at all, so the final color there is just the floor.

??? Something is off here; none of these analyses seems quite right...

So the next step is to clip the shadow.

For the clip operation we need alpha, which means we need to sample the albedo texture.

So the shadow caster code needs to change. Also, the previous shadow caster was split into point lights and other lights, written directly as #if point light, #else other lights.

That means writing two copies of the code. If we now add the alpha-sampling code (which itself needs various #if checks), it would have to be written twice, so here it is rewritten: the parts the two branches share are merged into one, and then the alpha-sampling code is added.

Because this shadow .cginc file gets expanded inside the pass of the main shader when it is used, the variable definitions naturally live there.

Here a macro check is used to simplify things.

Here the check is added around the UV in the interpolators, but not around the vertex input data. I'm not sure what the reason for that is...

So the operation process is: define a UV-check macro, obtain the UV coordinates according to that check, then sample in the fragment program to get alpha, and finally clip with that alpha.
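A minimal sketch of what this looks like, following the naming of the tutorial's My Shadows.cginc (the Interpolators struct is sketched further below; _MainTex, _Color and _Cutoff are the usual material properties declared for this include):

    // The UV-check macro: at this point only cutout mode (when smoothness is not
    // taken from the albedo's alpha) needs texture coordinates in the shadow caster.
    #if defined(_RENDERING_CUTOUT) && !defined(_SMOOTHNESS_ALBEDO)
        #define SHADOWS_NEED_UV 1
    #endif

    sampler2D _MainTex;
    float4 _Color;
    float _Cutoff;

    float GetAlpha (Interpolators i) {
        float alpha = _Color.a;
        #if SHADOWS_NEED_UV
            alpha *= tex2D(_MainTex, i.uv).a;   // sample the albedo's alpha channel
        #endif
        return alpha;
    }

    float4 MyShadowFragmentProgram (Interpolators i) : SV_TARGET {
        float alpha = GetAlpha(i);
        #if defined(_RENDERING_CUTOUT)
            clip(alpha - _Cutoff);              // cut the shadow where the surface is cut away
        #endif

        #if defined(SHADOWS_CUBE)
            // Point lights: the merged caster still writes distance to the light.
            float depth = length(i.lightVec) + unity_LightShadowBias.x;
            depth *= _LightPositionRange.w;
            return UnityEncodeCubeShadowDepth(depth);
        #else
            return 0;
        #endif
    }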

Finally, don't forget these two: since we use macros, we must also write the compile directives for them, so they can be hooked up to the material panel.

The keywords are specified on the panel, and they take effect here: depending on which keyword is set, the corresponding macro gets defined, and only then do the later conditional checks mean anything.
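A sketch of those directives in the ShadowCaster pass, assuming the tutorial's keyword names (the leading underscore entry means "no keyword", i.e. plain opaque; the semitransparent-shadows toggle adds one more shader_feature later):

    Pass {
        Tags { "LightMode" = "ShadowCaster" }

        CGPROGRAM
        #pragma target 3.0

        // Link the keywords set by the material panel to the macros checked in the include.
        #pragma shader_feature _ _RENDERING_CUTOUT _RENDERING_FADE _RENDERING_TRANSPARENT
        #pragma shader_feature _SMOOTHNESS_ALBEDO
        #pragma multi_compile_shadowcaster

        #pragma vertex MyShadowVertexProgram
        #pragma fragment MyShadowFragmentProgram

        #include "My Shadows.cginc"
        ENDCG
    }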

There is an error-prone point here: the type of the uv. Although there is only this one pair of uv coordinates when dealing with the shadow, it sometimes gets written as float by mistake or in a hurry.

That leads to assignment errors and then to sampling errors. So if texture sampling results ever look wrong later, check whether the type is declared correctly (float2, not float).

About the difference between doing the mul with the matrix ourselves and using the Unity-provided helper:

Although the w component is conceptually 1 by default, the compiler cannot assume that and may need extra verification work; so the official helper is used, which simply uses 1 as the fourth component without any checking.
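Roughly, the difference looks like this (both lines produce the clip-space position; the second form is the built-in helper):

    // General path: a full 4x4 * float4 multiply; the compiler cannot assume w == 1.
    o.position = mul(UNITY_MATRIX_MVP, v.position);

    // Unity helper: takes only xyz and hard-codes 1 as the fourth component,
    // so no check or extra work on w is needed.
    o.position = UnityObjectToClipPos(v.position.xyz);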

Now the partial shadows. Let me describe the main idea first, starting with the idea behind shadows for this kind of transparent object. In this example it is always the shadow of a cube being cast.

When generating the shadow map, we use clip to cut away some of the object's pixels, which is equivalent to that part of the object no longer casting a shadow.

(Because after the clip, when the things behind it check whether they are in shadow, they find that they themselves are the closest thing to the light along that direction in the shadow map; so they count as directly lit, there is no shadow attenuation, and the object in front no longer casts a shadow onto them.)

To put it bluntly: imagine we are given a big blackboard, we cut holes in it, and then press it against the objects behind the current object. That is the shadow.

The way the shadow above is produced is to cut a hole in the board following the outline of whatever part of the current object has become transparent.

However, this way of cutting holes is binary: it cannot represent the shadow of a semitransparent object, i.e. shadows that also come in different shades of light and dark.

But hole-cutting is inherently binary, so how do we get different shades? This is where partial shadows come in.

The idea is to cut very small holes in the board and control the density of the holes, so that the shadow that finally shows up has varying levels of light and dark.

Here we still need to decide which mode to use according to the settings on the panel, so two more macro checks are added.

When we use fade or transparent mode, UV is still needed here, so we modify the definition of the UV macro as above.
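A sketch of the adjusted macros, again assuming the tutorial's keyword names: fade and transparent modes raise a semitransparent flag, and the UV requirement now covers them too:

    #if defined(_RENDERING_FADE) || defined(_RENDERING_TRANSPARENT)
        #define SHADOWS_SEMITRANSPARENT 1
    #endif

    #if SHADOWS_SEMITRANSPARENT || defined(_RENDERING_CUTOUT)
        #if !defined(_SMOOTHNESS_ALBEDO)
            #define SHADOWS_NEED_UV 1   // these modes need to sample the albedo's alpha
        #endif
    #endif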

The shadow of a translucent object is a bit like a soft shadow: the shadowed region is indeed blocked, but not completely blocked from the light.

It is an effect of receiving light without being fully lit, occluding without fully occluding. That is why, even when a point light shines on a translucent object in reality, the shadow is relatively soft.

With the hole-cutting we described before, this effect can never be achieved.

"if a surface lets half the light through"

So the tutorial cleverly uses the approach above to let half the light pass through. The checkerboard pattern mentioned here means alternating black and white squares, which lets half the light through.

When the black and white squares are very small, take the limit: as their size goes to zero, it is as if every point on the surface receives light, half of which passes through while the other half is kept by the surface itself.

Furthermore:

We can choose more patterns, vary the ratio of black to white, and mix several patterns, so that in the end we get different amounts of light passing through.

That is, shadows of different brightness.

Unity provides us with a dither pattern for this.

We can see that it has 16 levels.

So how should we sample this thing? First, given our goal (never mind for now which level to sample, take any level), what we want is that, along the light direction, transparency and opacity alternate according to the pattern in the dither texture.

Note that this is along the light direction, not based on the object's surface. An example makes it easier to understand: suppose we choose the pattern with black and white in a one-to-one ratio; choosing this pattern means we want half of the light to pass through the object, and half to be reflected or absorbed at the object's surface.

If the model we use is a little character with an uneven surface, and we sample using the UVs on the object's surface, then there is no guarantee that exactly half the light gets through.

Take a more extreme case: ears, noses, bumps. Suppose, by coincidence, the protrusions and depressions are all covered with black, while the white ends up on the transitional sides between the bumps.

Then the transmitted light will be less than half, because how much light gets through depends on whether the white area projected perpendicular to the light direction makes up half.

So sampling like this definitely won't work. Then which coordinates are perpendicular to the light? Right: screen coordinates. Because we are rendering a shadow map here, the light sits at the camera's position.

So the light direction is perpendicular to the screen-coordinate plane. If we sample by screen coordinates, i.e. paste this black-and-white pattern flat against the light, that guarantees that exactly half the light gets through.

So screen coordinates are used here, and we can get them via VPOS.

There can be some problems with these screen coordinates:

The point is that if we write both of them into the interpolators, there may be conflicts on some platforms, and assignment may go wrong.

And since the vertex program must output the position, but the fragment program does not have to take it as input,

we can split the vertex output and the fragment input into two structs.

For the vertex output we keep the position. For the fragment input, since we don't need the position there, we remove it and add VPOS instead, so the fragment input struct only contains VPOS and there is no conflict.

Screen coordinates versus uv coordinates: with the former, things are effectively rendered to the screen first and the pattern is then mapped over the screen, keeping only the piece inside the object's silhouette; with the latter, the texture is first pasted onto the object via its uv unwrap, and only then ends up on screen.

(But the screen coordinates here are not used for texturing; they are used for cutting holes, which is a similar idea.)

When sampling the dither texture, the smaller the coefficient in front of the screen coordinates, the smaller the sampled region, the lower the effective sampling resolution, and the larger the sampled black dots appear. Just picture an infinitely tiled texture, and a screen-sized frame scaled by that coefficient framing a region of it; that region becomes the texture result at the corresponding place on the screen.

The top one is the vertex output.

Below it is the fragment input.

Why is there still a Position (the position output by vert) in the one above? As the black box notes: because everything is guarded by #if, it is possible that none of the three conditions holds, and then there would be problems.

 

So an #else is added for the position; that avoids the problem, and since it is mutually exclusive with VPOS, there is no conflict here at all.
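A sketch of the two structs under those assumptions; note the #else branch that keeps SV_POSITION when none of the other cases applies, mutually exclusive with VPOS:

    struct InterpolatorsVertex {
        float4 position : SV_POSITION;      // the vertex stage must always output this
        #if SHADOWS_NEED_UV
            float2 uv : TEXCOORD0;
        #endif
        #if defined(SHADOWS_CUBE)
            float3 lightVec : TEXCOORD1;    // point-light shadows need the light vector
        #endif
    };

    struct Interpolators {
        #if SHADOWS_SEMITRANSPARENT
            UNITY_VPOS_TYPE vpos : VPOS;    // screen pixel coordinates for dithering
        #else
            float4 positions : SV_POSITION; // fallback, mutually exclusive with VPOS
        #endif
        #if SHADOWS_NEED_UV
            float2 uv : TEXCOORD0;
        #endif
        #if defined(SHADOWS_CUBE)
            float3 lightVec : TEXCOORD1;
        #endif
    };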

Regarding usage: the dither patterns Unity provides require a sampler3D, and its name is fixed.

The first two dimensions of the sampling coordinate are the screen coordinates, and the third dimension is the level.

If the returned value is 0, it means we sampled a spot that should be hollowed out, so just clip it.

Another problem is that our screen coordinates often run into the thousands, while the sampled texture has width 1, i.e. it would be tiled thousands of times, which may be too dense, so we can add a scaling coefficient:

It should be the pattern pointed at by the arrow, but the two are not closely related, because the pattern has been tiled thousands of times, and the screen coordinates are only used for sampling; it does not mean the square on the left corresponds one-to-one to the one on the right.

Now we take alpha into consideration and let it pick the pattern: the larger the alpha, the higher the pattern level should be, i.e. the less light gets through.

Then we get the result above. Obviously the holes we cut are too big and the shadow effect isn't convincing, so the coefficient can be tweaked here:
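Extending the earlier fragment sketch, the semitransparent branch looks roughly like this; _DitherMaskLOD is Unity's built-in 3D dither mask with 16 levels, and 0.25 / 0.9375 are the scale factors the standard shadow code uses:

    #if SHADOWS_SEMITRANSPARENT
        sampler3D _DitherMaskLOD;           // Unity's built-in 3D dither mask, 16 levels
    #endif

    float4 MyShadowFragmentProgram (Interpolators i) : SV_TARGET {
        float alpha = GetAlpha(i);
        #if defined(_RENDERING_CUTOUT)
            clip(alpha - _Cutoff);
        #endif

        #if SHADOWS_SEMITRANSPARENT
            // xy: screen pixel coordinates, scaled by 0.25 so the pattern is not tiled
            //     once per pixel; z: pattern level driven by alpha (0.9375 = 15/16
            //     spreads alpha over the 16 levels).
            float dither =
                tex3D(_DitherMaskLOD, float3(i.vpos.xy * 0.25, alpha * 0.9375)).a;
            clip(dither - 0.01);            // a sampled 0 means this pixel is a "hole"
        #endif

        return 0;                           // directional / spot branch; cube branch omitted
    }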

Then there are two issues. One is resolution: we just described a limiting case with infinitely many holes, but in practice the most holes we can cut is the number of screen pixels, so the final effect is limited by the resolution.

After adding a filter, the effect looks more realistic.

Finally, it is noted that these shadows behave very badly when things move (the swimming artifacts are severe), and that such translucent objects cannot receive shadows in Unity.

Translucent + light: soft shadow, hard shadow; cutout + hard shadow

In fact, considering the limitations of translucent shadows, a translucent object may sometimes abandon translucent shadows and use cutout shadows instead.

To achieve this, a toggle can be exposed on the panel.

The toggle corresponds to a keyword, and the previous logic is adjusted a bit based on that keyword.

Here we want to fall back to cutout when translucent shadows are not wanted. One way is to add an "or" condition to the cutout clip check.

The other is to directly define the macro corresponding to cutout. (That is the approach used here.)
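A sketch of that macro logic, assuming the toggle drives a _SEMITRANSPARENT_SHADOWS keyword: when the toggle is off in fade/transparent mode, the cutout macro is simply defined, so the existing cutout clip path gets reused:

    #if defined(_RENDERING_FADE) || defined(_RENDERING_TRANSPARENT)
        #if defined(_SEMITRANSPARENT_SHADOWS)
            #define SHADOWS_SEMITRANSPARENT 1
        #else
            #define _RENDERING_CUTOUT       // toggle off: fall back to cutout shadows
        #endif
    #endif

    // The ShadowCaster pass then also needs:
    // #pragma shader_feature _SEMITRANSPARENT_SHADOWS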

The main thing is to define a function for our toggle that draws it in the GUI. The toggle should only be shown when the material is in a translucent mode.

Then come the preparation and cleanup steps before and after drawing it, and this time we also have to display the cutoff, since the shadow falls back to cutout.

————————————————————————————————————————————————————————————————————————————————————————————————————————

Rendering 13

First is the setup for deferred rendering, which I have touched on before: it is mainly configured in the Graphics settings, and then each camera can either use the result configured there, or directly override the Graphics settings with a specific rendering path.

When the depth map is generated here, dynamic batching kicks in.

The draw call counts for the two lights are different. That is easy to explain: the lights are in different positions, the number of objects each light illuminates differs, and there may be some occlusion from the light's point of view.

 

When we turn on deferred rendering, MSAA gets turned off, because MSAA is based on sub-pixel samples and deferred rendering doesn't play along with that; so if you want anti-aliasing you have to resort to post-processing methods.

Apart from the G-buffer draw calls, most of the rest are spent on shadows.

First look at forward rendering and what it does; you can see there are a lot of repeated operations.

If we consider caching the reused data, a lot of computation can be saved.

In fact, for each pixel, everything in the lighting calculation except the light itself can be cached.

So the first pass does no lighting calculation at all and is only used to write data into the G-buffer.

Hence, deferred shading.

Deferred rendering - Zhihu (zhihu.com)

In fact, the core of the refinement concerns forward rendering: with multiple light sources it does a lot of useless traversal, because the range of the lights (with many lights there are few directional lights; most are point lights and spotlights) is not large, yet the entire scene seen by the camera gets traversed, resulting in a large number of wasted calculations.

The core of how deferred rendering solves this is to move the lighting computation from scene objects to screen pixels, which removes a large number of wasted calculations.

Here "render them separately" means: the geometry data, i.e. the various transforms and interpolation, is computed first and stored in the G-buffer, which basically completes the geometry part of rendering.

What follows is the lighting part.

Deferred rendering decouples exactly this: geometry calculations are done first and stored, and lighting calculations are done afterwards.

So the lights rendered by deferred rendering here are per-pixel lights.

The difference between them under multiple light sources.

In fact, after the geometry is rendered, it is like having a screenshot of the model in a modeling package. Like in the modeling package, it has not been shaded yet, so there is no final color; and like a screenshot, the geometry has been processed, all the projection and so on has been done, so it is already 2D.

Regarding directional light: since it illuminates the entire scene, when it is computed, all pixels are processed directly.

As for the spotlight, only the range it can illuminate is computed. Of course this light may also be blocked, in which case those parts simply aren't computed.

Point lights and spot lights are similar.

Of course, to cut down some unnecessary fragment calculations, Internal-StencilWrite is used here; I don't know exactly what it does.

It also says: spotlights and point lights only compute a subset of the fragments. This can in fact be decided from the already-rendered geometry and its depth information. But there are many details here; how exactly is it implemented??

The general idea is to find a bounding volume, project it, see which pixels the projection covers, and then run the lighting calculation only on those pixels.

During rendering, the colors appear inverted at one point.

I don't feel like I fully understand this.

The information from the geometry rendering is stored in the G-buffer.

The buffers differ in format.

We can see some of the data stored in the G-buffer by changing the display mode.

There is also this MRT (multiple render targets), which is what the G-buffer consists of.

When we use the display mode below, it shows normals.

Because this mode belongs to deferred rendering, only objects rendered with the deferred path show up in it.

Regarding the deferred rendering pass, the basic format is similar to before; the highlighted part above needs to be modified.

If we only make the modification above, we won't get a correct result. In the correct case, the deferred pass should output the geometry data, not the shaded result.

The actual output is 4 buffers. If we output the shaded result directly here, it is as if only the first buffer gets filled.

That produces the results shown above.

First, because of the depth, the things behind it are culled, while the object itself is not drawn for some reason, so the skybox rendered in the last step shows through.

Of course, I don't quite understand this part.

Let's first look at what needs to be output. Since the situation differs here, we need to wrap the output in a struct and define its members internally according to the rendering mode.

Watch out for a capitalization issue here.
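A sketch of that wrapped output, with names as in the tutorial; DEFERRED_PASS is assumed to be defined inside the deferred pass itself:

    struct FragmentOutput {
        #if defined(DEFERRED_PASS)
            float4 gBuffer0 : SV_Target0;   // albedo (rgb) + occlusion (a)
            float4 gBuffer1 : SV_Target1;   // specular color (rgb) + smoothness (a)
            float4 gBuffer2 : SV_Target2;   // world-space normal (rgb), a mostly unused
            float4 gBuffer3 : SV_Target3;   // accumulated lighting / emission
        #else
            float4 color : SV_Target;       // forward passes keep a single color output
        #endif
    };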

When filling these G-buffers:

The first two buffers are relatively simple, just some surface attributes.

The third one is used to store the normal. The difference is that its A channel has only 2 bits while the RGB channels have 10 bits each, so the precision is higher, and A is not used.

The last buffer is used to accumulate the lighting results of the scene; it is equivalent to our previous color buffer.

The lighting calculations performed later get accumulated into it. There is one thing that does not get accumulated there: emission (self-illumination), so we need to add the emission result in this pass.

Only then will the final result be complete.

Of course, to be really complete, ambient light is also needed here. We only need to enable the indirect-light calculation for the deferred path as well, and the indirect light then ends up included in the gBuffer3 result.
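A sketch of filling them at the end of the fragment program, under the same DEFERRED_PASS assumption; albedo, occlusion, specularTint, smoothness, i.normal and color stand for values already computed by the shading code:

    FragmentOutput output;
    #if defined(DEFERRED_PASS)
        output.gBuffer0.rgb = albedo;
        output.gBuffer0.a   = occlusion;
        output.gBuffer1.rgb = specularTint;
        output.gBuffer1.a   = smoothness;
        output.gBuffer2     = float4(i.normal * 0.5 + 0.5, 1); // pack normal into 0..1
        output.gBuffer3     = color;    // emission + indirect light; lights add onto this
    #else
        output.color = color;           // forward: the fully shaded result
    #endif
    return output;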

Then the rest is each light's own contribution, and accumulating them gives the final actual result.

Finally, we change the camera mode to LDR, simply by switching HDR off.

Then the result comes out ridiculously wrong.

The reason is that this data has a specific encoding and decoding. We only changed the mode and did not adjust the corresponding encoding, so the result was way off.
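The fix is to encode the value logarithmically when HDR is off; a sketch, assuming the UNITY_HDR_ON keyword comes from a multi_compile line in the deferred pass:

    // In the deferred pass:  #pragma multi_compile _ UNITY_HDR_ON

    #if defined(DEFERRED_PASS) && !defined(UNITY_HDR_ON)
        // With an LDR (ARGB32) light buffer, Unity expects logarithmically encoded
        // values, which it decodes later with -log2; without this the result looks
        // wildly wrong.
        color.rgb = exp2(-color.rgb);
    #endif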

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

The topic of reflections under deferred rendering: I have forgotten about reflections, so I didn't read it! Come back and review them together!!!


Origin blog.csdn.net/yinianbaifaI/article/details/127702653