Study Notes 18

This is easy to understand. First, when the shadow map has a low resolution, the depth silhouettes stored in it are jagged. Those jagged edges are what get sampled during the shadow calculation, so we can think of the result as the shadow produced by lighting the jagged silhouette stored in the shadow map.

Even though the silhouette is lit, the silhouette itself is jagged, so the shadow generated from it is naturally jagged too. Or understand it directly: the shadow map you sample is jagged, which means the shadow changes abruptly between texels, and that introduces aliasing.

Or, equivalently, the generated screen-space shadow map is jagged, so the jaggedness remains after it is multiplied into the lighting.

To reproduce this kind of jagged shadow, one step is to lower the shadow map resolution; the other is to disable cascades. Cascades split shadows by distance: nearby objects use a relatively high-resolution shadow map and distant objects a low-resolution one, which also helps alleviate aliasing to a certain extent.

The way to improve the effect here is to adjust the shadow distance.

Shadow distance affects how much of the scene is captured when generating the shadow map.

If this value is not very large, far-away geometry does not need to appear in the shadow map at all, so the orthographic projection used for the directional light can simply ignore it.

That way, the nearby region occupies a larger share of the shadow map, which is equivalent to a higher resolution for nearby shadows: the same texture resolution is spent on less content, so each piece of content gets more texels.

With this higher effective resolution, the aliasing of the shadow map itself is alleviated, and the aliasing of the shadows generated from it is alleviated as well.
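
Some rough, illustrative numbers (my own, not from the tutorial, and ignoring how Unity actually fits the projection): with a 2048×2048 directional shadow map and a single cascade covering a 100 m shadow distance, one texel covers roughly 100 m / 2048 ≈ 5 cm of ground; cutting the shadow distance to 25 m brings that down to about 1.2 cm per texel, so shadow edges get noticeably crisper.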

This quality improvement comes from cascades: the close-range portion of the shadow map is generated at a relatively high resolution, so its aliasing is reduced, while farther away the sampling resolution gradually drops and the aliasing becomes more obvious. But the farther away something is, the smaller it appears in the view, so the artifact is less noticeable, and overall the effect is still greatly improved.

What is shown here is how to display the cascade levels.

Here you can modify the display mode.

Here you can configure how the cascades are split. The former (Stable Fit) is based on the distance from the camera, while the latter (Close Fit) is based directly on depth, presumably the z value in camera space.

Here is the difference between the two configurations when the camera moves. The first, Stable Fit, looks relatively good: as long as you don't stare at the cascade boundaries, there is no problem.

Close Fit does not hold up as well: shadow edges swim while the camera is moving.

Here we can think of the light as shining from top to bottom, and the gray top surface as the set of points closest to the light. But what is stored is not a point, it is a texel, and a texel covers an area; that is the problem. Projected into 2D and looking at the top surface, parts of the stored line stick out of the surface and parts sink into it, i.e. the line on the gray top surface is partly outside the green object and partly inside it.

So when the green object is viewed from the camera, the two upper surfaces are actually lit directly and should have no shadow. But when their depth is compared with the depth stored in the shadow map, some places come out deeper than the stored texel value and are therefore judged to be in shadow.

That is where the surface self-shadowing artifact (shadow acne) comes from.

Explanation of the Shadow acne phenomenon in ShadowMap - Zhihu (zhihu.com)

This one explains it more clearly:

Basics of Graphics Rendering (6) Real-Time Shadows - Nuggets (juejin.cn)

Another cause is precision: the depth values are converted back and forth as floats, so some precision is inevitably lost, which can also lead to misjudging whether a point is in shadow and produce the acne above. That said, as mentioned above, when the distances involved are small this is not actually very easy to trigger.

Of course, the solution is to use a bias offset, and each light can set its own bias individually.

The bias value needs attention. If it is too large it causes the effect above, peter panning. The main reason is that the bias pushes the depths in the shadow map farther away, so the generated shadow is also pushed away; the object and its shadow, which should meet, end up disconnected.

That covers the depth bias offset. Another approach is to bias along the normal, which has its own corresponding drawbacks.

Finally, which bias to use and what value to set must be decided case by case; there is no uniform standard.

 

The anti-aliasing problem here basically says that MSAA seems useless for the screen-space shadow map, since it has no triangle edges. I don't fully understand this.

One thing to emphasize: what the bias brings cannot exactly be called a shift of the shadow. As described above, it effectively pushes the ball along the light direction; what does that do to the shadow?

It is equivalent to moving the ball along the light direction in actual space, and the shadow you see is the shadow after that offset. So, looking at the ball above: starting from a bias of 0 and increasing it, the ball moves along the light direction, first touching the ground and then gradually sinking into it, so its shadow gradually shrinks and finally even disappears.

And if the ball is high enough, increasing the bias has no effect on the shadow projected onto the floor. Why? Because moving along the light direction and then projecting along that same direction gives the same result. But once the offset along the light direction is large enough for the ball to reach the floor, further increases in bias do change the size and position of the shadow.

For how the linear bias is actually implemented, see:

UnityApplyLinearShadowBias — method of calculating shadow bias in Unity

Technical analysis of Unity Standard Shader - Zhihu (zhihu.com)

I roughly understand this now. Let me go through the general flow first: what do we do in the ShadowCaster pass?

From the frame debugger you can see that the whole process generates the shadow map at the very beginning; that is when this pass runs, and this pass is what generates the shadow map.

The shadow map is rendered from the light source's point of view, so every object facing the light that has a ShadowCaster pass goes through it at this point. A depth test is performed here too, and the result is finally written into the shadow map.

Then, for each object, the vertices are transformed into clip space, the bias offset is applied there, and the resulting depth values go through the depth test to decide what ends up written into the shadow map.

So the bias is applied in projection (clip) space, as follows:

The first line is easy to understand, and its comment says it compensates for the perspective projection.

Without going into every detail: the bias is an offset in clip space, and no matter where the object sits in that space the effective offset should be the same. Offsetting by the raw value directly would not achieve that, because the space is non-linear due to the perspective divide, so this compensation is needed.

After the compensation there is also a clamp to prevent the depth from going out of range, and the last point is the classification by light source type.
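
A paraphrase of what the function does, based on UnityCG.cginc's non-reversed-Z path (the exact code varies with Unity version and platform defines), is roughly:

```hlsl
float4 UnityApplyLinearShadowBias (float4 clipPos) {
    // unity_LightShadowBias.x is the depth bias. Dividing by w compensates for
    // the perspective divide, so the offset behaves consistently no matter
    // where the vertex sits in the non-linear clip space.
    clipPos.z += saturate(unity_LightShadowBias.x / clipPos.w);

    // Clamp so the biased depth cannot be pushed past the near clip plane.
    float clamped = max(clipPos.z, clipPos.w * UNITY_NEAR_CLIP_VALUE);

    // unity_LightShadowBias.y enables or disables the clamp depending on the
    // light type; this is the light-source classification mentioned above.
    clipPos.z = lerp(clipPos.z, clamped, unity_LightShadowBias.y);
    return clipPos;
}
```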

Next is biasing with the normal: the vertex is pushed inward along the opposite of its normal, slightly shrinking the caster.

How much to shrink depends mainly on the angle between the light and the normal.

Shadow acne is caused by surfaces tilted relative to the light; a surface facing the light head-on has much less of a problem. So when the tilt is large there should be a larger bias, and when the angle is small the bias should be small.

So the sine of that angle is used as the scaling factor.

In addition, in this solution the normal offset is performed in world space, and the linear offset is applied afterwards.
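
A paraphrase of the normal-bias helper in UnityCG.cginc (exact code differs slightly between Unity versions):

```hlsl
float4 UnityClipSpaceShadowCasterPos (float4 vertex, float3 normal) {
    float4 wPos = mul(unity_ObjectToWorld, vertex);

    if (unity_LightShadowBias.z != 0.0) {
        float3 wNormal = UnityObjectToWorldNormal(normal);
        float3 wLight  = normalize(UnityWorldSpaceLightDir(wPos.xyz));

        // The larger the angle between normal and light, the larger the bias:
        // sin(angle) is derived from cos(angle) = dot(wNormal, wLight).
        float shadowCos  = dot(wNormal, wLight);
        float shadowSine = sqrt(saturate(1 - shadowCos * shadowCos));
        float normalBias = unity_LightShadowBias.z * shadowSine;

        // Push the vertex inward along its normal, shrinking the caster.
        wPos.xyz -= wNormal * normalBias;
    }
    return mul(UNITY_MATRIX_VP, wPos);
}
```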

Here, once SHADOWS_SCREEN is defined, the attenuation macro also computes shadow-related things, and there is a problem when it samples: we had passed it a 0, because shadows were not considered at the time.
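
A sketch of the macro-based route using the AutoLight.cginc names, as I understand it (the interpolator struct has to carry the shadow coordinates; passing 0 as the second argument means the shadow is simply ignored):

```hlsl
struct Interpolators {
    float4 pos : SV_POSITION;
    float3 worldPos : TEXCOORD0;
    SHADOW_COORDS(1)   // adds the _ShadowCoord interpolator when shadows are enabled
};

// In the vertex program:   TRANSFER_SHADOW(o);   // fills _ShadowCoord from o.pos
// In the fragment program:
//   UNITY_LIGHT_ATTENUATION(attenuation, 0, i.worldPos);  // shadow ignored (the problem above)
//   UNITY_LIGHT_ATTENUATION(attenuation, i, i.worldPos);  // shadow map actually sampled
```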

This whole set of macros is Unity's own shadow handling.

I am not sure why, but after writing the shadow-receiving code, the shadow map ended up being rendered directly onto the object once the camera got close to it.

The situation above can be fixed by adding the ForwardBase tag. As mentioned before, the pass can run without it, but some macros then often get assigned abnormal values.

Regarding multiple shadows: when using multiple light sources, first make sure of one setting, the pixel light count. While experimenting with multiple lights it had been changed to 0, so the system only promotes one light to per-pixel by default, and the others become per-vertex or spherical-harmonic lights, which appear to have no shadows.

In any case, after experimenting, the Standard material does not support shadows from per-vertex or spherical-harmonic lighting. When the pixel light count is raised to 4, the Standard material can directly show multiple shadows.

All spherical-harmonic and per-vertex lights, plus the main directional light (it must be a directional light; if there is none, it is treated as black and the base pass still runs, just empty), go through the base pass, and then each remaining per-pixel light gets its own additive pass.

Note that sometimes a newly added light source defaults to casting no shadows, which needs attention.

Then, if you want your own material to support this kind of multi-light shadowing, you need to add the following.

Here you can either add SHADOWS_SCREEN directly or use the precompiled directives above; the difference is that the former only serves multi-shadowing for directional lights, while the latter works for any light type.

Point light shadows need SHADOWS_CUBE, so a precompiled directive is added here.
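
A minimal sketch of the compile directives being discussed (names from Unity's built-in forward pipeline; which ones you need depends on your passes):

```hlsl
// Base pass: the main directional light and its screen-space shadows
// (this enables SHADOWS_SCREEN among other keywords).
#pragma multi_compile_fwdbase

// Additive pass: the per-pixel lights; the _fullshadows variant also enables
// shadow keywords such as SHADOWS_CUBE (point lights) and SHADOWS_DEPTH
// (spotlights), so those lights can cast shadows too.
#pragma multi_compile_fwdadd_fullshadows
```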

Damn, the header file inclusion here really confused me, but fortunately it is clear now. Just two points to note:

First, even though you write the includes in the order A then B, that may not be the effective order. Because of nested includes, B may already have been pulled in earlier, so even though you wrote A before B you effectively get B before A. So pay attention to what has already been included before your own includes.

Second, the decode code here is not something we call explicitly. It is called in the base and add passes, which are the ones that sample the shadow map; the encoding side lives in the ShadowCaster pass. So I had been struggling with the header files in that pass, which turned out to be pointless.

Unity's so-called soft shadow is just a filter, and here it is not even a real filter.

Since the point-light shadow is stored in a cubemap, there is no built-in way to filter it, so the four samples taken here and averaged are filtering to a certain extent, but four samples cost more than a proper filter.
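
An illustrative sketch only (not Unity's actual code; the function name, sampler, and offsets are made up for the example) of what averaging four nearby cubemap comparisons looks like:

```hlsl
float PointShadow4Tap (samplerCUBE shadowCube, float3 lightToFrag, float fragDist) {
    // Four slightly offset directions around the original lookup.
    float3 o = float3(0.005, 0.005, 0.005);   // arbitrary small offsets for the sketch
    float shadow = 0;
    shadow += fragDist > texCUBE(shadowCube, lightToFrag + float3( o.x,  o.y,  o.z)).r ? 0 : 1;
    shadow += fragDist > texCUBE(shadowCube, lightToFrag + float3(-o.x, -o.y,  o.z)).r ? 0 : 1;
    shadow += fragDist > texCUBE(shadowCube, lightToFrag + float3(-o.x,  o.y, -o.z)).r ? 0 : 1;
    shadow += fragDist > texCUBE(shadowCube, lightToFrag + float3( o.x, -o.y, -o.z)).r ? 0 : 1;
    // Averaging the four comparisons softens the shadow edge a little.
    return shadow * 0.25;
}
```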

This is the result of the actual filter.

When implementing the sampling yourself instead of using Unity's packaged macros, the general flow is: declare an interpolator for the sampling coordinates, then obtain the clip-space coordinates as the basis for sampling and transform them. Note in particular that clip space has not been through the perspective divide yet, and our ultimate goal is screen coordinates to sample with.

Regarding interpolation and the perspective divide: we have to interpolate first and then divide. Because the image we want is perspective-correct, we interpolate in the pre-divide (perspective) space and only then divide. (Exactly why, I have not thought through completely.)
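
A simplified sketch of that manual route (following the approach in the tutorial; _ShadowMapTexture is the name Unity binds the screen-space shadow map to when SHADOWS_SCREEN is defined):

```hlsl
#include "UnityCG.cginc"

sampler2D _ShadowMapTexture;

struct Interpolators {
    float4 pos          : SV_POSITION;
    float4 shadowCoords : TEXCOORD0;   // screen coords derived from clip space, still undivided
};

Interpolators MyVertexProgram (float4 vertex : POSITION) {
    Interpolators i;
    i.pos = UnityObjectToClipPos(vertex);
    // ComputeScreenPos keeps w around so the divide can happen after interpolation.
    i.shadowCoords = ComputeScreenPos(i.pos);
    return i;
}

float4 MyFragmentProgram (Interpolators i) : SV_TARGET {
    // Perspective divide is done per fragment, after interpolation,
    // so the interpolation itself stays perspective-correct.
    float2 uv = i.shadowCoords.xy / i.shadowCoords.w;
    float attenuation = tex2D(_ShadowMapTexture, uv).r;
    return attenuation;
}
```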

About spotlight (and point light) shadows:

There is no pre-pass here that generates a depth map or screen-space shadow map; only the shadow map itself is generated in advance.

This is related to the light having a specific position, so the projection is a perspective projection, and naturally cascades cannot be used.

Regarding bias, the main point is that the normal bias is only available for directional lights; point lights and spotlights only have the linear bias.

Regarding sampling, there is no screen-space shadow map for these lights, only the ordinary shadow map, so it is the traditional approach: transform from model space to world space, then into the light's perspective space, and sample there.

For the linear bias: first transform from model space to clip space, then apply the bias, and finally output.

For the normal bias: from model space to world space, apply the normal bias there, transform to clip space, then call the linear-bias function on the clip-space position, and output.
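
Putting the two together, a ShadowCaster vertex program can be sketched like this (using the two UnityCG.cginc helpers discussed above; essentially what Unity's TRANSFER_SHADOW_CASTER_NORMALOFFSET macro does):

```hlsl
#include "UnityCG.cginc"

struct VertexData {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
};

float4 MyShadowCasterVertex (VertexData v) : SV_POSITION {
    // Normal bias: object -> world, push inward along the normal, then -> clip space.
    float4 clipPos = UnityClipSpaceShadowCasterPos(v.vertex, v.normal);
    // Linear bias: applied afterwards, directly on the clip-space position.
    return UnityApplyLinearShadowBias(clipPos);
}
```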

————————————————————————————————————————————————————————————————————————————————————————————————

Rendering 8

First of all, why study this: the PBR material we wrote previously looks very good on rough surfaces, but once the surface is smooth and highly metallic it falls apart and the result is completely off.

In fact, the core question is:

For smooth metallic objects there is almost no diffuse reflection, and the specular reflection of direct light can only be observed from a specific narrow angle. The reason we can still see them at all is mainly ambient light: countless rays of ambient light from all directions enter the eye after reflecting off the surface.

To be precise, it is the specular part of the ambient, or indirect, light. Indirect light actually comes from many different sources, and for the smooth material discussed here what matters is its specular reflection. We can see it from any angle because, being indirect light, it arrives from all directions.

Here we account for the ambient specular that was not considered before, and set it to red. After doing that, the red effectively represents the reflectivity: red is the color of the ambient specular, so the redder the surface appears, the more it reflects, which is just to say its reflectivity is high.

There is also a strong Fresnel effect on the upper edges. This is because our PBR shader uses some things packaged by Unity, including the built-in Fresnel.

The meaning here should be: if the ambient specular were affected by shadows, it would also be darkened in the shadowed areas. But it is not affected, and because of the light/dark contrast with the shadow it actually appears brighter.

Complete our initial sampling.

Here we know that the HDR format stores extra-bright values with a brightening coefficient; if we want plain RGB without that boost, we need to decode it.

After decoding here I ran into a problem: the result is normal without decoding, but after decoding:

How to sample HDR texture in shader? - Unity Forum

After reading about the decode online, it does exactly the same thing. Damn, this is such a strange case.

The sampling above is just plain sampling and has nothing to do with reflection; reflection is introduced here.
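
A minimal sketch of sampling the environment cubemap with a reflection direction and decoding the HDR data (the function name is mine; unity_SpecCube0 and unity_SpecCube0_HDR are Unity's built-in reflection variables, and UnityCG.cginc is assumed to be included):

```hlsl
float3 SampleEnvironment (float3 worldPos, float3 worldNormal) {
    float3 viewDir = normalize(_WorldSpaceCameraPos - worldPos);
    float3 reflectDir = reflect(-viewDir, worldNormal);

    float4 envSample = UNITY_SAMPLE_TEXCUBE(unity_SpecCube0, reflectDir);
    // HDR cubemaps store a scale/exponent; DecodeHDR converts the sample to linear RGB.
    return DecodeHDR(envSample, unity_SpecCube0_HDR);
}
```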

After introducing reflection, reflecting only the skybox is not enough; we also want reflections of the actual surrounding objects, so we add a reflection probe.

The probe's gizmo icon is a bit in the way; we can turn it off.

The above mainly introduces the probe. Its Realtime mode seems to expose more configuration once it is selected.

The first extra parameter controls when the cubemap captured by the probe is refreshed; one option refreshes it every time the probe becomes active (what exactly counts as active is unclear to me).

After a quick look, that part relates to some settings in C#. The second option refreshes every frame, and the third is controlled entirely by user scripts.

The second parameter is smarter: it can spread the work of one cubemap refresh over several frames, with options for 9 frames, 14 frames, or no time slicing at all (refresh the whole cubemap within one frame).

Because computing the cubemap requires rendering 6 views to form the cube, it is very expensive, and doing it in realtime even more so; these attributes basically exist to reduce some of the realtime rendering pressure.

The editmode mentioned here should be this kind.

In short, the tutorial chooses Baked in the end; baked is basically enough.

Also, baking only considers static objects, so the objects need to be marked static. You can check everything static at once, or set individual flags from the subdivided static options.

 

Adjusting the roughness and metallic values has little effect on how blurred the reflection is, although it has a big impact on the lighting.

This is actually unreasonable: in reality, a change in smoothness should affect how blurred the reflection is. It made me doubt what PBR is doing internally. Shouldn't PBR handle this? Why doesn't it?

If you think about what blurriness actually is, it makes sense. The essence of blur is that there is no clear boundary with the surrounding things; everything is mixed together and looks fuzzy. We usually implement blur with convolution filtering.

No matter how powerful the PBR code running on the GPU is, it only operates on a single pixel, so there is nothing it can do by itself to blur the reflection. That would have to be achieved by some post-processing or MSAA-like operation.

But if smoothness is not used for blurring or similar operations, what is it used for? I really want to find out.

So mipmaps are used to simulate imperfect reflections.

But here Unity uses another algorithm in order to get a good transition between different degrees of blur:

Roughness is obtained from smoothness and used to determine which mipmap level to sample.

The level is not simply proportional to roughness, because the relationship between roughness and blur is not linear.

The nonlinear roughness conversion above, the cubemap sampling, and the HDR handling are all wrapped up in a function provided by Unity.
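
A sketch of the roughness-to-mip mapping that this Unity function uses, as far as I can tell (paraphrased; the function name here is mine, and the constants and UNITY_SPECCUBE_LOD_STEPS come from Unity's standard includes and may differ by version):

```hlsl
float3 SampleEnvironmentLOD (float3 reflectDir, float smoothness) {
    float perceptualRoughness = 1.0 - smoothness;
    // Non-linear remapping so the blur transitions look even across mip levels.
    perceptualRoughness *= 1.7 - 0.7 * perceptualRoughness;
    float mip = perceptualRoughness * UNITY_SPECCUBE_LOD_STEPS;   // LOD_STEPS is 6 by default

    float4 envSample = UNITY_SAMPLE_TEXCUBE_LOD(unity_SpecCube0, reflectDir, mip);
    return DecodeHDR(envSample, unity_SpecCube0_HDR);
}
```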

Although the HDR decode is performed again here, my result did not become abnormal as it did before. Comparing the two, not decoding the HDR myself and using Unity's macro give the same result, which suggests that the HDR-decoding part of that macro did nothing in my case.

There is some additional logic here that I don't understand, mainly internal handling for various platforms.

Origin blog.csdn.net/yinianbaifaI/article/details/127607885