Study Notes 25

The first part of Rendering 16 that I watched earlier stopped short of the last point, which was about transparent objects. That point seemed very mysterious, and I didn't understand it at all.

Also, the author's steps differ because of version differences, so I can't find the corresponding buttons in my editor at all.

Before using the lightmap in our own material, we still have to check whether lightmapping is turned on, and that requires a shader variant keyword.

The logic is: whether baking and other options are enabled is set externally in the editor, and that enablement affects which keywords are generated for the shader. So when we branch with #if, the shader decides whether to perform the related calculations based on whether the option is turned on externally.

In addition, pay attention to the mutual exclusion the author mentions between VERTEXLIGHT_ON and LIGHTMAP_ON: an object never uses vertex lights and a lightmap at the same time.
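A minimal sketch of both points, assuming the standard keyword names (the struct field and its TEXCOORD index are illustrative):

```
// Listing both keywords in one multi_compile makes them mutually
// exclusive: a variant gets LIGHTMAP_ON or VERTEXLIGHT_ON, never both.
#pragma multi_compile _ LIGHTMAP_ON VERTEXLIGHT_ON

struct Interpolators {
    float4 pos : SV_POSITION;
    #if defined(LIGHTMAP_ON)
        float2 lightmapUV : TEXCOORD5; // second UV set, transformed
    #endif
};
```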

Since the lightmap has to be sampled, UVs are needed; here it's the second UV set.

Sampling goes through the second UV set, and before sampling, a transformation is required:
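A one-line sketch of that transform, assuming it works like the usual tiling/offset:

```
// unity_LightmapST packs the scale (xy) and offset (zw) of this
// object's region in the lightmap atlas.
i.lightmapUV = v.uv1 * unity_LightmapST.xy + unity_LightmapST.zw;
```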

This is similar to the transform we applied to texture coordinates before, but when the author explained the reason, I didn't quite follow the part about texture unwrapping...

Then comes the sampling itself, and the sampled result is used as indirect lighting. The sampled value also needs a decode.
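A sketch of the sample-and-decode step, assuming the standard built-ins (UNITY_SAMPLE_TEX2D, DecodeLightmap):

```
// DecodeLightmap unpacks the stored format (dLDR or RGBM, depending
// on platform) into a plain linear color.
indirectLight.diffuse = DecodeLightmap(
    UNITY_SAMPLE_TEX2D(unity_Lightmap, i.lightmapUV)
);
```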

(We had a question before about when to decode and when to encode. From this point of view, it's tied to storage: if an image has to be stored in a certain format, we encode it; and some textures we receive are already encoded, so we have to decode them.)

Our current use of the lightmap still has problems, though simple use works fine.

The baker always assumes our objects are solid white and opaque, so there are problems once they are transparent or some other color.

This becomes more apparent when the opacity is changed to 0.

The reason everything is treated as opaque is as above: when baking, if the baker sees that a material is semitransparent, it reads the alpha channel of a property named _Color. Our material doesn't have that property, so it has to assume the object is opaque.

The fix is therefore very simple: just rename our tint property to _Color.

For cutout baking it's entirely analogous; we likewise have to use the expected names.
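A sketch of the conventional property names involved; the baker reads these by name, and the exact set here follows the tutorial's convention:

```
Properties {
    _Color ("Tint", Color) = (1, 1, 1, 1)         // alpha read for transparency
    _MainTex ("Albedo", 2D) = "white" {}          // texture read for albedo
    _Cutoff ("Alpha Cutoff", Range(0, 1)) = 0.5   // read for cutout baking
}
```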

Note that what baking produces here is indirect light, so when judging the baked result, pay attention to the indirect light in the scene.

So, about color: what does an object's own color have to do with indirect light? Exactly the effect of color bleeding and the like. But the color bleeding here is always white; that is, when the lightmapper processes the bounced light of all our objects, it treats them as white.

This can't be fixed simply by renaming a color property, because the lightmapper's notion of an object's color involves albedo, emission, metallic, and so on, and all of these determine the final color-bleeding result.

So if we want the lightmapper to produce a correct result, we have to tell it all of these things. We already told it about cutout and transparency; the difference is that those relied on agreed-upon property names, from which it pulls out the corresponding values and then generates the indirect-light effect.

As for how to hand over the relevant data when the lightmap is generated: a new pass is required.
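A minimal sketch of that pass's shell, following the tutorial's setup (the include file name comes from the tutorial; treat it as a placeholder):

```
// The lightmapper looks for a pass tagged "Meta" and renders objects
// with it to gather albedo and emission for GI.
Pass {
    Tags { "LightMode" = "Meta" }
    Cull Off
    CGPROGRAM
    #pragma vertex MyLightmappingVertexProgram
    #pragma fragment MyLightmappingFragmentProgram
    #include "My Lightmapping.cginc"
    ENDCG
}
```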

Prepare some variables and functions that need to be used.

These functions contain many conditionals we wrote earlier and rely on them for branching, so the relevant keywords must be declared here too for those branches to take effect.

Next is the vertex program.

The key is understanding the core idea here, which is the sentence in the red box: our pass still walks over the objects in the scene, but what we output is not the camera's perspective projection of them. Instead, we want to output a texture unwrap into the lightmap.

Here's where my rambling begins.

First, it doesn't matter what the objects in world space look like at the start, nor what their positions are. These have no direct bearing on the final rendering result. The final result depends on the position passed to the fragment and on the attribute information needed for shading: the former determines the layout of the output image, the latter the actual content.

So even though our input is the scene, we can use it to render this texture unwrap into the lightmap. How? The essential thing is to figure out what position to output.

Every point on an object in the scene has a corresponding point in this texture; that is the premise for sampling the lightmap in the first place. So to produce the lightmap, we only need to emit each such point, each one corresponding to a point in the scene. Hence the idea: transform points in the scene into their points in the lightmap, and pass those to the fragment as the position. This way we achieve: shade the indirect light, then save it into the lightmap.

Of course some details remain, namely turning the two-dimensional coordinates into three-dimensional ones; they are two-dimensional because the points corresponding to vertices in the lightmap are 2D. Just write z as 0 here. Also, because of a platform-related quirk, a ternary operator is introduced for z.
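A sketch of the vertex program this describes, following the tutorial's approach (struct and helper names as used earlier in these notes):

```
Interpolators MyLightmappingVertexProgram (VertexData v) {
    Interpolators i;
    // Replace the object-space position with the lightmap location,
    // so the scene gets rendered unwrapped into the lightmap.
    v.vertex.xy = v.uv1 * unity_LightmapST.xy + unity_LightmapST.zw;
    // The platform quirk: OpenGL needs the incoming z used somehow,
    // hence the ternary instead of a plain 0.
    v.vertex.z = v.vertex.z > 0 ? 0.0001 : 0;
    i.pos = UnityObjectToClipPos(v.vertex);
    i.uv = TRANSFORM_TEX(v.uv, _MainTex);
    return i;
}
```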

For the fragment, two outputs are required: an albedo and an emission; the pass is run twice, once for each. As for which to output on each run, Unity prepares a helper interface for the judgment. From its definition you can see it branches between albedo and emission, and it also performs some simple transformations such as encoding, pow, and clamp.

So we need to pass in several quantities we have computed; the next task is to compute those parameters. Then it's a simple call.
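A sketch of the fragment program, using UnityMetaInput / UnityMetaFragment from UnityMetaPass.cginc; the Get* helpers are assumed from earlier in these notes, and DiffuseAndSpecularFromMetallic is Unity's standard utility:

```
float4 MyLightmappingFragmentProgram (Interpolators i) : SV_TARGET {
    UnityMetaInput surfaceData;
    surfaceData.Emission = GetEmission(i);
    float oneMinusReflectivity;
    // Derive diffuse albedo and specular color from the metallic workflow.
    surfaceData.Albedo = DiffuseAndSpecularFromMetallic(
        GetAlbedo(i), GetMetallic(i),
        surfaceData.SpecularColor, oneMinusReflectivity
    );
    // UnityMetaFragment checks unity_MetaFragmentControl to decide
    // whether this run outputs albedo or emission.
    return UnityMetaFragment(surfaceData);
}
```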

Of course, there's still an emission issue: when baking, a self-illuminating object's emission has to go through this pass before it shows up in the lightmap, and this has to be configured on the material to make sure emissive objects actually take that path.

I ran into problems implementing this, mainly around the coordinate transformation. I had misunderstood, thinking the lightmap coordinates were passed to the fragment directly as the clip-space position. That's not the case: what the author writes uses the lightmap coordinates as the object-space position and then applies the MVP transformation before passing it to the fragment. But doing that had no effect, and while debugging I even wrote a plain return-red in the fragment; still nothing. That means the fragment program never ran at all. The reason comes up later.

Then I searched around and found the following write-up:

MetaPass in Unity5 - Esfog - Blog Park (cnblogs.com)

The main difference from ours lies in the coordinate transformation. It directly uses a packaged function whose parameters are the vertex position, the two UV sets, and two built-in ST vectors.

Looking up its implementation code:
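Paraphrased from memory of UnityMetaPass.cginc (Unity 5.x era), matching what the notes describe below; treat it as a sketch, not the exact source:

```
float4 UnityMetaVertexPosition (
    float4 vertex, float2 uv1, float2 uv2,
    float4 lightmapST, float4 dynlightmapST
) {
#if !defined(EDITOR_VISUALIZATION)
    if (unity_MetaVertexControl.x) {   // x flag: baking static GI
        vertex.xy = uv1 * lightmapST.xy + lightmapST.zw;
        // OpenGL needs the incoming vertex position used in some way
        vertex.z = vertex.z > 0 ? 1.0e-4f : 0.0f;
    }
    if (unity_MetaVertexControl.y) {   // y flag: baking dynamic GI
        vertex.xy = uv2 * dynlightmapST.xy + dynlightmapST.zw;
        vertex.z = vertex.z > 0 ? 1.0e-4f : 0.0f;
    }
#endif
    // Note: it ends with the view-projection transform, not the full MVP.
    return mul(UNITY_MATRIX_VP, float4(vertex.xyz, 1.0));
}
```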

We haven't defined this EDITOR_VISUALIZATION anywhere, and then there are two branches, as the link above explains: the x component stands for static GI, which is our case, and the y component for dynamic GI. So the two UVs mean one set is for static GI and the other for dynamic GI. We therefore enter the first branch, and the operation inside is the same as ours; the only difference is that at the end it multiplies by the VP matrix instead of the MVP.

As for why returning a plain red from the fragment produced nothing: it seems related to our multiplying by the M matrix. I guessed the M matrix might be a zero matrix, so that multiplying by it yields nothing and everything comes out black, which would also explain the MVP case. But after actually inspecting the M matrix, it turned out not to be zero. For now I'll just remember to write it this way; it's too weird.

TODO: go back and learn RenderDoc debugging.

For rough metals, the reflection of their own color is more pronounced, so the Standard shader compensates for this point in its meta pass:
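A sketch of that compensation as the tutorial presents it (SmoothnessToRoughness and GetSmoothness are assumed from context):

```
// Rougher metals bleed more of their own specular color into the
// baked bounce light, so add part of it to the meta-pass albedo.
float roughness = SmoothnessToRoughness(GetSmoothness(i)) * 0.5;
surfaceData.Albedo += surfaceData.SpecularColor * roughness;
```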

Another question is raised about the result we achieved earlier: surface detail after baking the lighting. The main issue is that baking only considers the surface geometry when computing the lighting result, so the detail our normal maps contribute is lost. To recover that detail, the normal information has to be taken into account. And the gap is not small.

Turning on Directional mode here does bring some improvement, but not much (the right side in the picture above is the improved one). The main reason is that the lightmap's sampling frequency is too low; with a high enough sampling frequency the detail would come through. (Also, the comparison above has no direct light and only baked indirect light, so it's normal for the normal-map detail to be faint.)

After turning on Directional, one more map is produced.

Previously, our lightmap just recorded the intensity of light received at each position in the scene, used directly as the brightness of the indirect-light color. Now an extra map is generated, paired with the lightmap above: intensity stays in the lightmap, and this map adds a rough direction, because indirect light, after all, has no exact direction.

With a rough light direction available, when sampling the baked light at a shading point we can perform a diffuse-like shading computation that takes the normal into account. Of course, since the direction is only rough (rough means blurred, which means filtered, which means averaged), the effect improves only mildly. That's why it's said the result is better when there is one dominant light.

As always, consider the relevant compilation instructions first.

The interesting part here concerns texture variables: a texture holds both data and a sampler state, the latter covering things like filter mode. Typically a texture has both, but to save sampler slots we can sometimes keep only one. Here, since the directional map is sampled in exactly the same way as the intensity map, Unity doesn't generate a sampler state for the former. In that case we have to reuse the intensity map's sampler state, so the sampling function changes: because the texture has no sampler state of its own, we must explicitly say whose sampler state to use, which means switching to a different macro that takes an extra parameter.

After fetching it, there are some decoding operations and a non-unit-length issue to handle; Unity's functions solve all of this for us, so we just call them.
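A sketch of the two steps, assuming the standard macro and decode function (UNITY_SAMPLE_TEX2D_SAMPLER, DecodeDirectionalLightmap):

```
// Sample the directional map using the intensity lightmap's sampler.
float4 lightmapDirection = UNITY_SAMPLE_TEX2D_SAMPLER(
    unity_LightmapInd, unity_Lightmap, i.lightmapUV
);
// lightMap is the decoded intensity sample from earlier; the decode
// handles the stored format and the non-unit direction, applying the
// normal in a diffuse-like fashion.
indirectLight.diffuse = DecodeDirectionalLightmap(
    lightMap, lightmapDirection, i.normal
);
```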

I did something silly earlier: I compared results before actually applying this, and of course saw no difference.

After applying the normal here, you can see a big improvement, though there are still gaps compared with direct light.

The effect keeps getting better and the problems get solved one by one. Now a new problem appears.

It concerns dynamic objects, which are completely unaffected by baked light. So when a scene has baked lights together with a dynamic object, the dynamic object looks out of place. An extreme case is shown above: with only baked lights, the dynamic object is simply black.

To give dynamic objects ambient light here, light probes are used.

Previously, ambient light was computed from the global environment data around the object; now it's computed from probes, which store the light information of their surroundings. You can think of a probe as light-absorbing: it soaks up the surrounding light and then uses it to illuminate dynamic objects.

This probe is made using spherical harmonics.

Here are some features about probes.

For example, the probes partition the space, the spherical-harmonic data gets interpolated according to the object's position, and the dynamic object is treated as a single point.
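A sketch of how the interpolated probe data is consumed in the shader, assuming the standard ShadeSH9 helper (non-lightmapped objects only):

```
// Dynamic (non-lightmapped) objects take their ambient term from the
// interpolated spherical harmonics instead of the lightmap.
#if !defined(LIGHTMAP_ON)
    indirectLight.diffuse += max(0, ShadeSH9(float4(i.normal, 1)));
#endif
```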

————————————————————————————————————————————————————————————————————————————————————————————————————

Rendering

The baked indirect light from before has a lot of limitations. Fully realtime and fully baked lighting each have their own drawbacks. The main point here: baked lighting has indirect light while realtime lighting doesn't, and indirect light produces good results, so consider combining the two.

This is indeed feasible.

Modifying the setting above isn't enough; we also need to configure the light itself: change the light's mode to Mixed.

Here let me correct an earlier misconception: I thought only the indirect light had been baked before, which is flat wrong. Since the only light in our scene was in Baked mode, that directional light was obviously baked entirely. The reason I thought only indirect light was baked is that in the shader, the result of sampling the lightmap is assigned to indirect.diffuse. Looking only at that, it seems only indirect light is baked, but it ignores that the direct-light calculation in that pass evaluates to 0: because the light is set to Baked, it isn't considered during the realtime direct-light calculation. So in fact all of the lighting was baked.

When we change the light's mode to Mixed, though, only the indirect light is baked, and the direct part is computed in realtime. That's why, holding only the indirect part, the lightmap above becomes darker.

Now a new problem appears: the shadow fade effect stops working; reducing the shadow distance makes this easier to observe.

The main reason is that Unity updated parts of the shadow-calculation machinery.

Previously, our shadow handling was a three-step routine: declare a coordinates variable, transform the coordinates, and finally sample the shadow map. The first two macros have now changed, becoming the form shown above. A noteworthy point: after the update, only directional lights get screen-space shadow coordinates placed in the interpolator. Also, lightmap coordinates are needed for the shadowmask, so the macro has to be given a set of lightmap coordinates.

In addition, there's a bug here, so we must initialize the interpolators ourselves.
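A sketch of the updated macros plus the initialization workaround, assuming the standard names (UNITY_SHADOW_COORDS, UNITY_TRANSFER_SHADOW, UNITY_INITIALIZE_OUTPUT):

```
struct Interpolators {
    float4 pos : SV_POSITION;
    UNITY_SHADOW_COORDS(5)   // replaces SHADOW_COORDS
    // ...
};

Interpolators MyVertexProgram (VertexData v) {
    Interpolators i;
    UNITY_INITIALIZE_OUTPUT(Interpolators, i);  // work around the bug
    i.pos = UnityObjectToClipPos(v.vertex);
    // Second argument: lightmap UVs, needed for the shadowmask path.
    UNITY_TRANSFER_SHADOW(i, v.uv1);
    return i;
}
```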

Observing the shadow now, there is still no fade. That's mainly because we're in Mixed mode, with realtime lighting and the baked lightmap active at the same time. Unity no longer does the fade for us here; we have to write it ourselves. Fortunately, we already wrote the related code for deferred rendering, so we can copy the values over with slight modification. Finally, we only need to do this manual fade when the relevant macro is defined, so add a check.
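A sketch of that manual fade, guarded by the macro Unity uses for this case; the fade helpers are standard UnityShadowLibrary functions:

```
#if HANDLE_SHADOWS_BLENDING_IN_GI
    // Fade the realtime shadow out toward the shadow distance.
    float viewZ = dot(_WorldSpaceCameraPos - i.worldPos, UNITY_MATRIX_V[2].xyz);
    float shadowFadeDistance = UnityComputeShadowFadeDistance(i.worldPos, viewZ);
    float shadowFade = UnityComputeShadowFade(shadowFadeDistance);
    attenuation = saturate(attenuation + shadowFade);
#endif
```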

Because Baked Indirect is expensive, a new mode is introduced: Shadowmask, which likewise has baked indirect light and realtime direct light. What the former (Baked Indirect) does: all objects, static and dynamic alike, go through realtime lighting; then the baked lightmap is sampled to supplement the indirect part. That part holds only indirect light, and the supplement applies only to static objects.

In Shadowmask mode, indirect light and shadow attenuation get stored for baking. To put it bluntly, I first took the former to be a traditional lightmap, since storing light inevitably carries shadow-attenuation information: even a lightmap that ignores direct light still contains some shadowing, and that would be the "shadow" here, the lightmap being simply a map recording indirect-light values. Then there is the shadowmask, a texture similar to our earlier screen-space shadow map (a map that is either 0 or 1). In the scheme where indirect light is ignored, you compute lighting normally and then sample the shadow map for attenuation; here the lightmap would correspond to the normal lighting computation and the shadowmask to the attenuation.

The ideas above are full of mistakes; it's all nonsense. Let me summarize and sort things out:

What is baked indirect light? What are baked shadows? How do objects receive baked shadows?

Consider a scene with only indirect light: picture a ball on a floor. With only indirect light, the coloring of the two is basically uniform, with the underside of the ball perhaps a bit darker. Recording the whole scene into a map at this point gives the baked indirect light.

Baked shadows, on the other hand, require direct light. When games speak of shadows, they mean direct shadows, i.e. shadows produced by direct light. Baked shadows arise in a scene with a baked light: the light illuminates the objects, and since the ball blocks light from reaching the floor, the floor shows a shadow. That is what we call baked shadows. The crucial point: the light must be a baked light; only baked lights are considered when generating the lightmap.

At first I had a question: aren't the baked shadows already generated when the lightmap is made? Sampling it directly yields the shadowed result, so why do we need a mask-like shadowmask that records shadowed areas as 0 or 1? The key to this misunderstanding is forgetting that dynamic objects also have to receive baked shadows.

For static objects receiving baked shadows, you indeed don't need the 0-or-1 shadow areas, because everything is baked; that is exactly how a scene whose lighting is all baked gets rendered: every color on a static object comes from sampling. But once dynamic objects enter the picture, that no longer works: the baked map holds no information about them at all, so there is nothing to sample.

If you want a dynamic object to receive baked shadows, the only option is something like our earlier shadow handling: prepare a shadowmask indicating which areas are baked-shadow areas. Since these are baked shadows, this map can be generated entirely in advance; at render time you just sample it. And for uniformity, static objects also compute their received baked shadows via the shadowmask, rather than going through lightmap sampling.

Lighting Mode: Baked Indirect - Unity Manual

Now back to the article. In Shadowmask mode, the quoted sentence says indirect light and shadow attenuation are stored in lightmaps. The indirect light part is unambiguous; the "shadow attenuation" mentioned here is the baked shadow, i.e. the shadow produced by the baked light.

Then, in Shadowmask mode, an additional texture is created to hold those baked shadows: the shadowmask. The next statement is a bit imprecise; to be precise, the shadowmask has nothing to do with dynamic objects, so dynamic objects make no appearance in it at all.

In the current result, both static and dynamic objects are missing baked shadows. Static objects shouldn't lack them in principle, since their baked-shadow information is obtainable from the lightmap; but that path isn't used: as said before, for uniformity their shadowing goes through shadowmask sampling just like dynamic objects. And we haven't written the shadowmask-sampling code yet, so there is no baked-shadow effect at all.

In fact, baked shadows shouldn't be stored in the lightmap at all: for one thing, there's no need to sample baked shadows from it, and for another, storing them there would interfere with sampling the indirect light. So next we add the sampling that restores the baked-shadow effect.

First, sample the shadowmask to obtain bakedAtten; this is essentially 0-or-1, like the screen-space shadow map, and serves as the baked shadow's attenuation. Then combine it with attenuation, the realtime shadow attenuation, plus the variable for the edge shadow fade; considering these three together yields the final attenuation value.
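A sketch of that combination, assuming the standard UnityShadowLibrary helpers (UnitySampleBakedOcclusion, UnityMixRealtimeAndBakedShadows):

```
float viewZ = dot(_WorldSpaceCameraPos - i.worldPos, UNITY_MATRIX_V[2].xyz);
float shadowFade = UnityComputeShadowFade(
    UnityComputeShadowFadeDistance(i.worldPos, viewZ)
);
// bakedAtten: 0-or-1 style attenuation sampled from the shadowmask.
float bakedAtten = UnitySampleBakedOcclusion(i.lightmapUV, i.worldPos);
// Blend realtime attenuation, baked attenuation, and the fade.
attenuation = UnityMixRealtimeAndBakedShadows(attenuation, bakedAtten, shadowFade);
```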

Note the documentation screenshot above: in pure Shadowmask mode, static objects have no realtime shadows, only baked ones; note the difference from Baked Indirect. In Distance Shadowmask mode, realtime shadows exist within the shadow distance. Dynamic objects still cast realtime shadows, so here: static objects receive the realtime shadows of dynamic objects and the baked shadows at the same time.

And then another problem appears: the boundary of baked shadows.

Supplement: dynamic objects never have the LIGHTMAP_ON macro defined, so they won't sample the lightmap.

When handling the shadowmask in deferred rendering, in fact only one more shadowmask texture needs to be stored; everything else is entirely similar to the earlier shadow handling. The check that follows mainly depends on platform support, because the G-buffer output goes through render targets, and the number of render targets is limited.

When it is used, a dot product with a selector is taken: as mentioned above, the mask image shows all red because the data is stored in the R channel; the dot product extracts the shadowmask channel of the specific light, which is then used as the attenuation.
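A sketch of that channel selection, using the built-ins I believe are involved (unity_ShadowMask, unity_OcclusionMaskSelector):

```
// Each baked light occupies one channel of the shadowmask;
// the selector picks out the current light's channel.
fixed4 mask = UNITY_SAMPLE_TEX2D(unity_ShadowMask, lightmapUV);
float bakedAtten = saturate(dot(mask, unity_OcclusionMaskSelector));
```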

Then, about the fade: first the shadow fade is computed according to the current pass's light-source type, using the old algorithm. We then modify shadowAtten, the current light's realtime shadow attenuation: the baked shadow attenuation is brought in, and a different function performs the final combination.


Reprinted from: blog.csdn.net/yinianbaifaI/article/details/127702725