Unity Getting Started Essentials 04(0)-More Complex Lighting

There are really a lot of knowledge points in this chapter of the book.

1. Rendering paths

The rendering path in Unity's built-in render pipeline can be set per Camera or in the Graphics settings, and offers the following options: Forward, Deferred, Legacy Vertex Lit, and Legacy Deferred.

Once we tell Unity which rendering path a Pass belongs to (via its LightMode tag), Unity fills in the corresponding built-in lighting variables for us so they can be used directly.

1.1 Forward rendering path

The pseudocode for forward rendering is roughly as follows.
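A sketch of what each pass does per object; Shading and writeFrameBuffer are placeholder names rather than real Unity functions:

Pass {
    for (each primitive in this model) {
        for (each fragment covered by this primitive) {
            if (failed in depth test) {
                // the fragment is not visible, discard it
                discard;
            } else {
                // shade the visible fragment with the lighting model
                float4 color = Shading(materialInfo, pos, normal, lightDir, viewDir);
                // write the result into the frame buffer
                writeFrameBuffer(fragment, color);
            }
        }
    }
}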

In one sentence, forward rendering means: run the Base Pass once, then run the Additional Pass once for every other per-pixel light whose range covers the object, and add the per-light results together.

With many lights this becomes very expensive, so Unity limits how many lights are rendered per-pixel (the Pixel Light Count in the Quality settings); the rest are rendered per-vertex or as SH lights.

You can also control this per light through its Render Mode setting:

If a light's Render Mode is set to Important, it is always treated as a per-pixel light; Not Important leaves it to per-vertex or SH lighting.

For a detailed reference, see ↓↓↓

Forward rendering path - Unity Manual (unity3d.com)

1.2 The two Passes of forward rendering

For information about the compilation directives used below, please refer to:

Declaring and using shader keywords in HLSL - Unity Manual (unity3d.com)

Shader variants are generated from these #pragma multi_compile directives.

The Base Pass is the most basic Pass: it computes one per-pixel directional light (the brightest one) plus all per-vertex and SH lights.

How to use it: add the following to the first Pass:

Tags { "LightMode"="ForwardBase" }
#pragma multi_compile_fwdbase // with this compile directive, the variables used later for light attenuation are assigned correctly
// i.e. the correct lighting variables can be accessed inside the Base Pass

And write the following in the second Pass:

Tags { "LightMode"="ForwardAdd" }
Blend One One // additive blending must be enabled to stack multiple lights; see the transparency article for the blend commands
#pragma multi_compile_fwdadd // this compile directive marks the Pass as the one that accumulates additional lights
// same effect as above (it makes the lighting variables accessible)

The Additional Pass runs once for every other per-pixel light that affects the object.

So that its result can be added on top of the Base Pass, we enable additive blending (Blend One One); for the blending commands themselves, see the transparency article in this "Getting Started with Unity" series. A minimal two-pass skeleton is sketched below.
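A minimal sketch of this two-pass structure, assuming a plain Lambert diffuse term (the shader name, the _Diffuse property and the DiffuseTerm helper are illustrative, not something Unity requires):

Shader "Custom/ForwardTwoPassSketch" {
    Properties {
        _Diffuse ("Diffuse", Color) = (1, 1, 1, 1)
    }
    SubShader {
        CGINCLUDE
        #include "UnityCG.cginc"
        #include "Lighting.cginc"
        #include "AutoLight.cginc"

        fixed4 _Diffuse;

        struct v2f {
            float4 pos : SV_POSITION;
            float3 worldNormal : TEXCOORD0;
            float3 worldPos : TEXCOORD1;
        };

        v2f vert (appdata_base v) {
            v2f o;
            o.pos = UnityObjectToClipPos(v.vertex);
            o.worldNormal = UnityObjectToWorldNormal(v.normal);
            o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
            return o;
        }

        // plain Lambert diffuse for one light
        fixed3 DiffuseTerm (float3 worldNormal, float3 lightDir) {
            return _LightColor0.rgb * _Diffuse.rgb *
                   saturate(dot(normalize(worldNormal), normalize(lightDir)));
        }
        ENDCG

        // Base Pass: ambient + the brightest per-pixel directional light
        Pass {
            Tags { "LightMode"="ForwardBase" }
            CGPROGRAM
            #pragma multi_compile_fwdbase
            #pragma vertex vert
            #pragma fragment fragBase
            fixed4 fragBase (v2f i) : SV_Target {
                fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
                fixed3 diffuse = DiffuseTerm(i.worldNormal, _WorldSpaceLightPos0.xyz);
                return fixed4(ambient + diffuse, 1.0);
            }
            ENDCG
        }

        // Additional Pass: runs once per extra per-pixel light, added onto the Base Pass
        Pass {
            Tags { "LightMode"="ForwardAdd" }
            Blend One One
            CGPROGRAM
            #pragma multi_compile_fwdadd
            #pragma vertex vert
            #pragma fragment fragAdd
            fixed4 fragAdd (v2f i) : SV_Target {
                #ifdef USING_DIRECTIONAL_LIGHT
                    float3 lightDir = _WorldSpaceLightPos0.xyz;              // directional light: xyz is the direction
                #else
                    float3 lightDir = _WorldSpaceLightPos0.xyz - i.worldPos; // point/spot light: xyz is the position
                #endif
                // light attenuation is omitted here; see the light attenuation section below
                return fixed4(DiffuseTerm(i.worldNormal, lightDir), 1.0);
            }
            ENDCG
        }
    }
    Fallback "Diffuse"
}

Note that only the Base Pass adds the ambient term; every Additional Pass contributes just its own light, which is why Blend One One simply accumulates the results.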

The effect is as follows: the green light is a directional light and the red one is a point light.

In the Frame Debugger, the rendering order is: the Base Pass renders the directional light first, then the Additional Pass renders the point lights one by one, from far to near.

1.3 Per-pixel light, SH light and per-vertex light (the vertex-lit path is now legacy)

Per-pixel lighting is fairly simple, so I'll skip it and focus on SH (spherical harmonics) lights (this came up in a Rubik's Cube Studio interview).

Spherical harmonics are a set of basis functions defined on the sphere; a small number of their coefficients can describe the low-frequency lighting arriving at an object surprisingly well.

Detailed reference: Spherical harmonic illumination - spherical harmonic function - Zhihu (zhihu.com)
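In a built-in render pipeline shader, the SH contribution (ambient and light-probe lighting) can be read with ShadeSH9 from UnityCG.cginc; a minimal sketch, where the helper name SampleSHLighting is just illustrative:

#include "UnityCG.cginc"

// Evaluate the scene's spherical-harmonic lighting for a world-space normal.
// Typically added to the ambient/diffuse term in the ForwardBase pass.
fixed3 SampleSHLighting (float3 worldNormal) {
    return ShadeSH9(half4(worldNormal, 1.0));
}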

2. Deferred rendering

If there are a large number of real-time lights in the game, forward rendering becomes very expensive, because every per-pixel light adds another full Additional Pass over the object.

Deferred rendering is available in the built-in render pipeline (as a camera Rendering Path), and newer versions of URP also offer a deferred path.

So what is deferred rendering?

Deferred rendering uses two Passes. The first (geometry) Pass writes surface information into the G-buffer, which is actually a set of render textures.

The second (lighting) Pass then computes lighting from the G-buffer and composites the final image.

Think of deferred rendering as a jigsaw puzzle. First, you need to put all the puzzle pieces (objects) on a large drawing board (buffer) according to certain rules (projection). Each piece has its own color, shape, texture and other information. This process is called geometry rendering.

Then, you need to place some light bulbs (light sources) on the drawing board. Each light bulb has its own color, brightness, direction and other information. This process is called light rendering.

Finally, you need to combine all the pieces on the drawing board with the information about the light bulb to form a complete puzzle (image). This process is called composition rendering.

Deferred rendering therefore performs much better when there are many real-time lights. But it also has some disadvantages, such as:

· It cannot support true (hardware) anti-aliasing: lighting is computed from the G-buffer after the geometry has already been rasterized, and keeping multisampled G-buffers at full screen resolution would consume a large amount of memory and bandwidth.

· It cannot handle semi-transparent objects: unlike forward rendering, deferred rendering performs lighting only after the scene has been projected into the screen-space G-buffer, and the G-buffer can only store the front-most surface for each pixel, so the surfaces behind a translucent object are lost. (Such objects are rendered with the forward path afterwards.)

· The graphics card must support multiple render targets (MRT).

How deferred rendering works in practice: two Passes are used. The first Pass renders the G-buffer, storing diffuse color, specular color, emission, smoothness, normals, depth and other surface data into screen-space render textures; lighting is then computed per screen pixel from this G-buffer. A sketch of the geometry pass's output targets is shown below.
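As a rough sketch, the geometry pass of the built-in deferred path writes to four render targets laid out roughly like this (the struct name DeferredOutput is illustrative, and the exact channel usage can differ between Unity versions):

// Illustrative MRT output of the deferred geometry pass (built-in pipeline)
struct DeferredOutput {
    float4 gBuffer0 : SV_Target0; // RGB: diffuse albedo,             A: occlusion
    float4 gBuffer1 : SV_Target1; // RGB: specular color,             A: smoothness
    float4 gBuffer2 : SV_Target2; // RGB: encoded world-space normal, A: unused
    float4 gBuffer3 : SV_Target3; // RGB: emission + indirect lighting (HDR accumulation)
};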

More details (including G-buffer layout diagrams) can be found at:

LearnOpenGL - Deferred Shading

The second Pass is used to calculate lighting. By default only the Standard lighting model is supported; if you want to use a different one, you must replace the built-in Internal-DeferredShading.shader with your own (assigned in the Graphics settings).

Built-in variables and functions accessible in deferred rendering:

3. Light source types

Directional light: has a direction but no position

Point light

Spot light

Area light

4. Light attenuation

  4.1 Light attenuation texture

Unity internally uses a texture named _LightTexture0 to calculate light attenuation. If the light uses a cookie, the falloff lookup texture is _LightTextureB0 instead.

In plain terms, this texture bakes the light's attenuation curve into texel values: a sample at (0, 0) gives the attenuation closest to the light source, and (1, 1) the attenuation farthest from it. To find out how strongly a given point on the object is lit, we transform the point into light space and use that light-space position to look up the attenuation texture. (This reflects the core workflow of shader programming: to know what an object looks like we need its color; to get the color we evaluate a lighting model; and to obtain each parameter of that model we transform the data into whichever space makes the calculation convenient.)

The complete light-attenuation code:

#ifdef USING_DIRECTIONAL_LIGHT
    fixed atten = 1.0;
#else
    #if defined (POINT) // point light
        float3 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1)).xyz;
        fixed atten = tex2D(_LightTexture0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL;
        // sample with the squared light-space distance (avoids a square root)
        // the UNITY_ATTEN_CHANNEL macro selects the channel of _LightTexture0 that stores the attenuation value
    #elif defined (SPOT) // spot light
        float4 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1));
        fixed atten = (lightCoord.z > 0) * tex2D(_LightTexture0, lightCoord.xy / lightCoord.w + 0.5).w * tex2D(_LightTextureB0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL;
    #else
        fixed atten = 1.0;
    #endif
#endif
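The sampled atten value is then multiplied into the per-light terms before the color is returned, for example (assuming ambient, diffuse and specular have already been computed earlier in the fragment shader):

// scale only the light-dependent terms by the attenuation
return fixed4(ambient + (diffuse + specular) * atten, 1.0);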

Let's focus on the spot light case (the more complicated one).

For a spot light, _LightTexture0 stores the spot light's cookie/mask (its projected shape, intensity and color mask), while _LightTextureB0 is the distance-falloff (attenuation) lookup texture.

1.

(lightCoord.z > 0): this factor checks that the point lies in front of the spot light. In a standard camera-style frustum the visible region sits along the negative z-axis, but the light-space transform used here is oriented so that points in front of the light have positive z, so anything behind the light is simply zeroed out. (For the projection background, see the perspective projection lectures in GAMES101: the object is first transformed into the light's space, then projected, and the projected result is used for the lookup.)

2. 

tex2D(_LightTexture0, lightCoord.xy / lightCoord.w + 0.5).w

This first tex2D samples the spot light's mask (cookie) texture, which carries its shape and intensity. lightCoord is the world-space position transformed by unity_WorldToLight into the light's projective space, so it is a homogeneous coordinate: (x, y, z, w) really represents the point (x/w, y/w, z/w). The projection gives w a value that grows with distance from the light, which is what makes farther points project smaller. Dividing xy by w therefore performs the perspective divide and recovers the actual 2D position on the light's projection plane, and adding 0.5 remaps that position into the [0, 1] texture range.

Why take only the w component? tex2D returns an RGBA value, and here the mask value is stored in the texture's alpha (fourth) channel, so .w reads it out as the attenuation factor.

3. 

tex2D(_LightTextureB0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL

This second lookup samples _LightTextureB0, the distance-falloff texture. dot(lightCoord, lightCoord) computes the squared distance from the light in light space (avoiding a square root, exactly as in the point-light branch); .rr duplicates that scalar into a two-component UV, which is used to sample the falloff texture that maps squared distance to an attenuation value.

Finally, the UNITY_ATTEN_CHANNEL macro selects which channel of the attenuation texture actually stores the attenuation value (it varies per platform).

4.2 Calculating attenuation with a mathematical formula

Instead of sampling the attenuation texture, attenuation can be computed directly from the distance to the light in light space, e.g. with a linear or inverse-square falloff; the downside is that it is hard to match Unity's built-in attenuation exactly.

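A minimal sketch, assuming a simple falloff on the squared light-space distance (this particular curve is just a common choice, not Unity's built-in one):

// compute attenuation analytically instead of sampling _LightTexture0
float3 lightCoord = mul(unity_WorldToLight, float4(i.worldPos, 1.0)).xyz;
float distSq = dot(lightCoord, lightCoord);   // squared light-space distance, avoids a sqrt
// unity_WorldToLight roughly maps the light's range to distance 1, so distSq is ~[0,1]
fixed atten = saturate(1.0 - distSq);         // fades to 0 at the edge of the light's range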

Origin blog.csdn.net/qq_24917263/article/details/129538184