unity-shader (intermediate)

1. More complex lighting

1. Unity rendering path

  • Forward
  • Deferred
  • Legacy Vertex Lit

2. LightMode

  1. LightMode defines the role a Pass plays in Unity's rendering pipeline. Its allowed values are listed below.

    Tag value - Description
    Always - Always rendered regardless of rendering path; no lighting is computed
    ForwardBase - Used in forward rendering; computes ambient light, the most important directional light, per-vertex/SH lights, and lightmaps
    ForwardAdd - Used in forward rendering; computes one additional per-pixel light per pass
    Deferred - Used in deferred rendering; renders the G-buffer
    ShadowCaster - Renders the object's depth into a shadow map or a depth texture
    PrepassBase (obsolete) - Legacy deferred; renders normals and the specular exponent
    PrepassFinal (obsolete) - Legacy deferred; renders the final color by combining textures, lighting and emission
    Vertex, VertexLMRGBM and VertexLM (obsolete) - Vertex-lit rendering
  2. The LightMode of a Pass is tied to the camera's rendering path: when one rendering path is in use, Passes written for the other path are not rendered.

  3. Every light has an option called Render Mode, which controls how the light is rendered.
    When set to Important: the light is forced to be a per-pixel light and is computed in an additional ForwardAdd pass, regardless of the Pixel Light Count quality setting.
    When set to Not Important: the light is computed per vertex (in the ForwardBase pass).
    When set to Auto: the number of lights treated as important is limited by Project Settings -> Quality -> Pixel Light Count.

  4. The most important (brightest) directional light is always a per-pixel light (Forward Base).

  5. By default Unity allows no more than 4 per-vertex lights; any lights beyond that are handled as SH (spherical harmonics, ambient) lights.
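The classification rules above can be sketched in a few lines of Python. This is an illustrative model, not Unity source code; the function name and the fixed budgets of 4 pixel lights and 4 vertex lights are assumptions for the example.

```python
# Sketch of how forward rendering buckets lights into per-pixel,
# per-vertex, and SH groups, following the Render Mode rules above.
def classify_lights(lights, pixel_light_count=4, vertex_light_count=4):
    """lights: list of (intensity, render_mode) tuples,
    render_mode in {"Important", "NotImportant", "Auto"}."""
    important = [l for l in lights if l[1] == "Important"]
    auto = sorted((l for l in lights if l[1] == "Auto"),
                  key=lambda l: l[0], reverse=True)
    not_important = [l for l in lights if l[1] == "NotImportant"]

    # Important lights are always per-pixel; Auto lights fill the
    # remaining pixel-light budget, brightest first.
    budget = max(0, pixel_light_count - len(important))
    per_pixel = important + auto[:budget]
    rest = auto[budget:] + not_important

    # Of what remains, at most 4 lights are per-vertex; the rest go to SH.
    per_vertex = rest[:vertex_light_count]
    sh = rest[vertex_light_count:]
    return per_pixel, per_vertex, sh
```

With 7 Auto point lights and the default budget, 4 end up per-pixel and the remainder per-vertex, matching the forward-rendering figure later in these notes.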

3. Stencil (Stencil Test/Mask Test)

1 Overview

Like the depth test, the stencil test determines whether a fragment is discarded. The depth test compares against the depth buffer, the alpha test compares a fragment's alpha against a cutoff, and the stencil test compares against the value in the stencil buffer. The stencil test runs before the depth test and the alpha test, i.e. before the fragment function executes.

Parameter - Description
Ref - The reference value; when the stencil operation allows it (e.g. Replace), this value is written into the stencil buffer for the current pixel
ReadMask - Masks both the reference value and the buffered value before comparison; default 255, rarely changed
WriteMask - Masks the bits written to the stencil buffer; default 255, rarely changed
Comp - The comparison function between the Ref value and the current stencil buffer value; default Always
Pass - Operation performed when the stencil test (and the depth test) passes
Fail - Operation performed when the stencil test fails
ZFail - Operation performed when the stencil test passes but the depth test fails

Parameters:

  1. Comp:
    Always - always pass
    Never - never pass
    Greater - greater than
    GEqual - greater than or equal to
    Less - less than
    LEqual - less than or equal to
    Equal - equal to
    NotEqual - not equal to
  2. Pass, Fail, ZFail:
    Keep - keep the current value (ignore the reference value)
    Zero - set the value to zero
    Replace - replace the current value with the reference value
    IncrSat - increment by 1 with clamping; once the value reaches 255 it stops increasing
    DecrSat - decrement by 1 with clamping; once the value reaches 0 it stops decreasing
    Invert - invert all bits, so 1 becomes 254
    IncrWrap - increment by 1 with wrapping, so 255 becomes 0
    DecrWrap - decrement by 1 with wrapping, so 0 becomes 255
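The comparison functions and stencil operations above can be modeled in Python. This is a software sketch of an 8-bit stencil buffer for illustration, not GPU or Unity code; the names mirror the ShaderLab keywords.

```python
# Minimal software model of the stencil test, assuming an 8-bit buffer.
COMPS = {
    "Always":   lambda ref, buf: True,
    "Never":    lambda ref, buf: False,
    "Greater":  lambda ref, buf: ref > buf,
    "GEqual":   lambda ref, buf: ref >= buf,
    "Less":     lambda ref, buf: ref < buf,
    "LEqual":   lambda ref, buf: ref <= buf,
    "Equal":    lambda ref, buf: ref == buf,
    "NotEqual": lambda ref, buf: ref != buf,
}

OPS = {
    "Keep":     lambda ref, buf: buf,
    "Zero":     lambda ref, buf: 0,
    "Replace":  lambda ref, buf: ref,
    "IncrSat":  lambda ref, buf: min(buf + 1, 255),   # clamps at 255
    "DecrSat":  lambda ref, buf: max(buf - 1, 0),     # clamps at 0
    "Invert":   lambda ref, buf: ~buf & 0xFF,         # 1 -> 254
    "IncrWrap": lambda ref, buf: (buf + 1) & 0xFF,    # 255 -> 0
    "DecrWrap": lambda ref, buf: (buf - 1) & 0xFF,    # 0 -> 255
}

def stencil_test(ref, buf, comp="Always", read_mask=255,
                 pass_op="Keep", fail_op="Keep"):
    """Returns (test_passed, new_buffer_value)."""
    passed = COMPS[comp](ref & read_mask, buf & read_mask)
    op = pass_op if passed else fail_op
    return passed, OPS[op](ref, buf)
```

For example, `Comp GEqual` with `Pass Replace` (as in the example below) writes the reference value whenever it is at least the buffered value.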
2. Example
  1. ZTest Always puts the object on top of all other objects; _refVal is then modified to control front-to-back ordering between such materials: the material with the larger _refVal renders on top.
Tags
{
    "LightMode" = "ForwardBase"
    "Queue" = "Geometry+2"
}
ZTest Always

Stencil
{
    Ref [_refVal]
    // The comparison passes when Ref is greater than or equal to the buffered value
    Comp GEqual
    // On pass, the reference value is written to the current pixel's stencil buffer
    Pass Replace
}
  1. Give the objects different Queue values. The lower-Queue object renders first and writes its _refVal into the stencil buffer ahead of the others; a later (higher-Queue) object whose _refVal is lower than the buffered value then fails the stencil test and cannot render there.
    Setting ColorMask 0 on the low-Queue object produces a hollow, cut-out-like effect

Hollow object

Tags { "RenderType"="Opaque" "Queue"="Geometry+1" }
LOD 100

Pass
{
    Tags { "LightMode"="ForwardBase" }
    ColorMask 0
    Stencil
    {
        Ref [_refVal]
        Comp GEqual
        Pass Replace
    }
...
Tags { "RenderType"="Opaque" "Queue"="Geometry+2" }
LOD 100

Pass
{
    Tags { "LightMode"="ForwardBase" }
    Stencil
    {
        Ref [_refVal]
        Comp GEqual
        Pass Replace
    }
...
  1. Set Comp in the Stencil block to Equal and give the object a higher Queue. The object then renders only where an object with the same _refVal has already written to the stencil buffer, leaving a cropped image.
    This achieves a mask-like effect.

Masked object

Tags { "RenderType"="Opaque" "Queue" = "Geometry+1" }
LOD 100

Pass
{
    Tags { "LightMode" = "ForwardBase" }
    Stencil
    {
        Ref [_refVal]
        Comp Equal
        Pass Replace
    }

Mask object

Tags { "RenderType"="Opaque" "Queue"="Geometry" }
LOD 100

Pass
{
    Tags { "LightMode"="ForwardBase" }
    Stencil
    {
        Ref [_refVal]
        Comp Always
        Pass Replace
    }

4. Forward rendering

1 Overview

[figure omitted]
As shown in the figure, of the 7 point lights only 4 become ForwardAdd (per-pixel) lights and 1 becomes the ForwardBase light.


2. Implementation principle
  1. Per-pixel lights are rendered in a separate, additional Pass, which must set Tags { "LightMode"="ForwardAdd" }
  2. Blending is enabled in the Additional Pass; without it, each light's result would overwrite the previous rendering. Blend One One is normally used.
  3. #pragma multi_compile_fwdadd is defined in the Pass so that it can receive the lighting variables (light attenuation values, etc.) of lights other than the main directional light. Lights in this Pass have no shadows by default; use #pragma multi_compile_fwdadd_fullshadows instead to enable shadows.
  4. By default a light's intensity does not decrease with distance, so a set of macros is needed to compute the attenuation.
    • Include the library: #include "AutoLight.cginc"
    • LIGHTING_COORDS(x, y): declares the sampling coordinates for the shadow texture (_ShadowCoord) and for the attenuation texture (_LightCoord). The parameters (x, y) refer to TEXCOORDx and TEXCOORDy
    • TRANSFER_VERTEX_TO_FRAGMENT(o);: works with the LIGHTING_COORDS macro. It computes the light-space coordinate according to the light type handled by the pass (spot, point, or directional) and performs shadow-related setup. It mainly computes the distance between the light and the object so LIGHT_ATTENUATION can evaluate the falloff. Note: the variable with the SV_POSITION semantic in parameter o must be named pos, because TRANSFER_VERTEX_TO_FRAGMENT refers to that name when computing shadow coordinates.
    • LIGHT_ATTENUATION(i);: works with the LIGHTING_COORDS macro to compute the light attenuation value.
  5. The base Pass implements the per-vertex lights and the SH (spherical harmonics, ambient) term. Besides setting Tags { "LightMode"="ForwardBase" }, #pragma multi_compile_fwdbase must be defined so the pass receives the variables of lights other than the main directional light.
  6. The vertex shader must branch on LIGHTMAP_OFF (lightmapping disabled; the light is treated as not important) and VERTEXLIGHT_ON (per-vertex lights enabled)
  7. ShadeSH9(float4(v.normal, 1.0)); uses spherical harmonics to generate soft ambient lighting in place of the regular ambient term. The regular ambient color is used when LIGHTMAP_OFF is not defined.
  8. Compute the 4 per-vertex lights with Shade4PointLights(lpos, lpos, lpos, lrgb, lrgb, lrgb, lrgb, latten, pos, normal);
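The shape of step 8's Shade4PointLights can be approximated as Lambert diffuse for point lights with a distance falloff. This is a hedged sketch: Unity's actual implementation differs in detail (it takes structure-of-arrays parameters and uses its own attenuation curve), and the helper name here is illustrative.

```python
import math

# Rough sketch of per-vertex point-light shading: Lambert diffuse
# attenuated with squared distance, summed over the lights.
def shade_point_lights(light_positions, light_colors, attens, pos, normal):
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    out = [0.0, 0.0, 0.0]
    for lpos, lcol, atten in zip(light_positions, light_colors, attens):
        to_light = sub(lpos, pos)
        dist2 = dot(to_light, to_light)
        inv_len = 1.0 / math.sqrt(max(dist2, 1e-6))
        # Lambert term: cosine of the angle between normal and light direction
        ndotl = max(0.0, dot(normal, to_light) * inv_len)
        # Simple distance falloff controlled by the per-light atten factor
        falloff = 1.0 / (1.0 + dist2 * atten)
        for i in range(3):
            out[i] += lcol[i] * ndotl * falloff
    return out
```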
2. Example:
Tags { "RenderType"="Opaque" }
LOD 100

Pass
{
    Tags { "LightMode"="ForwardBase" }
    CGPROGRAM
    // Only with this directive does the pass receive the lighting variables of other lights, e.g. light attenuation values.
    #pragma multi_compile_fwdbase
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"
    
    fixed4 _Diffuse;
    fixed4 _Specular;
    fixed _Gloss;

    struct v2f
    {
        float4 vertex : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
        float3 worldPos : TEXCOORD1;
        float3 vertexLight : TEXCOORD2;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);

        // Macro test: lightmapping is off (the light is treated as not important)
        #ifdef LIGHTMAP_OFF
            // Spherical harmonics generate soft ambient lighting that replaces the regular ambient term
            float3 shLight = ShadeSH9(float4(v.normal, 1.0));
            o.vertexLight = shLight;
            // Macro test: per-vertex lights are enabled
            #ifdef VERTEXLIGHT_ON
                // 4 per-vertex lights
                fixed3 vertexLight = Shade4PointLights(
                    unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0, 
                    unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb,
                    unity_4LightAtten0, o.worldPos, o.worldNormal
                    );
                o.vertexLight += vertexLight;
            #endif
        #else
            // Ambient light
            o.vertexLight = UNITY_LIGHTMODEL_AMBIENT.xyz;
        #endif

        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
        // Diffuse
        fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * saturate(dot(i.worldNormal, worldLightDir));
        // Specular
        fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
        fixed3 halfDir = normalize(worldLightDir + viewDir);
        fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(saturate(dot(i.worldNormal, halfDir)), _Gloss);
        return fixed4(diffuse + specular + i.vertexLight, 1);
    }
    ENDCG
}

Pass
{
    Tags { "LightMode"="ForwardAdd" }
    // Enable blending; without it this pass would overwrite the previous rendering
    Blend One One
    CGPROGRAM
    #pragma multi_compile_fwdadd
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"
    #include "AutoLight.cginc"

    fixed4 _Diffuse;
    fixed4 _Specular;
    fixed _Gloss;

    struct v2f
    {
        // This variable must be named pos: the helper macro TRANSFER_VERTEX_TO_FRAGMENT refers to that name when computing shadow coordinates
        float4 pos : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
        float3 worldPos : TEXCOORD1;
        // Declares the sampling coordinates for the shadow texture (_ShadowCoord) and the attenuation texture (_LightCoord). (2, 3) refers to TEXCOORD2 and TEXCOORD3
        LIGHTING_COORDS(2, 3)
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);
        // Works with the LIGHTING_COORDS macro: computes the light-space coordinate according to the light type handled by this pass (spot, point, or directional) and does shadow-related setup. It mainly computes the light-to-object distance so LIGHT_ATTENUATION can evaluate the falloff.
        TRANSFER_VERTEX_TO_FRAGMENT(o);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // Not all lights are directional, so this function must be used to obtain the correct light direction
        fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
        fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * saturate(dot(i.worldNormal, worldLightDir));
        // Specular
        fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
        fixed3 halfDir = normalize(worldLightDir + viewDir);
        fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(saturate(dot(i.worldNormal, halfDir)), _Gloss);

        // Works with the LIGHTING_COORDS macro to compute light attenuation; by default light intensity does not fall off with distance, so this macro supplies the falloff.
        fixed atten = LIGHT_ATTENUATION(i);
        return fixed4((diffuse + specular) * atten, 1);
    }

    ENDCG
}
3. Additional points
  1. Some lighting features are supported in Base Pass, such as lightmap.
  2. Ambient light and self-illumination are calculated in Base Pass.
  3. Forward rendering generally defines a Base Pass (except for double-sided rendering, etc.) and an Additional Pass. A Base Pass will only be executed once, and an Additional Pass will be called multiple times according to the number of other per-pixel light sources affecting the object, that is, each per-pixel light source will execute an Additional Pass.
  4. A ForwardAdd Pass must be used together with a ForwardBase Pass, otherwise it is ignored by Unity (Unity 5.x). In newer versions it is not ignored, but it renders incorrectly.
  5. Forward passes render normally under both the Forward and the Deferred rendering paths.
  6. ForwardAdd executes one Pass per per-pixel light, so do not evaluate the unity_4LightPos[x,y,z]0 data in ForwardAdd; those lights would be counted twice.

4. Deferred Rendering

1 Overview
  1. Deferred rendering mainly uses two Passes.
    In the first Pass no lighting is calculated; it only determines which fragments are visible (mainly via the depth buffer) and stores their related data in the G-buffer.
    In the second Pass, the per-fragment information in the G-buffer (surface normal, view direction, diffuse coefficient, etc.) is used to perform the actual lighting calculation.

  2. Shortcomings:

    • No true anti-aliasing support.
    • Cannot handle translucent objects.
    • Hardware requirements: to use deferred rendering, the GPU must support MRT, Shader Model 3.0 or above, depth render textures, and double-sided stencil buffers.

    Advantages:

    • Every light is a per-pixel light. Shading cost is O(m*n) for forward rendering (m objects, n per-pixel lights) but O(m+n) for deferred rendering.
    • Depth values are directly available for post-processing and similar effects.
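The complexity claim above can be made concrete: forward rendering shades every object once per per-pixel light, while deferred rendering writes each object to the G-buffer once and then evaluates each light once against the G-buffer. A toy cost model (the function names are illustrative, and real costs also depend on overdraw and screen coverage):

```python
# Count of shading passes under each path: forward repeats the object's
# fragment work for every per-pixel light; deferred pays for each object
# once (G-buffer write) plus each light once (screen-space evaluation).
def forward_cost(num_objects, num_lights):
    return num_objects * num_lights

def deferred_cost(num_objects, num_lights):
    return num_objects + num_lights
```

With 100 objects and 10 lights, forward performs 1000 shading passes versus 110 for deferred, which is why deferred scales to many dynamic lights.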

G-Buffer

  1. The first Pass renders the G-buffer. In this Pass the object's diffuse color, specular color, smoothness, normal, emission and depth are rendered into the screen-space G-buffer; each object executes this Pass only once.
    The second Pass computes the actual lighting model: using the data rendered by the previous Pass, it calculates the final lit color and stores it in the frame buffer.

  2. The default G-buffer (note: the render-texture contents differ between Unity versions) contains the following render textures (Render Texture, RT).

    RT0: format ARGB32 (8 bits per channel); RGB stores the diffuse color, A stores occlusion.
    RT1: format ARGB32 (8 bits per channel); RGB stores the specular color, A stores the specular exponent.
    RT2: format ARGB2101010 (10 bits per color channel); RGB stores the world-space normal, A is unused.
    RT3: format ARGB2101010 (LDR) / ARGBHalf (16 bits per channel, HDR); stores emission + lightmap + reflection probes. A depth buffer and a stencil buffer are used in addition.

    When computing lighting in the second pass, only Unity's built-in Standard lighting model can be used by default; its configuration location is shown in the figure.
    [figure omitted]
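The encode/decode conventions used by the example shaders below (normals mapped from [-1, 1] into [0, 1] for RT2, and the specular exponent stored as gloss/32 in RT1's alpha) round-trip exactly. A small Python sketch of those pairs; the function names are illustrative, not Unity API:

```python
# Pack/unpack pairs mirroring the G-buffer conventions of the examples.
def pack_normal(n):
    # [-1, 1] -> [0, 1] so the normal fits in an unsigned texture channel
    return tuple(c * 0.5 + 0.5 for c in n)

def unpack_normal(p):
    # [0, 1] -> [-1, 1], the inverse mapping used by the lighting pass
    return tuple(c * 2.0 - 1.0 for c in p)

def pack_gloss(gloss):
    # Specular exponent stored as gloss / 32 in gbuffer1.a
    return gloss / 32.0

def unpack_gloss(a):
    # Decoded in the lighting pass as gbuffer1.a * 32
    return a * 32.0
```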

2. Implementation principle
  1. The Pass that renders the G-buffer must set Tags { "LightMode" = "Deferred" }
  2. Only this one Pass needs to be implemented in the shader; the second pass, which computes the actual lighting model, is configured in Unity.

Deferred rendering: the Pass that renders the G-buffer

Tags { "RenderType"="Opaque" }
LOD 100

Pass
{
    Tags { "LightMode" = "Deferred" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"
    
    sampler2D _MainTex;
    float4 _MainTex_ST;
    fixed4 _Diffuse;
    fixed4 _Specular;
    fixed _Gloss;

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float3 worldNormal : TEXCOORD1;
        float3 worldPos : TEXCOORD2;
    };

    // The four render targets output to the G-buffer
    struct deferredOutput
    {
        float4 gBuffer0 : SV_TARGET0;
        float4 gBuffer1 : SV_TARGET1;
        float4 gBuffer2 : SV_TARGET2;
        float4 gBuffer3 : SV_TARGET3;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);
        return o;
    }

    deferredOutput frag (v2f i)
    {
        deferredOutput o;
        fixed3 color = tex2D(_MainTex, i.uv).rgb * _Diffuse.rgb;
        o.gBuffer0.rgb = color;
        o.gBuffer0.a = 1;
        o.gBuffer1.rgb = _Specular.rgb;
        o.gBuffer1.a = _Gloss / 32.0;
        o.gBuffer2 = float4(normalize(i.worldNormal) * 0.5 + 0.5, 1);
        
        // In non-HDR (LDR) mode, the color must be log-encoded before writing
		#if !defined(UNITY_HDR_ON)
			color.rgb = exp2(-color.rgb);
		#endif
		
        o.gBuffer3 = fixed4(color, 1);
        return o;
    }

    ENDCG
}

Overriding Unity's default deferred lighting model with a custom lighting pass

SubShader
{
    Pass
    {
        ZWrite Off
        Blend [_SrcBlend] [_DstBlend]
        // Blend One One

        CGPROGRAM
        // Requires Shader Model 3.0 or above
        #pragma target 3.0
        #pragma vertex vert
        #pragma fragment frag
        // This directive exposes the deferred light-pass variables to this pass.
        #pragma multi_compile_lightpass
        // Exclude hardware that does not support MRT
        #pragma exclude_renderers nomrt
        // Hook up Unity's HDR keyword
        #pragma multi_compile __ UNITY_HDR_ON

        #include "UnityCG.cginc"
        // Helper libraries
        #include "UnityDeferredLibrary.cginc"
        #include "UnityGBuffer.cginc"

        sampler2D _CameraGBufferTexture0;
        sampler2D _CameraGBufferTexture1;
        sampler2D _CameraGBufferTexture2;

        unity_v2f_deferred vert (appdata_base v)
        {
            unity_v2f_deferred o;
            o.pos = UnityObjectToClipPos(v.vertex);
            // Compute the screen-space position
            o.uv = ComputeScreenPos(o.pos);
            o.ray = UnityObjectToViewPos(v.vertex) * float3(-1, -1, -1);
            // _LightAsQuad is 1 when processing a full-screen quad (directional light), 0 otherwise
            o.ray = lerp(o.ray, v.normal, _LightAsQuad);
            return o;
        }

        
        #ifdef	UNITY_HDR_ON
        half4
        #else
        fixed4
        #endif
        frag (unity_v2f_deferred i) : SV_Target
        {
            float3 worldPos;
            float2 uv;
            half3 lightDir;
            // Attenuation
            float atten;
            // Shadow fade distance
            float fadeDist;
            // Common lighting-data calculation (direction, attenuation, etc.)
            UnityDeferredCalculateLightParams (i, worldPos, uv, lightDir, atten, fadeDist);

            // Resolve the light color
            fixed3 lightColor = _LightColor.rgb * atten;

            // Read the G-buffer
            half4 gbuffer0 = tex2D(_CameraGBufferTexture0, uv);
            half4 gbuffer1 = tex2D(_CameraGBufferTexture1, uv);
            half4 gbuffer2 = tex2D(_CameraGBufferTexture2, uv);

            // Diffuse color
            half3 diffuseColor = gbuffer0.rgb;
            // Specular color
            half3 specularColor = gbuffer1.rgb;
            // Specular exponent
            fixed3 gloss = gbuffer1.a * 32;

            float3 worldNormal = normalize(gbuffer2.xyz * 2 - 1);
            // View direction, i.e. normalize(_WorldSpaceCameraPos - worldPos)
            fixed3 viewDir = normalize(UnityWorldSpaceViewDir(worldPos));
            fixed3 halfDir = normalize(lightDir + viewDir);

            // Diffuse
            half3 diffuse = lightColor * diffuseColor * saturate(dot(worldNormal, lightDir));
            // Specular
            half3 specular = lightColor * specularColor * pow(saturate(dot(worldNormal, halfDir)), gloss);

            half4 color = float4(diffuse + specular, 1);

            #ifdef UNITY_HDR_ON
            return color;
            #else 
            return exp2(-color);
            #endif
        }
        ENDCG
    }
        
    // LDR: Blend DstColor Zero    HDR: Blend One One
    // Decode pass, mainly for LDR decoding
    Pass
    {
        // Rendered in all cases
        ZTest Always
        Cull Off
        ZWrite Off
        Stencil
        {
            // _StencilNonBackground: Unity-provided stencil value that masks out the skybox
            Ref [_StencilNonBackground]
            ReadMask [_StencilNonBackground]

            CompBack Equal
            CompFront Equal
        }

        CGPROGRAM
        // Requires Shader Model 3.0 or above
        #pragma target 3.0
        #pragma vertex vert
        #pragma fragment frag
        // Exclude hardware that does not support MRT
        #pragma exclude_renderers nomrt

        #include "UnityCG.cginc"

        sampler2D _LightBuffer;

        struct v2f
        {
            float4 vertex : SV_POSITION;
            float2 texcoord : TEXCOORD0;
        };

        v2f vert (float4 vertex: POSITION, float2 texcoord: TEXCOORD0)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(vertex);
            o.texcoord = texcoord.xy;
            // Applies the current eye's scale and offset to the UV texture coordinates. This happens only when UNITY_SINGLE_PASS_STEREO is defined; otherwise the coordinates are returned unchanged.
            #ifdef UNITY_SINGLE_PASS_STEREO
            o.texcoord = TransformStereoScreenSpaceTex(o.texcoord, 1.0);
            #endif
            return o;
        }

        
        fixed4 frag (v2f i) : SV_Target
        {
            return -log2(tex2D(_LightBuffer,i.texcoord));
        }
        ENDCG
    }
}
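The LDR path in the shaders above encodes colors as exp2(-color) when writing to the light buffer and decodes with -log2(x) in the final pass. The two operations are exact inverses, which a quick Python check confirms (the helper names are illustrative):

```python
import math

# The LDR encode used when writing gbuffer3 / the light buffer: exp2(-c)
def encode_ldr(c):
    return tuple(2.0 ** (-x) for x in c)

# The decode applied by the final pass: -log2(x), recovering the original
def decode_ldr(c):
    return tuple(-math.log2(x) for x in c)
```

Encoding this way keeps the accumulated lighting in a displayable [0, 1]-ish range in an LDR buffer while still allowing values to be multiplicatively blended (Blend DstColor Zero) across light passes.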

5. Light attenuation

  1. Unity internally uses a texture named _LightTexture0 to compute light falloff. We usually only care about the color values on the diagonal of _LightTexture0, which give the attenuation for points at different positions in light space.
  2. For example, the (0, 0) texel holds the attenuation of a point coinciding with the light source, while the (1, 1) texel holds the attenuation at the farthest point of interest in light space.
  3. Transform the world position by _LightMatrix0 to get the light-space position, sample the texture with the squared light-space distance, then use the macro UNITY_ATTEN_CHANNEL to select the component of the attenuation texture that stores the falloff value.
// Transform the world position into light space (0~1)
float3 lightCoord = mul(_LightMatrix0, float4(i.worldPos, 1)).xyz;
// To reduce cost, the texture is sampled with the squared distance
// UNITY_ATTEN_CHANNEL selects the component of the attenuation texture that stores the falloff value
fixed atten = tex2D(_LightTexture0, dot(lightCoord, lightCoord).rr).UNITY_ATTEN_CHANNEL;

Mathematical formula to calculate light attenuation

float distance = length(_WorldSpaceLightPos0.xyz - i.worldPosition.xyz);  
atten = 1.0/distance;  

6. Shadow Mapping

1. Principle
  1. Shadow Map: first place a camera at the light source's position; the areas of the scene that this camera cannot see are the light's shadow areas.
  2. Screenspace Shadow Map: Unity first obtains the light's shadow map texture and the camera's depth texture by calling the Pass whose LightMode is ShadowCaster. It then derives a screen-space shadow map from these two textures: if the surface depth recorded in the camera's depth texture is greater than the corresponding depth converted into the light's shadow map, the surface is visible to the camera but lies in the light's shadow. The screen-space shadow map thus contains all shadowed areas in screen space, and an object that should receive shadows simply samples this map in its shader.

Receiving shadows from other objects and casting shadows onto other objects are two separate processes.
  3. For an object to receive shadows, sample the shadow map texture (including the screen-space shadow map) in the shader and multiply the sampling result into the final lighting result to produce the shadow effect.
  4. For an object to cast shadows onto other objects, it must be added to the computation of the light's shadow map texture, so other objects find its data when sampling that texture. In Unity this is done by executing a Pass with LightMode set to ShadowCaster for the object. Unity also uses this Pass to generate the camera's depth texture when screen-space shadow mapping is used.
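The depth comparison at the heart of shadow mapping can be written as a one-line test: a point is shadowed when its depth as seen from the light is farther than the occluder depth stored in the shadow map, with a small bias against self-shadowing artifacts. A toy model follows; the dict-as-texture representation is purely illustrative:

```python
# Toy screen-space shadow test: compare the fragment's light-space depth
# against the closest depth the light recorded for that texel.
def in_shadow(shadow_map, uv, fragment_light_depth, bias=1e-3):
    """shadow_map: dict mapping (u, v) texel -> closest depth from the light."""
    stored = shadow_map.get(uv, 1.0)  # 1.0 = far plane when nothing rendered there
    return fragment_light_depth > stored + bias
```

The bias term mirrors the depth bias real shadow mapping needs so a surface does not shadow itself due to limited depth precision.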

2. Detailed explanation
  1. Use SHADOW_COORDS(3) to declare the data needed to receive shadows; (3) refers to TEXCOORD3.
  2. Use TRANSFER_SHADOW(o); together with the SHADOW_COORDS macro to fill in the variables that SHADOW_COORDS declares.
  3. Use UNITY_LIGHT_ATTENUATION(shadow, i, i.worldPos); together with the SHADOW_COORDS macro; it combines light falloff and shadows.
    Note: when receiving shadows this way, make sure the object is also rendered by a shadow-casting Pass (LightMode set to ShadowCaster), otherwise strange transparency artifacts appear.
  4. In the ForwardAdd Pass, the shadow variant directive #pragma multi_compile_fwdadd_fullshadows must be added

AlphaTest (alpha cutout)

  1. Requires Tags { "RenderType"="TransparentCutOut" "Queue"="AlphaTest" "IgnoreProjector" = "True" }

AlphaBlend (semi-transparent)

  1. The LightMode=ForwardBase Pass needs ZWrite Off and Blend SrcAlpha OneMinusSrcAlpha
  2. Normally transparent objects should set Queue=Transparent, but that also means they can no longer receive shadows; the highest render queue at which objects still receive shadows is AlphaTest
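The alpha cutout itself reduces to one comparison, mirroring the clip(texcol.a * _Diffuse.a - _Cutoff) call used in the shadow-caster example later in this section: fragments whose combined alpha falls below the cutoff are discarded. An illustrative sketch:

```python
# clip(x) discards the fragment when x < 0; a fragment survives the
# alpha test only when its combined alpha reaches the cutoff.
def alpha_test(tex_alpha, diffuse_alpha, cutoff):
    return tex_alpha * diffuse_alpha - cutoff >= 0
```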
2. Example:
  1. Cast shadows (requires an additional pass)
Pass
{
    Name "ShadowCaster"
    Tags { "LightMode" = "ShadowCaster" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma target 2.0
    // Shadow variant macros. Only with this directive are the shadow variables available in this pass.
    #pragma multi_compile_shadowcaster
    // Enables instanced rendering of the shadow pass, casting shadows anywhere shadows can be cast.
    #pragma multi_compile_instancing 
    #include "UnityCG.cginc"

    struct v2f {
        // Declares all the data the shadow-caster pass needs to output (shadow direction/depth/distance as applicable), plus the clip-space position.
        V2F_SHADOW_CASTER;
        UNITY_VERTEX_OUTPUT_STEREO
    };

    v2f vert(appdata_base v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);
        UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
        // Vertex-shader part supporting normal-offset shadows; requires position and normal in the vertex input.
        TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
        return o;
    }

    float4 frag(v2f i) : SV_Target
    {
        SHADOW_CASTER_FRAGMENT(i)
    }
    ENDCG
}
  1. Receive shadows (modify the original ForwardBase and ForwardAdd channels)
Pass
{
    Tags { "LightMode"="ForwardBase" }
    CGPROGRAM
    // Only with this directive does the pass receive the lighting variables of other lights, e.g. light attenuation values.
    #pragma multi_compile_fwdbase
    #pragma vertex vert
    #pragma fragment frag

    #include "AutoLight.cginc"
    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"
    
    fixed4 _Diffuse;
    fixed4 _Specular;
    fixed _Gloss;

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
        float3 worldPos : TEXCOORD1;
        float3 vertexLight : TEXCOORD2;
        // Data declaration for receiving shadows. (3) refers to TEXCOORD3
        SHADOW_COORDS(3)
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);

        #ifdef LIGHTMAP_OFF
            float3 shLight = ShadeSH9(float4(v.normal, 1.0));
            o.vertexLight = shLight;
            #ifdef VERTEXLIGHT_ON
                fixed3 vertexLight = Shade4PointLights(
                    unity_4LightPosX0, unity_4LightPosY0, unity_4LightPosZ0, 
                    unity_LightColor[0].rgb, unity_LightColor[1].rgb, unity_LightColor[2].rgb, unity_LightColor[3].rgb,
                    unity_4LightAtten0, o.worldPos, o.worldNormal
                    );
                o.vertexLight += vertexLight;
            #endif
        #else
            o.vertexLight = UNITY_LIGHTMODEL_AMBIENT.xyz;
        #endif
        // Works with the SHADOW_COORDS macro; fills in the variables SHADOW_COORDS declares.
        TRANSFER_SHADOW(o);

        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
        fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * saturate(dot(i.worldNormal, worldLightDir));
        fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
        fixed3 halfDir = normalize(worldLightDir + viewDir);
        fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(saturate(dot(i.worldNormal, halfDir)), _Gloss);

        // Works with the SHADOW_COORDS macro to get the shadow value:
        // fixed shadow = SHADOW_ATTENUATION(i);
        // UNITY_LIGHT_ATTENUATION combines attenuation and shadow. Since the ForwardBase per-pixel light is usually directional, with attenuation fixed at 1, it is equivalent to the line above here.
        UNITY_LIGHT_ATTENUATION(shadow, i, i.worldPos);

        return fixed4((diffuse + specular) * shadow + i.vertexLight, 1);
    }
    ENDCG
}

Pass
{
    Tags { "LightMode"="ForwardAdd" }
    Blend One One
    CGPROGRAM
    // Shadow variant directive for the additional pass (replaces multi_compile_fwdadd)
    #pragma multi_compile_fwdadd_fullshadows
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"
    #include "AutoLight.cginc"

    fixed4 _Diffuse;
    fixed4 _Specular;
    fixed _Gloss;

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
        float3 worldPos : TEXCOORD1;
        // Already includes SHADOW_COORDS(3)
        LIGHTING_COORDS(2, 3)
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldPos = mul(unity_ObjectToWorld, v.vertex);
        // Already includes TRANSFER_SHADOW(o);
        TRANSFER_VERTEX_TO_FRAGMENT(o);
        return o;                
    }

    fixed4 frag (v2f i) : SV_Target
    {
        fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
        fixed3 diffuse = _LightColor0.rgb * _Diffuse.rgb * saturate(dot(i.worldNormal, worldLightDir));
        fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
        fixed3 halfDir = normalize(worldLightDir + viewDir);
        fixed3 specular = _LightColor0.rgb * _Specular.rgb * pow(saturate(dot(i.worldNormal, halfDir)), _Gloss);
        
        // fixed atten = LIGHT_ATTENUATION(i);
        // Computes both light attenuation and shadows
        UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
        return fixed4((diffuse + specular) * atten, 1);
    }
    ENDCG
}
  1. AlphaTest shadow casting (requires an additional Pass)
Pass 
{
	Tags { "LightMode" = "ShadowCaster" }

	CGPROGRAM
	#pragma vertex vert
	#pragma fragment frag
	#pragma target 2.0
	#pragma multi_compile_shadowcaster
	#pragma multi_compile_instancing 
	#include "UnityCG.cginc"

	uniform sampler2D _MainTex;
	uniform float4 _MainTex_ST;
	uniform fixed _Cutoff;
	uniform fixed4 _Diffuse;

	struct v2f {
		V2F_SHADOW_CASTER;
		float2  uv : TEXCOORD1;
		UNITY_VERTEX_OUTPUT_STEREO
	};

	v2f vert( appdata_base v )
	{
		v2f o;
		UNITY_SETUP_INSTANCE_ID(v);
		UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o);
		TRANSFER_SHADOW_CASTER_NORMALOFFSET(o)
		o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
		return o;
	}

	float4 frag( v2f i ) : SV_Target
	{
		// Fragments that fail the clip test are discarded and cast no shadow
		fixed4 texcol = tex2D( _MainTex, i.uv );
		clip( texcol.a * _Diffuse.a - _Cutoff );

		SHADOW_CASTER_FRAGMENT(i)
	}
	ENDCG
}

2. SurfaceShader

1. Basic Components

1. Compile comments

#pragma surface surfaceFunction lightModel [optionalparams]

  1. Like other CG code, a Surface Shader is written between CGPROGRAM and ENDCG. The difference is that it must be placed directly inside SubShader, not inside a Pass; the Surface Shader automatically generates the Passes it needs. As the compilation directive above shows, surfaceFunction and lightModel must both be specified.
  2. surfaceFunction is usually a function named surf (the function name can be arbitrary), and its function format is fixed:
void surf (Input IN, inout SurfaceOutput o)
void surf (Input IN, inout SurfaceOutputStandard o)
void surf (Input IN, inout SurfaceOutputStandardSpecular o)
  1. lightModel must also be specified. Unity provides built-in lighting functions — Lambert (diffuse) and BlinnPhong (specular) — and Lambert is used by default here, but we can also define our own.
    Note : a custom lighting function's name must start with Lighting followed by the model name. For example, a function declared as LightingOcean is referenced in the pragma as Ocean.
  2. optionalparams covers many directives: toggling render states, setting the types of generated Passes, specifying optional functions, and more. Besides surfaceFunction and lightModel, we can also plug in two more functions: vertex:VertexFunction (vertex modification) and finalcolor:ColorFunction (final color). In other words, a Surface Shader lets us customize up to four functions.
  3. Process: vertex -> surf -> lightModel -> finalcolor
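The pipeline above can be sketched as a minimal skeleton (the shader name and the function names surf, vert, LightingCustom, and final are illustrative assumptions):

```hlsl
Shader "Custom/SurfaceSkeleton"
{
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        CGPROGRAM
        // vertex -> surf -> lighting model -> finalcolor
        #pragma surface surf Custom vertex:vert finalcolor:final
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        // 1. vertex modification (runs first)
        void vert (inout appdata_full v)
        {
        }

        // 2. fill in the surface properties
        void surf (Input IN, inout SurfaceOutput o)
        {
            o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
            o.Alpha = 1;
        }

        // 3. custom lighting model: declared LightingCustom, referenced in the pragma as Custom
        half4 LightingCustom (SurfaceOutput s, half3 lightDir, half atten)
        {
            half ndl = saturate(dot(s.Normal, lightDir));
            return half4(s.Albedo * _LightColor0.rgb * ndl * atten, s.Alpha);
        }

        // 4. last chance to modify the output color
        void final (Input IN, SurfaceOutput o, inout fixed4 color)
        {
        }
        ENDCG
    }
    FallBack "Diffuse"
}
```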

Optional parameter (operator)
Transparency Blend and Transparency Test

  • alpha or alpha:auto chooses faded transparency for simple lighting (equivalent to alpha:fade), and premultiplied transparency for physically based lighting (equivalent to alpha:premul)
  • alpha:blend turns on transparency blending
  • alpha:fade enables traditional gradient transparency
  • alpha:premul turns on premultiplied alpha transparency
  • alphatest:VariableName performs an alpha test against the float variable VariableName, discarding fragments that fail. addshadow is often added alongside it to generate a correct shadow-casting Pass
  • keepalpha by default, opaque surface shaders write 1.0 into the alpha channel regardless of the alpha output and the lighting function's return value; this option keeps the computed alpha instead
  • decal:add additive blending, for decals rendered on top of other surfaces
  • decal:blend alpha blending, for decals rendered on top of other surfaces

custom modification function

  • vertex:VertexFunction Vertex modification function, used to modify and calculate vertex position, information, etc.
  • finalcolor:ColorFunction final color modification function
  • finalgbuffer:ColorFunction custom delay path for changing gbuffer
  • finalprepass:ColorFunction custom preprocessing path

shadow

  • addshadow generates a shadow casting pass, which generates correct shadows for some objects that use vertex animation and transparency testing
  • fullforwardshadows supports the shadows of all light source types in the forward rendering path. By default, the shader only supports the shadows of the most important parallel lights. Adding this parameter can support the shadow effects of point lights or spot lights.
  • tessellate: TessFunction uses DX11 GPU tessellation

Control code generation (the surface shader handles all lighting, shadows, and lighting baking by default, and can be manually adjusted to skip some unnecessary loads to improve performance)

  • exclude_path:deferred, exclude_path:forward, exclude_path:prepass do not generate code for a rendering path
  • noshadow disable shadow
  • noambient does not apply any ambient light and light probes
  • novertexlights do not apply any per-vertex lighting and light probes in the forward pass
  • nolightmap does not apply any lighting bakes
  • nodynlightmap does not apply realtime GI
  • nodirlightmap does not apply directional lightmaps
  • nofog does not apply any fog effects
  • nometa does not generate the meta pass (the pass used by lightmapping & dynamic global illumination to extract surface information)
  • noforwardadd disables the additive passes in forward rendering, so the shader supports only one important directional light; other lights are computed as per-vertex/SH lights, keeping the shader lean
  • nolppv Do not apply Light Probe Proxy Volume (LPPV)
  • noshadowmask does not apply Shadowmask

other

  • softvegetation The shader is only rendered when Soft Vegetation is on
  • interpolateview computes the view direction in the vertex shader and interpolates it, instead of computing it per fragment; this uses an extra interpolator but can improve rendering speed
  • halfasview Pass half-direction vector into the lighting function instead of view-direction. Half-direction will be computed and normalized per vertex. This is faster, but not entirely correct.
  • dualforward uses dual lightmaps in forward rendering
  • dithercrossfade enables surface shaders to support dithering effects
2. Pluggable calculation functions
  1. The most important calculation function
void surf(Input IN, inout SurfaceOutput o)
void surf(Input IN, inout SurfaceOutputStandard o)
void surf(Input IN, inout SurfaceOutputStandardSpecular o)
  1. Vertex Modification
void vert(inout appdata_full v)
void vert(inout appdata_full v, out Input o)
  1. Lighting
// Legacy Unity versions (still supported in newer versions): without GI
half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half atten)
half4 Lighting<Name> (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
// Newer Unity versions (with GI; a custom GI function is required)
half4 Lighting<Name> (SurfaceOutput s, UnityGI gi)
half4 Lighting<Name> (SurfaceOutput s, half3 viewDir, UnityGI gi)
  1. Custom GI functions
half4 Lighting<Name>_GI(SurfaceOutput s, UnityGIInput data, inout UnityGI gi);
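For reference, Unity's built-in models pair each lighting function with a GI counterpart (LightingLambert_GI, LightingStandard_GI, and so on, declared void in the built-in cginc files). A minimal custom GI function — the name Ocean is illustrative — can simply forward to UnityGlobalIllumination:

```hlsl
inline void LightingOcean_GI (SurfaceOutput s, UnityGIInput data, inout UnityGI gi)
{
    // Forward to Unity's standard GI evaluation (occlusion = 1)
    gi = UnityGlobalIllumination(data, 1.0, s.Normal);
}
```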
  1. deferred rendering
// Legacy deferred rendering
half4 Lighting<Name>_PrePass(SurfaceOutput s, half4 light)
// Deferred rendering
half4 Lighting<Name>_Deferred(SurfaceOutput s, UnityGI gi, out half4 outDiffuseOcclusion, out half4 outSpecSmoothness, out half4 outNormal)
  1. final color modification
void final(Input IN, SurfaceOutput o, inout fixed4 color)
void final(Input IN, SurfaceOutputStandard o, inout fixed4 color)
void final(Input IN, SurfaceOutputStandardSpecular o, inout fixed4 color)

2. Structure

Two structures are used: struct Input and SurfaceOutput . The Input structure can be customized — see the table below. Its variables are computed only when actually used. Prefixing a texture variable's name with uv extracts that texture's UV coordinates, e.g. uv_MainTex.

1. Input
variable describe
float3 viewDir view direction. Can be used to calculate edge lighting, etc.
float4 with COLOR semantic Interpolated color for each vertex
float4 screenPos position in screen space. For the reflection effect, the position information in screen space needs to be included.
float3 worldPos position in world space.
float3 worldRefl If the shader is not assigned o.Normal, it will contain the world reflection vector.
float3 worldNormal If the shader is not assigned o.Normal, it will contain the world normal vector.
float3 worldRefl; INTERNAL_DATA Once o.Normal is written anywhere, worldRefl no longer holds a usable value. Use WorldReflectionVector(IN, o.Normal) to recompute the world-space reflection vector
float3 worldNormal; INTERNAL_DATA Once o.Normal is written anywhere, worldNormal no longer holds a usable value. Use WorldNormalVector(IN, o.Normal) to recompute the world-space normal vector
2. SurfaceOutput

Including: SurfaceOutput, SurfaceOutputStandard and SurfaceOutputStandardSpecular.

We can also customize the variables in this structure, but at least four members are required: Albedo, Normal, Emission and Alpha; omitting any of them causes a compile error. The hardest part to understand is the exact meaning of each variable and how it affects the final pixel color:

struct SurfaceOutput
{
    fixed3 Albedo;  // diffuse color / reflectance (the texture color)
    fixed3 Normal;  // tangent space normal, if written
    fixed3 Emission;  // self-illumination
    half Specular;  // specular power in 0..1 range
    fixed Gloss;    // specular intensity
    fixed Alpha;    // alpha for transparencies
};
struct SurfaceOutputStandard
{
    fixed3 Albedo;      // base (diffuse or specular) color
    fixed3 Normal;      // tangent space normal, if written
    half3 Emission;
    half Metallic;      // 0=non-metal, 1=metal
    half Smoothness;    // 0=rough, 1=smooth
    half Occlusion;     // occlusion (default 1)
    fixed Alpha;        // alpha for transparencies
};
struct SurfaceOutputStandardSpecular
{
    fixed3 Albedo;      // diffuse color
    fixed3 Specular;    // specular color
    fixed3 Normal;      // tangent space normal, if written
    half3 Emission;
    half Smoothness;    // 0=rough, 1=smooth
    half Occlusion;     // occlusion (default 1)
    fixed Alpha;        // alpha for transparencies
};
  • Albedo: the surface reflectance (diffuse color) as commonly understood. During color composition in the generated fragment shader it is multiplied with other terms (such as vertex lights) and accumulated into the final color.
  • Normal: the corresponding normal direction. Every computation that depends on the normal is affected.
  • Emission: self-illumination. Before the fragment's final output (before calling the final function, if one is defined), it is added with a simple overlay: c.rgb += o.Emission;
  • Specular (Metallic): the exponent coefficient of the specular highlight. It only takes effect if the lighting function — including Unity's built-in ones — actually reads it; setting it is otherwise useless. Setting it only in surf can still change the result, because built-in models such as BlinnPhong read it, e.g. in Lighting.cginc: float spec = pow (nh, s.Specular*128.0) * s.Gloss;
  • Gloss (Smoothness): the intensity factor of the specular highlight. Like Specular, it is consumed by the lighting model.
  • Alpha: the transparency channel as commonly understood. When transparency is enabled, the fragment shader assigns it directly: c.a = o.Alpha;
3. Get model space coordinates

Model space coordinates can be obtained in the vertex function.

#pragma surface surf Standard fullforwardshadows vertex:vert

struct Input
{
    float2 uv_MainTex;
    float3 worldPos;
    float2 uv_OtherTex;
    float4 modelPos;
};

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

// Fetch the model-space position
void vert(inout appdata_base v, out Input o)
{
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.modelPos = v.vertex;
    // o.modelPos = mul(unity_ObjectToWorld, v.vertex);
}

Model-space coordinates can also be computed back from the world-space position:

float3 modelPos = mul(unity_WorldToObject, float4(IN.worldPos, 1));

3. Examples

1. Use the old BlinnPhong lighting model
  1. Modify the lighting model
  2. Change the inout output parameter to SurfaceOutput
  3. Replace and use the corresponding light factor
CGPROGRAM
// Switch the lighting model
#pragma surface surf BlinnPhong fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;

struct Input
{
    float2 uv_MainTex;
};

// The legacy lighting model reads its coefficients from variables named _Gloss and _Specular, so the names must be changed to match
half _Gloss;
half _Specular;
fixed4 _Color;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

// The legacy lighting model takes SurfaceOutput as its output struct, so surf must be changed accordingly
void surf (Input IN, inout SurfaceOutput o)
{
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Specular = _Specular;
    o.Gloss = _Gloss;
    o.Alpha = c.a;
}
ENDCG
2. Cartoon rendering

We can also define a custom function to override the lighting model

// nolightmap: disables all lightmap (baked lighting) support in this shader.
#pragma surface surf Toon fullforwardshadows nolightmap finalcolor:final

...

half4 LightingToon (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
{
    float difLight = dot(lightDir, s.Normal) * 0.5 + 0.5;
    difLight = smoothstep(0, 1, difLight);
    float toon = floor(difLight * _Steps) / _Steps;
    difLight = lerp(difLight, toon, _ToonEffect);
    fixed3 diffuse = _LightColor0.rgb * s.Albedo * difLight;
    return half4(diffuse, 1);
}
3. Pure texture color

The diffuse term is affected by ambient light and will never show the texture's raw color, so don't use it — leave it at 0 and output the texture through Emission instead

half4 LightingToon (SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
{
    return half4(0, 0, 0, 0);
}

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutput o)
{
    fixed4 color = tex2D (_MainTex, IN.uv_MainTex) * _Color;

    o.Albedo = fixed3(0, 0, 0);
    o.Emission = color.rgb;
    o.Alpha = color.a;
}
4. Normal map sampling
  1. Get the normal map's UV in Input
  2. Get the value of the normal map

(example below)

5. Final Color Control
  1. Add viewDir to the Input structure to obtain the view direction.

(example below)

6. Edge Light
  1. Set the final color control functionfinalcolor:final
  2. Assign the calculated result to Emission (self-illumination)
CGPROGRAM
#pragma surface surf Standard fullforwardshadows finalcolor:final

#pragma target 3.0

sampler2D _MainTex;
sampler2D _BumpMap;

struct Input
{
    float2 uv_MainTex;
    // Extract the UV of _BumpMap
    float2 uv_BumpMap;
    // Obtain the view direction
    float3 viewDir;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;
float _BumpScale;
fixed4 _ColorTint;
float4 _RimColor;
float _RimPower;

// #pragma instancing_options assumeuniformscaling
UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
    // Fetch the normal from the normal map
    fixed3 normal = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
    // Scale xy to increase the tilt. The multiply alone only lengthens the vector; the bump strength comes from the changed xy-to-z ratio.
    normal.xy *= _BumpScale;
    o.Normal = normal;
    // Compute the rim light
    half rim = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
    o.Emission = _RimColor.rgb * pow(rim, _RimPower);
}

// Final color modification function
void final(Input IN, SurfaceOutputStandard o, inout fixed4 color)
{
    color *= _ColorTint;
}
ENDCG
7. Add a stroke
  1. Create a new Pass; its contents are written the same way as the vertex-shader outline effect.
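A sketch of such an outline Pass (the property names _OutlineColor and _OutlineWidth are assumptions): it extrudes each vertex along its normal and renders only back faces, so the inflated silhouette shows around the object.

```hlsl
Pass
{
    Cull Front
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    fixed4 _OutlineColor;   // assumed property
    float _OutlineWidth;    // assumed property

    float4 vert (appdata_base v) : SV_POSITION
    {
        // Push each vertex outward along its normal before projection
        v.vertex.xyz += v.normal * _OutlineWidth;
        return UnityObjectToClipPos(v.vertex);
    }

    fixed4 frag () : SV_Target
    {
        return _OutlineColor;
    }
    ENDCG
}
```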
8. X-ray
  1. Create a new Pass; its contents are written the same way as the vertex-shader X-ray effect.
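A sketch of such an X-ray Pass (_XRayColor is an assumed property): ZTest Greater draws only the occluded parts, and a rim term brightens the silhouette.

```hlsl
Pass
{
    // Draw only where the object is hidden behind other geometry
    ZTest Greater
    ZWrite Off
    Blend SrcAlpha One
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #include "UnityCG.cginc"

    fixed4 _XRayColor;   // assumed property

    struct v2f
    {
        float4 pos : SV_POSITION;
        float3 worldNormal : TEXCOORD0;
        float3 worldViewDir : TEXCOORD1;
    };

    v2f vert (appdata_base v)
    {
        v2f o;
        o.pos = UnityObjectToClipPos(v.vertex);
        o.worldNormal = UnityObjectToWorldNormal(v.normal);
        o.worldViewDir = UnityWorldSpaceViewDir(mul(unity_ObjectToWorld, v.vertex).xyz);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        // Rim term: stronger toward the silhouette
        half rim = 1 - saturate(dot(normalize(i.worldViewDir), normalize(i.worldNormal)));
        return fixed4(_XRayColor.rgb, rim);
    }
    ENDCG
}
```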
9. Flow effect
  1. Determine the uv position according to the time _Time, so as to achieve the flow effect.
  2. Use self-illumination as a fluid effect carrier.
CGPROGRAM
#pragma surface surf StandardSpecular fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;
sampler2D _Normal;
sampler2D _Mask;
sampler2D _Specular;
sampler2D _Fire;

struct Input
{
    float2 uv_MainTex;
};

half _Smoothness;
half _FireIntensity;
half2 _FireSpeed;
fixed4 _Color;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandardSpecular o)
{
    // Diffuse
    o.Albedo = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    // Normal
    o.Normal = UnpackNormal(tex2D(_Normal, IN.uv_MainTex));
    float2 uv = IN.uv_MainTex + _Time.x * _FireSpeed;
    // Use emission as the carrier of the flow effect: _Mask is the stencil mask, _Fire the flowing texture
    o.Emission = (tex2D(_Mask, IN.uv_MainTex) * tex2D(_Fire, uv) * (_FireIntensity + _SinTime.w)).rgb;
    // Use the texture's color as the specular color (optional)
    o.Specular = tex2D(_Specular, IN.uv_MainTex).rgb;
    // Smoothness of the specular highlight (optional)
    o.Smoothness = _Smoothness;
    o.Alpha = 1;
}
ENDCG
10. UV Distortion
  1. Determine the uv position according to the time _Time, so as to achieve the flow effect.
  2. The value of the normal map is added as an offset to the acquisition of texture coordinates, causing the texture coordinates to be offset compared to the expected coordinates, thereby achieving the UV distortion effect.
CGPROGRAM
#pragma surface surf Standard fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;
sampler2D _DistortTexture;

struct Input
{
    float2 uv_MainTex;
};

half _Glossiness;
half _Metallic;
half _Speed;
half _UVDisIntensity;
fixed4 _Color;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandard o)
{
    float2 uv1 = IN.uv_MainTex + _Time.y * _Speed * float2(1, 1);
    float2 uv2 = IN.uv_MainTex + _Time.y * _Speed * float2(-1, -1);
    // UnpackScaleNormal: applies the correct decoding to the normal map and scales the normal. Equivalent to:
    // float2 distortTexture = UnpackNormal(tex2D(_DistortTexture, IN.uv_MainTex));
    // distortTexture.xy *= _UVDisIntensity;
    float2 distortTexture = UnpackScaleNormal(tex2D(_DistortTexture, IN.uv_MainTex), _UVDisIntensity);
    // Sample the main texture at the offset position, creating a UV-distortion effect
    float4 mainTex1 = tex2D(_MainTex, (uv1 + distortTexture).xy);
    float4 mainTex2 = tex2D(_MainTex, (uv2 + distortTexture).xy);
    float4 color = _Color * mainTex1 * mainTex2;

    // Diffuse
    o.Albedo = color;
    // Emission
    o.Emission = color;
    // Metallic
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = 1;
}
ENDCG
11. Glow
  1. Use two different sets of normal textures, two different sets of images.
  2. Use the R color channel of the mask as the mask value, and its value (0~1) corresponds to the two textures, the background image and the top image
CGPROGRAM
#pragma surface surf Standard fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;
sampler2D _TopTex;
sampler2D _Nomral;
sampler2D _BurnNormal;
sampler2D _Mask;
half _BurnRange;
// Used when a solid color is used instead
// fixed4 _BurnColor;

struct Input
{
    float2 uv_MainTex;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;


UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandard o)
{
    // Two different normal textures
    float3 normal1 = UnpackNormal(tex2D(_Nomral, IN.uv_MainTex));
    float3 normal2 = UnpackNormal(tex2D(_BurnNormal, IN.uv_MainTex));
    // Use the mask's R channel as the mask value; its value (0~1) blends between the two textures, background and top
    fixed3 maskColor = tex2D(_Mask, IN.uv_MainTex);
    float maskR = saturate(_BurnRange + maskColor.r);

    // Fetch the background and top images
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    fixed4 t = tex2D (_TopTex, IN.uv_MainTex) * _Color;
    // Used when the top surface is a solid color instead:
    // fixed4 diffuse = lerp(c, _BurnColor, maskR);
    fixed4 diffuse = lerp(c, t, maskR);

    o.Normal = lerp(normal1, normal2, maskR);
    o.Albedo = diffuse.rgb;

    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = 1;
}
ENDCG
12. Normal expansion change
  1. Write vertex shader code, add vertex:vertFun. The purpose is to modify the position of the vertices.
  2. Use the macro UNITY_INITIALIZE_OUTPUT(type, name) to initialize every member of the given structure to 0, preventing DX compile errors.
  3. The model is proportionally enlarged along the normal line, and different magnification ratios are determined according to the time and y-axis position to form a ripple effect
CGPROGRAM
#pragma surface surf Standard fullforwardshadows vertex:vertFun
#pragma target 3.0

sampler2D _MainTex;
half _ExtrusionFrency;
half _ExtrusionSwing;

struct Input
{
    float2 uv_MainTex;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;

void vertFun (inout appdata_full v, out Input o)
{
    // Initialize every member of the given struct to 0 (type, name)
    // to prevent DX from reporting errors
    UNITY_INITIALIZE_OUTPUT(Input, o);
    float3 normal = v.normal.xyz;
    float3 vertexPos = v.vertex.xyz;
    // Inflate the model along its normals; the scale varies with time and y position, producing a ripple effect
    v.vertex.xyz += normal * max(sin((vertexPos.y + _Time.x) * _ExtrusionFrency) * _ExtrusionSwing, 0);
}

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}
ENDCG
13. Ablation effect
  1. The R channel of the ablation texture is parsed as the ablation basis.
  2. Using the clip function, the part less than 0 will be cleared.
  3. Define a temp value and use the ablation basis to ensure that the part closer to the ablation edge has a lower temp.
  4. The texture color is obtained by the temp value, and the part closer to the ablation edge is closer to one side of the picture.
  5. Applies an interpolated transition to the ablation color. The closer to the edge, the closer the color is to the desired ablation color and applied to the emissive material.
CGPROGRAM
#pragma surface surf Standard fullforwardshadows noshadow addshadow
#pragma target 3.0

sampler2D _MainTex;
sampler2D _Normal;
half _NormalScale;
sampler2D _DisolveTex;
half _Threshold;
sampler2D _BurnTex;
half _EdgeLength;
half _BurnInstensity;

struct Input
{
	float2 uv_MainTex;
	float2 uv_Normal;
	float2 uv_DisolveTex;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandard o)
{
	fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
	// Normal
	o.Normal = UnpackScaleNormal(tex2D(_Normal, IN.uv_Normal) ,_NormalScale);
	// Diffuse
	o.Albedo = c.rgb;

	// Parse the R channel of the dissolve texture as the dissolve basis
	float cutout = tex2D(_DisolveTex, IN.uv_DisolveTex).r;
	// Values below 0 are clipped away, so a larger _Threshold means more dissolution
	clip(cutout - _Threshold);

	// temp is lower the closer a fragment is to the dissolve edge
	float temp = saturate((cutout - _Threshold) / _EdgeLength);
	// Use temp to sample the burn texture; fragments nearer the edge sample nearer the right side of the image
	fixed4 edgeColor = tex2D(_BurnTex, float2(1 - temp,1 - temp));
	// Apply a gradual transition: the nearer the edge, the closer the color is to the desired burn color.
	fixed4 finalColor = _BurnInstensity * lerp(edgeColor, fixed4(0, 0, 0, 0), temp);
	
	o.Emission = finalColor.rgb;
	o.Metallic = _Metallic;
	o.Smoothness = _Glossiness;
	o.Alpha = c.a;
}
ENDCG
14. Region transition
  1. Get model-space coordinates
  2. Record how many units the point lies above the starting transition height
  3. The transition band runs from the starting height to the height one unit above it
CGPROGRAM
#pragma surface surf Standard fullforwardshadows vertex:vert
#pragma target 3.0

sampler2D _MainTex;
half _StartPoint;
fixed4 _Tint;
half _Dis;
sampler2D _OtherTex;

struct Input
{
    
    
    float2 uv_MainTex;
    float3 worldPos;
    float2 uv_OtherTex;
    float4 modelPos;
};

half _Glossiness;
half _Metallic;
fixed4 _Color;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

// Fetch the model-space position
void vert(inout appdata_base v, out Input o)
{
    
    
    UNITY_INITIALIZE_OUTPUT(Input, o);
    o.modelPos = v.vertex;
    // o.modelPos = mul(unity_ObjectToWorld, v.vertex);
}

void surf (Input IN, inout SurfaceOutputStandard o)
{
    // Record how many units (of _Dis) the point lies above the starting transition height
    float temp =  saturate((IN.modelPos.y + _StartPoint) / _Dis);
    fixed4 c1 = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    fixed4 c2 = tex2D(_OtherTex, IN.uv_OtherTex) * _Color;
    // The transition band runs from the starting height to the height one unit above it
    fixed4 color = lerp(c2, c1, temp);

    o.Albedo = color.rgb;
    o.Metallic = _Metallic;
    o.Smoothness = _Glossiness;
    o.Alpha = c1.a;
}
ENDCG
15. AlphaTest
  1. Transparency testing using alphatest
  2. Disable the default shadow and generate a dedicated shadow-casting Pass (done with the addshadow directive)
// Alpha test render queue
Tags { "RenderType"="Transparent" "Queue"="AlphaTest" }
LOD 200
// Double-sided rendering
Cull off

CGPROGRAM
// Control the alpha test with _Cutoff. Disable the default shadow and generate a dedicated shadow-casting Pass
#pragma surface surf Standard alphatest:_Cutoff noshadow addshadow
16. AlphaBlend
  1. Use alpha:blend to enable transparency blending
Tags { "RenderType"="Transparent" "Queue"="Transparent" }
LOD 200
Cull off

CGPROGRAM
#pragma surface surf Standard alpha:blend noshadow
#pragma target 3.0
17. Snow effect

Note:

  1. By default the normal is the regular normal perpendicular to the face; once o.Normal is assigned anywhere, it is interpreted as a tangent-space normal. (Even if you read the value first and change the direction later, the value read behaves as a tangent-space normal.)
  2. WorldNormalVector converts a normal that has been switched to tangent space back into a world-space normal vector.
  1. Calculate snow concentration (angle between normal and snow direction)
  2. Based on the concentration of snow, convert the normal to a value between the texture normal and the snow normal, making the effect softer
  3. Calculate the fused world normal vector
  4. After the fusion, the normal and the snow direction are calculated twice to make the fusion of the snow texture and the texture texture softer
CGPROGRAM
#pragma surface surf StandardSpecular fullforwardshadows
#pragma target 3.0

sampler2D _MainTex;
sampler2D _NormalTex;
sampler2D _SnowTex;
sampler2D _SnowNormal;

struct Input
{
    float2 uv_MainTex;
    float2 uv_NormalTex;
    float2 uv_SnowTex;
    float2 uv_SnowNormal;
    float3 worldNormal;
    // This macro works together with WorldNormalVector
    INTERNAL_DATA
};

half _Glossiness;
half _Metallic;
fixed4 _Color;
float4 _SnowDir;
half _SnowAmount;

UNITY_INSTANCING_BUFFER_START(Props)
UNITY_INSTANCING_BUFFER_END(Props)

void surf (Input IN, inout SurfaceOutputStandardSpecular o)
{
    float3 normalTex = UnpackNormal(tex2D(_NormalTex, IN.uv_NormalTex));
    float3 snowNorTex = UnpackNormal(tex2D(_SnowNormal, IN.uv_SnowNormal));
    // The default normal is the regular one perpendicular to the face; once assigned anywhere it is treated as a tangent-space normal. (Even if the value is read first and the direction changed later, the value read behaves as tangent-space.)
    // WorldNormalVector converts a tangent-space normal back into a world-space normal vector.
    // World-space normal vector
    fixed3 wNormal = WorldNormalVector(IN, normalTex);
    // Snow density (the angle between the normal and the snow direction)
    float lerpVal = saturate(dot(wNormal, _SnowDir.xyz));
    // Blend the normal between the texture normal and the snow normal by snow density, for a softer effect
    fixed3 finalNormal = lerp(normalTex, snowNorTex, lerpVal * _SnowAmount);
    // World-space vector of the blended normal
    fixed3 fWNormal = WorldNormalVector(IN, finalNormal);
    // Recompute against the snow direction with the blended normal so the snow and base textures blend more softly
    lerpVal = saturate(dot(fWNormal, _SnowDir.xyz));
    fixed4 c = lerp(tex2D(_MainTex, IN.uv_MainTex) * _Color, tex2D(_SnowTex, IN.uv_SnowTex), lerpVal * _SnowAmount);
    
    o.Normal = finalNormal;
    o.Albedo = c.rgb;
    // o.Metallic = _Metallic;
    // o.Smoothness = _Glossiness;
    o.Alpha = c.a;
}
ENDCG

3. Other content

1. GI

1. What is GI


2. Cubemap

1. Creating a cubemap from code
  1. Check Readable on the Cubemap asset in the Inspector so the cubemap is readable and writable.
  2. Use camera.RenderToCubemap(Cubemap cubemap); to render a panoramic capture from a camera at the target position into a static cubemap, written into the Cubemap argument.
public class RenderCubeMap : ScriptableWizard
{
    public Transform renderPos;
    public Cubemap cubemap;

    [MenuItem("Tools/CreateCubemap")]
    static void CreatCubemap()
    {
        ScriptableWizard.DisplayWizard<RenderCubeMap>("Render Cube", "Create");
    }

    private void OnWizardCreate()
    {
        GameObject go = new GameObject("CubemapCam");
        Camera camera = go.AddComponent<Camera>();
        go.transform.position = renderPos.position;
        // Render a static cubemap from this camera and write it into the Cubemap argument.
        camera.RenderToCubemap(cubemap);
        DestroyImmediate(go);
    }

    private void OnWizardUpdate()
    {
        helpString = "Select the render position and the Cubemap to write to";
        isValid = renderPos != null && cubemap != null;
    }
}
2. Baking a cubemap


3. Reflection

  1. Use reflect(-worldViewDir, worldNormal) to obtain the reflection vector.
  2. Apply reflection with a cubemap in the shader, sampling the cubemap's color along the reflection vector.
    Sampling along the normal only reproduces a regular texture look, while sampling along the reflection vector produces a fisheye-lens-like effect.
CGPROGRAM
#pragma vertex vert
#pragma fragment frag

#include "UnityCG.cginc"
#include "UnityLightingCommon.cginc"

fixed4 _ReflectionColor;
samplerCUBE _CubeMap;
half _ReflectionAmount;
sampler2D _MainTex;
float4 _MainTex_ST;

struct v2f
{
    float2 uv : TEXCOORD0;
    float4 vertex : SV_POSITION;
    float3 worldPos : TEXCOORD1;
    float3 worldNormal : TEXCOORD2;
    float3 worldViewDir : TEXCOORD3;
    float3 worldRefl : TEXCOORD4;
};

v2f vert (appdata_base v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
	o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
	o.worldNormal = UnityObjectToWorldNormal(v.normal);
	o.worldViewDir = UnityWorldSpaceViewDir(o.worldPos);
	o.worldRefl = reflect(-o.worldViewDir, o.worldNormal);
    UNITY_TRANSFER_FOG(o,o.vertex);
    return o;
}

fixed4 frag (v2f i) : SV_Target
{
    fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
    fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
    fixed3 diffuse = tex2D(_MainTex, i.uv).rgb *_LightColor0.rgb * (saturate(dot(worldLightDir, i.worldNormal)) * 0.5 + 0.5);
    // Regular cubemap lookup:
    // fixed3 reflection = texCUBE(_CubeMap, i.worldNormal).rgb * _ReflectionColor;
    // Sampling the cubemap along the reflection vector covers a wider field of view than sampling along the normal — the reflection angle grows with the angle to the camera — giving a fisheye-lens-like effect
    fixed3 reflection = texCUBE(_CubeMap, i.worldRefl).rgb * _ReflectionColor;
    fixed3 col = ambient + lerp(diffuse, reflection, _ReflectionAmount);
    return fixed4(col, 1);
}
ENDCG


2. Fresnel reflection

The Schlick approximation of the Fresnel equations gives the reflection behavior of light: F = F0 + (1 - F0)(1 - v·n)^5, where F0 is the base reflectance coefficient, v the view direction, and n the normal vector.

fixed4 frag (v2f i) : SV_Target
{
    fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));

    fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
    fixed3 diffuse = tex2D(_MainTex, i.uv).rgb *_LightColor0.rgb * (saturate(dot(worldLightDir, i.worldNormal)) * 0.5 + 0.5);

    fixed3 refection = texCUBE(_CubeMap, i.worldRefl).rgb;
    // 菲涅尔反射
    fixed fresnel = _FresnelScale + (1 - _FresnelScale) * pow(1 - dot(normalize(i.worldViewDir), i.worldNormal), 5);

    fixed3 col = ambient + lerp(diffuse, refection, saturate(fresnel));
    return fixed4(col, 1);
}
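A quick numerical check of the Schlick term used in the fragment shader above (a Python sketch, not shader code; f0 stands in for _FresnelScale):

```python
def schlick_fresnel(f0, cos_vn):
    """Schlick's approximation: F = F0 + (1 - F0) * (1 - v.n)^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_vn) ** 5

# Looking straight at the surface (v.n = 1) gives the base reflectance F0;
# at grazing incidence (v.n -> 0) the reflectance approaches 1
print(schlick_fresnel(0.04, 1.0))  # 0.04
print(schlick_fresnel(0.04, 0.0))  # ~1.0
```

In the shader the same curve is driven by _FresnelScale and then clamped with saturate before being used as the lerp factor between diffuse and reflection.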


3. Refraction

  1. Use refract(-o.worldViewDir, o.worldNormal, _RefractRotio) to obtain the refraction vector. It is then used in the same way as the reflection vector above.

  2. When the refraction ratio is 1, the effect resembles a transparent lens.

  3. When the refraction ratio is 0, the effect resembles a magnifying glass.
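For reference, refract's behavior can be reproduced outside the shader (a Python sketch following the HLSL definition of refract; the vectors are illustrative unit-length tuples):

```python
import math

def refract(i, n, eta):
    """HLSL-style refract: i is the incident direction, n the surface normal
    (both unit length), eta the ratio of refraction indices. Returns the zero
    vector on total internal reflection."""
    cos_i = sum(a * b for a, b in zip(n, i))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return (0.0, 0.0, 0.0)  # total internal reflection
    s = eta * cos_i + math.sqrt(k)
    return tuple(eta * a - s * b for a, b in zip(i, n))

# A ray hitting a floor (normal +y) at 45 degrees
i = (2 ** -0.5, -(2 ** -0.5), 0.0)
n = (0.0, 1.0, 0.0)
print(refract(i, n, 1.0))        # eta 1: the ray passes straight through
print(refract(i, n, 1.0 / 1.5))  # air-to-water-like ratio: the ray bends toward the normal
```

This matches the observation above that a refraction ratio of 1 simply shows what is behind the surface.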

4. RenderTexture mirror effect
  1. Create a custom render texture.
  2. Add an orthographic camera at the target mirror surface and set its target texture to the render texture just created; the camera's orthographic aspect is mapped to the render texture's aspect ratio.
  3. Assign the render texture to a 2D texture property of a shader, or to a UGUI Raw Image, and it is ready to use.
v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);
    // Flip the u coordinate so the image is mirrored, as a real mirror would show it
    o.uv.x = 1 - o.uv.x;
    return o;
}


5. GrabPass

1. Introduction
  1. This pass captures the current contents of the screen.
  2. Declare a special pass in the shader, GrabPass { "GrabPassTex" }, where GrabPassTex is the name given to the grabbed image.
  3. Other passes can then declare sampler2D GrabPassTex to sample that image.
2. Glass effect
  1. Use ComputeGrabScreenPos(o.vertex) to compute the texture coordinates for sampling the GrabPass texture. The input is the clip-space position.
  2. Sample a normal map and use its xy components as an offset, creating the light-distorting effect of glass.
  3. Divide the resulting GrabPass texture coordinates by their w component to get the correct screen position.
// Screen-grab pass. When the grab happens depends on the Queue. If another shader needs the
// grabbed image, it should reuse this pass instead of grabbing again. The GrabPass must be
// placed at the top so the capture is not occluded by this object itself
GrabPass { "GrabPassTex" }

Pass
{
    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag

    #include "UnityCG.cginc"
    #include "UnityLightingCommon.cginc"


    struct v2f
{
        float4 vertex : SV_POSITION;
        float4 uv : TEXCOORD0;
        float3 tanToWorld0: TEXCOORD1;
        float3 tanToWorld1: TEXCOORD2;
        float3 tanToWorld2: TEXCOORD3;
        float3 worldPos: TEXCOORD4;
        float4 scrPos: TEXCOORD5;
    };

    sampler2D _MainTex;
    float4 _MainTex_ST;
    sampler2D _BumpMap;
    float4 _BumpMap_ST;
    float _BumpScale;
    float4 _Diffuse;
    // Texture captured by the GrabPass
    sampler2D GrabPassTex;
    float4 GrabPassTex_ST;
    float _Distortion;
    samplerCUBE _Cubemap;
    float _RefractAmount;

    v2f vert (appdata_tan v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv.xy = TRANSFORM_TEX(v.texcoord, _MainTex);
        o.uv.zw = TRANSFORM_TEX(v.texcoord, _BumpMap);
        float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
        fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
        fixed3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
        fixed3 worldBinormal = cross(worldNormal, worldTangent) * v.tangent.w;
        o.tanToWorld0 = float3(worldTangent.x, worldBinormal.x, worldNormal.x);
        o.tanToWorld1 = float3(worldTangent.y, worldBinormal.y, worldNormal.y);
        o.tanToWorld2 = float3(worldTangent.z, worldBinormal.z, worldNormal.z);
        o.worldPos = worldPos;

        // Compute texture coordinates for sampling the GrabPass texture; the input is the clip-space position
        o.scrPos = ComputeGrabScreenPos(o.vertex);
        return o;
    }

    fixed4 frag (v2f i) : SV_Target
    {
        fixed4 albedo = tex2D(_MainTex, i.uv.xy);
        fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
        fixed3 viewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
        fixed4 packedNormal = tex2D(_BumpMap, i.uv.zw);
        fixed3 tangentNormal = UnpackNormal(packedNormal);
        tangentNormal.xy *= _BumpScale;
        fixed3 worldNormal = normalize(mul(float3x3(i.tanToWorld0, i.tanToWorld1, i.tanToWorld2), tangentNormal));
        
        // Use the tangent-space normal's xy as an offset to mimic glass bending the light
        float2 offset = tangentNormal.xy * _Distortion;
        // The farther away the fragment, the larger i.scrPos.z and the more visible the
        // normal-based offset. Not physically correct, but it looks better
        i.scrPos.xy = offset * i.scrPos.z + i.scrPos.xy;
        // Divide the coordinates by w to get the correct perspective mapping, so the grabbed image lines up with the right screen position
        fixed3 refrCol = tex2D(GrabPassTex, i.scrPos.xy / i.scrPos.w).rgb;

        // Color reflected from the skybox
        fixed3 reflCol = texCUBE(_Cubemap, reflect(-viewDir, worldNormal)).rgb * albedo;

        fixed3 color = reflCol * (1 - _RefractAmount) + refrCol * _RefractAmount;
        return fixed4(color, 1);
    }
    ENDCG
}

6. Animation

1. Frame-sequence animation
  1. Determine the width and height of the sprite sheet, compute the playback order, and manipulate the uv to play the sequence.
  2. Current frame: floor(_Time.y * _Speed)
  3. Current row: floor(time / _HorAmount). Integer-dividing the frame index by the number of frames per row gives the row being played.
  4. Current column: time - row * _HorAmount. Subtracting all frames in the rows above leaves the index within the current row, which is the column.
  5. Both the uv and the row/column offsets are divided by the column/row counts, which scales the uv down to the size of a single frame.
fixed4 frag (v2f i) : SV_Target
{
    float time = floor(_Time.y * _Speed);
    float row = floor(time / _HorAmount);
    float column = time - row * _HorAmount;

    // uv and offset are divided together so the uv is scaled down to a single frame
    float2 uv = i.uv + float2(column, -row);
    uv.x /= _HorAmount;
    uv.y /= _VerAmount;

    fixed4 col = tex2D(_MainTex, uv);
    return col;
}
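The frame bookkeeping above can be verified outside the shader (a Python sketch of the same arithmetic; the 4-frames-per-row sheet is an assumption for illustration):

```python
import math

def frame_row_col(t, speed, frames_per_row):
    """Mirror of the shader: pick the current frame from the elapsed time,
    then split the frame index into a row and a column."""
    frame = math.floor(t * speed)
    row = math.floor(frame / frames_per_row)
    col = frame - row * frames_per_row
    return frame, row, col

# 1.3 s at 10 frames/s on a 4-wide sheet: frame 13 lives in row 3, column 1
print(frame_row_col(1.3, 10.0, 4))  # (13, 3, 1)
```

In the shader the uv offset is then (column, -row), because v runs upward in uv space, and dividing by _HorAmount/_VerAmount scales the uv down to a single frame.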
2. Scrolling animation

An animation that moves the texture along one direction.
Exploit the steadily increasing _Time value by adding it to one axis of the uv.

v2f vert (appdata_base v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex) + float2(_ScrollX, 0) * _Time.y;
    return o;
}
3. Vertex animation

Animate the vertices before they are transformed to clip space, that is, while they are still in model space.

v2f vert (appdata_base v)
{
    v2f o;
    // Offset y with a sine wave travelling along x: _Arange is the amplitude, _Speed the time scale, _Frequency the spatial frequency
    v.vertex.y = v.vertex.y + _Arange * sin(_Time.y * _Speed + v.vertex.x * _Frequency);
    o.vertex = UnityObjectToClipPos(v.vertex);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}

7. Billboard

1. Introduction

Take the cross product of the view direction and the up direction to obtain a right vector perpendicular to both the surface normal (view) direction and the up direction; crossing the right vector with the normal direction again yields the corrected up vector.

2. Example
v2f vert (appdata_base v)
{
    v2f o;

    float3 center = float3(0, 0, 0);
    // The camera position transformed into model space gives the view direction
    float3 normalDir = normalize(mul(unity_WorldToObject, float4(_WorldSpaceCameraPos, 1)));
    // _Verical controls whether the quad is allowed to rotate around the y axis
    normalDir.y = normalDir.y * _Verical;
    // Initial up direction
    float3 upDir = float3(0, 1, 0);
    float3 rightDir = normalize(cross(normalDir, upDir));
    // Recomputed up direction
    upDir = normalize(cross(rightDir, normalDir));
    float3 localPos = center + rightDir * v.vertex.x + 
                                upDir * v.vertex.y + 
                                normalDir * v.vertex.z;

    o.vertex = UnityObjectToClipPos(localPos);
    o.uv = TRANSFORM_TEX(v.texcoord, _MainTex);
    return o;
}
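As a sanity check, the same basis construction can be reproduced numerically (a Python sketch; the view direction is an arbitrary example, and the degenerate case of a view direction parallel to the up axis is ignored):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def billboard_basis(view_dir):
    """Right/up/normal basis built the same way as the vertex shader:
    right = normal x up, then up is recomputed as right x normal."""
    n = normalize(view_dir)
    right = normalize(cross(n, (0.0, 1.0, 0.0)))
    up = normalize(cross(right, n))
    return right, up, n

right, up, n = billboard_basis((1.0, 0.5, 2.0))
# The three axes are unit length and mutually perpendicular,
# so a quad built from them always faces the camera
assert abs(dot(right, up)) < 1e-9 and abs(dot(up, n)) < 1e-9
```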

8. Water effect

1. Depth-map sampling
  1. Declare the variable sampler2D_float _CameraDepthTexture to access the camera's depth texture.

  2. In the vertex function, compute the point's screen position and depth.

    void vert(inout appdata_full v, out Input i)
    {
        UNITY_INITIALIZE_OUTPUT(Input, i);
        // Compute the screen position
        i.proj = ComputeScreenPos(UnityObjectToClipPos(v.vertex));
        // Record its z value: the view-space (eye) depth
        COMPUTE_EYEDEPTH(i.proj.z);
    }
    
  3. Obtain the camera depth value in screen space.

    // tex2Dproj: like tex2D, but divides the input uv's xy by its w component,
    // converting the coordinate from an orthographic to a perspective projection
    // LinearEyeDepth: converts a Z-buffer value to linear depth
    // The raw depth is non-linear (the view frustum is mapped into a cube), so it must be converted back to linear space
    // Altogether, this yields the camera depth at this point in screen space
    half depth = LinearEyeDepth(tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(IN.proj)).r);
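What LinearEyeDepth undoes can be illustrated with the projection math itself (a Python sketch assuming a standard, non-reversed depth buffer in [0, 1]; Unity's _ZBufferParams encode the same mapping per platform, so the constants here are illustrative):

```python
def linear_eye_depth(d, near, far):
    """Invert the perspective projection's depth mapping: turn a non-linear
    [0, 1] depth-buffer value d back into linear eye-space distance."""
    return near * far / (far - d * (far - near))

# With near = 0.3 and far = 100, the mapping is strongly non-linear:
print(linear_eye_depth(0.0, 0.3, 100.0))  # near plane, ~0.3
print(linear_eye_depth(0.5, 0.3, 100.0))  # ~0.6: half the buffer covers only the region near the camera
print(linear_eye_depth(1.0, 0.3, 100.0))  # far plane, ~100
```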
    


  4. Subtract the current object's depth from the camera depth value to obtain the water depth.

    // Subtracting this fragment's own depth from the camera depth at this point gives the distance
    // (depth) from this point along the view direction to whatever lies behind it
    half deltaDepth = depth - IN.proj.z;
    


Note:

  1. _CameraDepthTexture (the camera depth texture) is rendered using passes whose LightMode is ShadowCaster. For a shader's objects to be treated as valid occluders by **_CameraDepthTexture**, the shader must contain a Pass with LightMode set to ShadowCaster (this was already implemented in the shadow-casting section above; if you would rather not write it by hand, you can simply use UsePass "Standard/ShadowCaster").
  2. You may also need to enable the depth texture on the camera: GetComponent<Camera>().depthTextureMode |= DepthTextureMode.Depth;
2. Waves
  1. Sample the normal map and scroll it over time.
  2. Sample again with the normal map's (x, y) swapped and negated for a better normal result. (Note: swapping (x, y) turns the time offset that originally applied to the x axis into one that applies to the y axis.)
  3. Use the first sample's value as an offset for a second sample; this breaks up the normal pattern and makes the waves look much finer.
// Wave normals (sampling a second time with the first sample as an offset breaks up the original normals into smaller waves)
float4 bumpOffset1 = tex2D(_NormalTex, IN.uv_NormalTex + float2(_WaterSpeed * _Time.y, 0));
float4 bumpOffset2 = tex2D(_NormalTex, float2(-IN.uv_NormalTex.y, -IN.uv_NormalTex.x) + float2(_WaterSpeed * _Time.y, 0));
float2 offset = UnpackNormal(((bumpOffset1 + bumpOffset2) / 2)).xy * _Refract;
float4 bumpColor1 = tex2D(_NormalTex, IN.uv_NormalTex + offset + float2(_WaterSpeed * _Time.y, 0));
float4 bumpColor2= tex2D(_NormalTex, float2(-IN.uv_NormalTex.y, -IN.uv_NormalTex.x) + offset + float2(_WaterSpeed * _Time.y, 0));

o.Normal = UnpackNormal((bumpColor1 + bumpColor2) / 2);
3. Custom specular and diffuse lighting
#pragma surface surf WaterLight vertex:vert alpha:blend noshadow
fixed4 LightingWaterLight(SurfaceOutput s, half3 lightDir, half3 viewDir, half atten)
{
    float diffuse = saturate(dot(normalize(lightDir), s.Normal));
    half3 halfDir = normalize(lightDir + viewDir);
    float nh = saturate(dot(halfDir, s.Normal));
    // Blinn-Phong specular term
    float spec = pow(nh, s.Specular * 128) * s.Gloss;
    fixed4 c;
    c.rgb = (s.Albedo * _LightColor0.rgb * diffuse + _SpecularColor.rgb * spec * _LightColor0.rgb) * atten;
    c.a = s.Alpha + spec * _SpecularColor.a;
    return c;
}
4. Shoreline foam
  1. The shallower the water, the larger the foam offset.
  2. Sample a noise texture to perturb the foam texture coordinates.
  3. Move the sample point across the foam texture over time; shallow areas sample toward the right of the texture, deeper areas toward the left.
  4. Use the formula 1 - (sin() + 1) / 2 to fade the foam in and out.
  5. Add the resulting color onto the base diffuse color.
// Shoreline foam
// The shallower the water, the larger the foam offset
half waveOffset = 1 - saturate(deltaDepth / _WaveRange);
// Sample a noise texture to perturb the foam texture coordinates
fixed4 noiserColor = tex2D(_NoiseTex, IN.uv_NoiseTex);
// Move the sample point across the foam texture over time; shallow areas sample toward the right, deep areas toward the left
half waveOffsetAnim = waveOffset + _WaveAmplitude * (sin(_Time.x * _WaveSpeed + noiserColor.r));
// offset is the second-sample offset computed from the wave normals above
fixed4 waveColor = tex2D(_WaveTex, float2(waveOffsetAnim, .5) + offset);
// Fade the foam in and out; the * noiserColor.r keeps the foam's opacity from being uniform everywhere
waveColor.rgb *= (1 - (sin(_Time.x * _WaveSpeed + noiserColor.r) + 1) / 2) * noiserColor.r;
// A second foam layer
fixed4 waveColor2 = tex2D(_WaveTex, float2(waveOffsetAnim + _WaveDelta, 1) + offset);
waveColor2.rgb *= (1 - (sin(_Time.x * _WaveSpeed + _WaveDelta + noiserColor.r) + 1) / 2) * noiserColor.r;

// c is the base water color
o.Albedo = c + (waveColor.rgb + waveColor2.rgb) * waveOffset;
5. Grab-pass distortion
  1. Capture the screen with GrabPass{ "GrabPassTex" }.
  2. Use the perturbed wave normals as a per-pixel offset, scaled by the eye depth stored in proj.z.
  3. Divide the computed screen position (with the normal-based offset added) by its w component to get the uv into the grabbed image at this point.
// Sample the grabbed screen image
float3 normal = UnpackNormal((bumpColor1 + bumpColor2) / 2);
float2 handoffset = normal.xy * _Distortion * GrabPassTex_TexelSize.xy;
fixed3 refrCol = tex2D(GrabPassTex, (handoffset * IN.proj.z + IN.proj.xy) / IN.proj.w).rgb;

o.Albedo = (c + (waveColor.rgb + waveColor2.rgb) * waveOffset) * refrCol;
6. Fresnel reflection
  1. Obtain the reflection and refraction vectors.
  2. Compute the Fresnel term and use it to blend between reflection and refraction.
fixed3 reflection = texCUBE(_Cubemap, WorldReflectionVector(IN, normal)).rgb;
fixed fresnel = _FresnelScale + (1 - _FresnelScale) * pow(1 - dot(normalize(IN.viewDir), WorldNormalVector(IN, normal)), 5);
fixed3 refrAndRefl = lerp(reflection, refrCol, saturate(fresnel));

o.Albedo = (c + (waveColor.rgb + waveColor2.rgb) * waveOffset) * refrAndRefl;

9. Wetlands effect

1. Principle
  1. The original ground and the water surface differ in their normals, and in metalness, smoothness, and color; the effect blends between the two sets of values.
void surf (Input IN, inout SurfaceOutputStandard o)
{
    // A mask texture decides where there is water and where dry ground
    fixed wetness = tex2D(_WetMap, IN.uv_WetMap).r * _Wetness;

    // Blend the color between the dry color and the wet color
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * lerp(_Color, _WetColor, wetness);
    o.Albedo = c.rgb;
    // Blend the normal between the ground normal and a flat water-surface normal
    o.Normal = lerp(UnpackScaleNormal(tex2D(_Normal, IN.uv_Normal), _NormalScale), half3(0, 0, 1), wetness);
    // Blend the metalness
    o.Metallic = lerp(_Metallic, _WetMetallic, wetness);
    // Blend the smoothness
    o.Smoothness = lerp(_Glossiness, _WetGlossiness, wetness);
    o.Alpha = c.a;
}


Origin blog.csdn.net/qq_50682713/article/details/125758881