【Unity URP】Hand-written PBR: Porting from Built-in to URP

Preface

I plan to do stylized PBR + NPR rendering under URP later, so the first step is to move my previously hand-written PBR over to the URP pipeline. URP versions change very quickly, so here is my project environment as a heads-up for anyone reading this later:

URP 12.1.7

Unity 2021.3.8f1


The overall framework is almost unchanged. For now I'm only implementing the main-light part; since this port from Built-in to URP exists to serve the later NPR + PBR work, I want the main light working first, so additional-light support is shelved and will be filled in later.

1 Basic lighting calculation

1.1 Parallax

URP computes parallax mapping here:

void ApplyPerPixelDisplacement(half3 viewDirTS, inout float2 uv)
{
#if defined(_PARALLAXMAP)
    uv += ParallaxMapping(TEXTURE2D_ARGS(_ParallaxMap, sampler_ParallaxMap), viewDirTS, _Parallax, uv);
#endif
}

The ParallaxMapping function:

float2 ParallaxMapping(TEXTURE2D_PARAM(heightMap, sampler_heightMap), half3 viewDirTS, half scale, float2 uv)
{
    half h = SAMPLE_TEXTURE2D(heightMap, sampler_heightMap, uv).g;
    float2 offset = ParallaxOffset1Step(h, scale, viewDirTS);
    return offset;
}

which in turn calls:

half2 ParallaxOffset1Step(half height, half amplitude, half3 viewDirTS)
{
    height = height * amplitude - amplitude / 2.0;
    half3 v = normalize(viewDirTS);
    v.z += 0.42;
    return height * (v.xy / v.z);
}

So we need to replace the Built-in pipeline's ParallaxOffset with ParallaxOffset1Step. Note that it takes viewDirTS, so the world-space viewDir also has to be transformed into tangent space; URP provides the corresponding function:

half3 GetViewDirectionTangentSpace(half4 tangentWS, half3 normalWS, half3 viewDirWS)
{
    // must use interpolated tangent, bitangent and normal before they are normalized in the pixel shader.
    half3 unnormalizedNormalWS = normalWS;
    const half renormFactor = 1.0 / length(unnormalizedNormalWS);

    // use bitangent on the fly like in hdrp
    // IMPORTANT! If we ever support Flip on double sided materials ensure bitangent and tangent are NOT flipped.
    half crossSign = (tangentWS.w > 0.0 ? 1.0 : -1.0); // we do not need to multiple GetOddNegativeScale() here, as it is done in vertex shader
    half3 bitang = crossSign * cross(normalWS.xyz, tangentWS.xyz);

    half3 WorldSpaceNormal = renormFactor * normalWS.xyz;       // we want a unit length Normal Vector node in shader graph

    // to preserve mikktspace compliance we use same scale renormFactor as was used on the normal.
    // This is explained in section 2.2 in "surface gradient based bump mapping framework"
    half3 WorldSpaceTangent = renormFactor * tangentWS.xyz;
    half3 WorldSpaceBiTangent = renormFactor * bitang;

    half3x3 tangentSpaceTransform = half3x3(WorldSpaceTangent, WorldSpaceBiTangent, WorldSpaceNormal);
    half3 viewDirTS = mul(tangentSpaceTransform, viewDirWS);

    return viewDirTS;
}

The functions above are all kept together in ParallaxMapping.hlsl.
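
Putting the pieces together in the fragment shader looks roughly like this. This is only a sketch under my own assumptions: i.positionWS, i.normalWS and a half4 i.tangentWS (mirror sign in w) are interpolators from my v2f, and the _PARALLAXMAP keyword has to be enabled for ApplyPerPixelDisplacement to do anything:

half3 viewDirWS = normalize(GetWorldSpaceViewDir(i.positionWS)); // URP helper: view direction in world space
half3 viewDirTS = GetViewDirectionTangentSpace(i.tangentWS, i.normalWS, viewDirWS);
ApplyPerPixelDisplacement(viewDirTS, i.uv); // offsets i.uv in place when _PARALLAXMAP is defined
// sample the albedo / normal / MRA maps with the displaced i.uv afterwards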

1.2 unity_ColorSpaceDielectricSpec

In URP this constant is defined in BRDF.hlsl, under a different name:

#define kDielectricSpec half4(0.04, 0.04, 0.04, 1.0 - 0.04) // standard dielectric reflectivity coef at incident angle (= 4%)

Remember to substitute it when porting.
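
For example, this is roughly how kDielectricSpec slots into the usual F0 setup; albedo and metallic here are placeholders for whatever your material provides:

half3 F0 = lerp(kDielectricSpec.rgb, albedo, metallic); // 4% base reflectivity for dielectrics
half oneMinusReflectivity = kDielectricSpec.a - metallic * kDielectricSpec.a; // kDielectricSpec.a = 1 - 0.04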

1.3 UNITY_PI

Note that in the Built-in pipeline pi is UNITY_PI, defined in UnityCG.cginc, whereas in URP it becomes PI, defined in Macros.hlsl.

1.4 Indirect lighting in URP

In URP, everything related to indirect lighting lives in GlobalIllumination.hlsl. The ambient diffuse term is still reconstructed from spherical harmonics; the Built-in pipeline provides ShadeSH9 for this:

half3 ShadeSH9 (half4 normal)
{
    // Linear + constant polynomial terms
    half3 res = SHEvalLinearL0L1 (normal);

    // Quadratic polynomials
    res += SHEvalLinearL2 (normal);

#   ifdef UNITY_COLORSPACE_GAMMA
        res = LinearToGammaSpace (res);
#   endif

    return res;
}

where

// normal should be normalized, w=1.0
half3 SHEvalLinearL0L1 (half4 normal)
{
    half3 x;

    // Linear (L1) + constant (L0) polynomial terms
    x.r = dot(unity_SHAr,normal);
    x.g = dot(unity_SHAg,normal);
    x.b = dot(unity_SHAb,normal);

    return x;
}

In URP it becomes:

// Samples SH L0, L1 and L2 terms
half3 SampleSH(half3 normalWS)
{
    // LPPV is not supported in Ligthweight Pipeline
    real4 SHCoefficients[7];
    SHCoefficients[0] = unity_SHAr;
    SHCoefficients[1] = unity_SHAg;
    SHCoefficients[2] = unity_SHAb;
    SHCoefficients[3] = unity_SHBr;
    SHCoefficients[4] = unity_SHBg;
    SHCoefficients[5] = unity_SHBb;
    SHCoefficients[6] = unity_SHC;

    return max(half3(0, 0, 0), SampleSH9(SHCoefficients, normalWS));
}

SampleSH9 itself is defined in EntityLighting.hlsl:

half3 SampleSH9(half4 SHCoefficients[7], half3 N)
{
    half4 shAr = SHCoefficients[0];
    half4 shAg = SHCoefficients[1];
    half4 shAb = SHCoefficients[2];
    half4 shBr = SHCoefficients[3];
    half4 shBg = SHCoefficients[4];
    half4 shBb = SHCoefficients[5];
    half4 shCr = SHCoefficients[6];

    // Linear + constant polynomial terms
    half3 res = SHEvalLinearL0L1(N, shAr, shAg, shAb);

    // Quadratic polynomials
    res += SHEvalLinearL2(N, shBr, shBg, shBb, shCr);

#ifdef UNITY_COLORSPACE_GAMMA
    res = LinearToSRGB(res);
#endif

    return res;
}

The two pipelines are in fact doing the same thing; only the function names changed (I don't understand why so much was renamed!). This part affects the indirect diffuse term. Using SampleSH() directly, bringing indirect diffuse into the shader is a one-liner:

// Indirect lighting

// Diffuse
i.ambientOrLightmapUV += SampleSH(N); // SH only
float3 indirectDiffuse = i.ambientOrLightmapUV * occlusion;

The indirect specular term follows the same approach as well; only some function names and usages change. For example, the cubemap sampling function becomes SAMPLE_TEXTURECUBE_LOD(), and URP no longer provides a FresnelLerp() function, so you have to supply your own (I'm not sure whether it was removed or just renamed).

The HDR decoding function becomes DecodeHDREnvironment().

1.5 Final comparison

The one at the back is my shader; the one nearer the front with slightly darker colors is URP's own Lit shader. I found that if I treat the diffuse term as plain Lambert (mine is written following the Disney scheme):

the results are almost identical (mine has no shadows only because the shader doesn't handle them yet), so it seems the diffuse term in Lit is still Lambert.

Finally, here is part of my custom HLSL function library, for reference only:

//2023.4.8

// Disney_Diffuse
inline float3 Diffuse_Disney(float roughness, float ndotv, float ndotl, float ldoth){
    float FD90 = 0.5 + 2 * ldoth * ldoth * roughness;
    float FdV = 1 + (FD90 - 1) * pow((1 - ndotv), 5);
    float FdL = 1 + (FD90 - 1) * pow((1 - ndotl), 5);
    return FdV * FdL; // dividing by PI darkens the shading a lot, so PI is omitted here
}

// Normal distribution function D

// D_GGX
inline float DistributionGGX(float ndoth, float squareRoughness){
    float m = ndoth * ndoth * (squareRoughness - 1) + 1;
    return squareRoughness / ((m * m) * PI); // PI normalization for energy conservation
}

// Geometry (shadowing-masking) term G

inline float SchlickGGX(float ndotv, float roughness){
    float r = roughness + 1;
    float m = r * r / 2;
    float k = lerp(ndotv,1,m);
    return ndotv / k;
}
inline float Unity_G(float ndotv, float ndotl, float roughness){
    float ggx1 = SchlickGGX(ndotl, roughness);
    float ggx2 = SchlickGGX(ndotv, roughness);
    return ggx1 * ggx2;
}


// Fresnel term F

// Unity passes in ldoth here rather than vdoth
inline float3 Unity_Fresnel(float3 F0, float cosA){
    float a = pow((1 - cosA), 5);
    return (F0 + (1 - F0) * a);
}

// Indirect lighting

inline float3 FresnelLerp (half3 F0, half3 F90, half cosA)
{
    half t = Pow4 (1 - cosA);   // FAST WAY
    return lerp (F0, F90, t);
}

// Indirect specular

// Pick the mip level
inline float CubeMapMip(float _Roughness){
    // compute the cubemap mip level from roughness
    float mip_roughness = _Roughness * (1.7 - 0.7 * _Roughness); // fitted curve
    float mip = mip_roughness * 6; // map roughness onto the 7 mip levels 0-6 for the LOD sample
    return mip;
}

// Fetch the reflection-probe color
inline float3 IndirectSpecularCube(float _Roughness, float3 viewDir, float3 worldNormal, float occlusion){
    float mip = CubeMapMip(_Roughness); // pick the mip level from roughness
    float3 reflectVec = normalize(reflect(-viewDir, worldNormal)); // sampling direction
    float4 rgbm = SAMPLE_TEXTURECUBE_LOD(unity_SpecCube0, samplerunity_SpecCube0, reflectVec, mip); // sample the reflection probe cubemap at that LOD
    float3 iblSpecular = DecodeHDREnvironment(rgbm, unity_SpecCube0_HDR); // decode the HDR-encoded color
    return iblSpecular * occlusion;
}
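
For reference, this is roughly how I assemble the indirect term from the helpers above; the surfaceReduction / grazingTerm pattern follows the Built-in Standard shader, and names such as F0, roughness, oneMinusReflectivity, indirectDiffuse and albedo are assumed to have been computed earlier in the shader:

float3 envSpecular = IndirectSpecularCube(roughness, viewDir, N, occlusion);
float surfaceReduction = 1.0 / (roughness * roughness + 1.0); // dims the reflection on rough surfaces
float grazingTerm = saturate((1.0 - roughness) + (1.0 - oneMinusReflectivity));
float3 indirectSpecular = surfaceReduction * envSpecular * FresnelLerp(F0, grazingTerm, saturate(dot(N, viewDir)));
float3 indirectLight = indirectDiffuse * albedo + indirectSpecular;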

As for the details of the implementation, see my earlier articles on implementing PBR in the Built-in pipeline.

2 Receiving shadows in URP

In 【Unity Shader】Unity中阴影映射标准制作流程 I gave a rough breakdown of Unity's shadow solution, screen-space shadow mapping, and what the workflow looks like in the Built-in pipeline:

  • Declare the macro that holds the shadow-map sampling coordinates: SHADOW_COORDS()
  • Compute the coordinates: TRANSFER_SHADOW()
  • Sample the shadow map: SHADOW_ATTENUATION()

All of which is pulled in by a single include:

#include "AutoLight.cginc"

So in the Built-in pipeline we don't need to worry about any extra keywords.

URP, however, expects you to write out everything you use; it isn't like Built-in, where including a single cginc file pulls in all the helper functions. So we have to get to the bottom of things and figure out exactly what needs to be added.

I actually think this is a good thing, because writing URP shaders well forces you to declare every keyword explicitly. The code gets much longer, but it helps us understand how things work.

If you look carefully, URP's upgrade guide tells us how to implement shadows:

## Sampling shadows from the Main Light

In previous versions of URP, if shadow cascades were enabled for the main Light, shadows would be resolved in a screen space pass. The pipeline now always resolves shadows while rendering opaque or transparent objects. This allows for consistency and solved many issues regarding shadows.

If you have custom HLSL shaders and sample `_ScreenSpaceShadowmapTexture` texture, you must upgrade them to sample shadows by using the `GetMainLight` function instead.

For example:

```

float4 shadowCoord = TransformWorldToShadowCoord(positionWorldSpace);

Light mainLight = GetMainLight(inputData.shadowCoord);

// now you can use shadow to apply realtime occlusion

half shadow = mainLight.shadowAttenuation;

```

You must also define the following in your .shader file to make sure your custom shader can receive shadows correctly:

```

#pragma multi_compile _ _MAIN_LIGHT_SHADOWS

#pragma multi_compile _ _MAIN_LIGHT_SHADOWS_CASCADE

```

In short: to receive shadows in a custom shader you first need to declare two keywords:

#pragma multi_compile _ _MAIN_LIGHT_SHADOWS

#pragma multi_compile _ _MAIN_LIGHT_SHADOWS_CASCADE

Only then can you call the provided API to obtain the shadow coordinate:

float4 shadowCoord = TransformWorldToShadowCoord(positionWorldSpace);

Then, just as UNITY_LIGHT_ATTENUATION computes the shadow term in the Built-in pipeline, we use this shadow coordinate for the shadow calculation; the difference is that we go through the GetMainLight API:

Light mainLight = GetMainLight(inputData.shadowCoord);

and read the shadow term from the returned mainLight:

half shadow = mainLight.shadowAttenuation;

Next, let's see what the TransformWorldToShadowCoord() and GetMainLight() APIs actually do.

2.1 TransformWorldToShadowCoord

This API is defined in Shadows.hlsl; it takes a world-space position and generates the shadow coordinate:

float4 TransformWorldToShadowCoord(float3 positionWS)
{
#ifdef _MAIN_LIGHT_SHADOWS_CASCADE
    half cascadeIndex = ComputeCascadeIndex(positionWS);
#else
    half cascadeIndex = half(0.0);
#endif

    float4 shadowCoord = mul(_MainLightWorldToShadow[cascadeIndex], float4(positionWS, 1.0));

    return float4(shadowCoord.xyz, 0);
}

A keyword has to be declared for the shadow coordinates to be generated correctly, so we add it to the shader:

#pragma multi_compile _MAIN_LIGHT_SHADOWS_CASCADE // needed so TransformWorldToShadowCoord returns the correct shadow coordinates

2.2 GetMainLight

Back in VS Code, searching for GetMainLight turns up this entry in the CHANGELOG:

[Shader API] The `GetMainLight` and `GetAdditionalLight` functions can now compute shadow attenuation and store it in the new `shadowAttenuation` field in `LightData` struct.

For now we only care about the main light's shadow. The same file also describes GetMainLight():

-GetMainLight() is provided in shader to initialize Light struct with main light shading data.

The function itself is defined in RealtimeLights.hlsl:

Light GetMainLight(float4 shadowCoord)
{
    Light light = GetMainLight();
    light.shadowAttenuation = MainLightRealtimeShadow(shadowCoord);
    return light;
}

where

half MainLightRealtimeShadow(float4 shadowCoord)
{
#if !defined(MAIN_LIGHT_CALCULATE_SHADOWS)
    return half(1.0);
#elif defined(_MAIN_LIGHT_SHADOWS_SCREEN) && !defined(_SURFACE_TYPE_TRANSPARENT)
    return SampleScreenSpaceShadowmap(shadowCoord);
#else
    ShadowSamplingData shadowSamplingData = GetMainLightShadowSamplingData();
    half4 shadowParams = GetMainLightShadowParams();
    return SampleShadowmap(TEXTURE2D_ARGS(_MainLightShadowmapTexture, sampler_MainLightShadowmapTexture), shadowCoord, shadowSamplingData, shadowParams, false);
#endif
}

This function samples the shadow map with the shadow coordinate you pass in and stores the result in shadowAttenuation. The whole thing is gated by the MAIN_LIGHT_CALCULATE_SHADOWS keyword, which is only defined when one of the main-light shadow keywords is enabled — which explains why the process above requires declaring:

#pragma multi_compile _ _MAIN_LIGHT_SHADOWS_CASCADE
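
For reference, the keyword wiring in Shadows.hlsl looks roughly like this in URP 12 (abridged; check your own package version):

// Abridged: MAIN_LIGHT_CALCULATE_SHADOWS only exists when a main-light shadow keyword is on
#if !defined(_RECEIVE_SHADOWS_OFF)
    #if defined(_MAIN_LIGHT_SHADOWS) || defined(_MAIN_LIGHT_SHADOWS_CASCADE) || defined(_MAIN_LIGHT_SHADOWS_SCREEN)
        #define MAIN_LIGHT_CALCULATE_SHADOWS
    #endif
#endif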

2.3 Putting it into the shader

Declare the keywords:

            #pragma multi_compile _ _MAIN_LIGHT_SHADOWS       // receive shadows
            #pragma multi_compile _MAIN_LIGHT_SHADOWS_CASCADE // generate shadow coordinates
            #pragma multi_compile_fragment _ _SHADOWS_SOFT    // soft shadows (optional)

Add this to the v2f struct:

float4 shadowCoord : TEXCOORD2;  // shadow coordinates

In the vertex shader:

o.shadowCoord = TransformWorldToShadowCoord(positionWS); // generate the shadow coordinates
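
For context, positionWS is just the world-space position computed earlier in the vertex shader; a minimal sketch with my own field names:

v2f vert (a2v v)
{
    v2f o;
    float3 positionWS = TransformObjectToWorld(v.positionOS.xyz); // object -> world
    o.positionCS = TransformWorldToHClip(positionWS);             // world -> clip
    o.shadowCoord = TransformWorldToShadowCoord(positionWS);      // generate the shadow coordinates
    // ... other outputs (uv, normalWS, etc.)
    return o;
}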

In the fragment shader:

                // Shadows
                Light shadowLight = GetMainLight(i.shadowCoord); // compute the shadow attenuation
                float shadow = shadowLight.shadowAttenuation;    // fetch the attenuation
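
The attenuation then simply multiplies into the direct-light term; a sketch with placeholder names (diffuseTerm and specularTerm stand for the BRDF results computed earlier):

                half ndotl = saturate(dot(N, shadowLight.direction));
                float3 directLight = (diffuseTerm + specularTerm) * shadowLight.color * ndotl * shadow;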

The result is shown below; the sphere uses the Lit shader and the crate uses my own PBR shader:

Soft shadows enabled

And for comparison, with soft shadows disabled:

Soft shadows disabled (i.e. hard shadows, the default)

Finally I added a Toggle to the material inspector to make this easy to switch:

3 Casting shadows in URP

3.1 Approach 1: UsePass directly

Simply UsePass the built-in ShadowCaster pass at the end of the shader:
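
The UsePass line looks roughly like this (UsePass references pass names in upper case, and the path assumes the stock URP Lit shader):

UsePass "Universal Render Pipeline/Lit/SHADOWCASTER"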

The documentation describes this pass as follows:

| **ShadowCaster** | The Pass renders object depth from the perspective of lights into the Shadow map or a depth texture. |

We can find this pass in the Lit shader:

        Pass
        {
            Name "ShadowCaster"
            Tags{"LightMode" = "ShadowCaster"}

            ZWrite On
            ZTest LEqual
            ColorMask 0
            Cull[_Cull]

            HLSLPROGRAM
            #pragma exclude_renderers gles gles3 glcore
            #pragma target 4.5

            // -------------------------------------
            // Material Keywords
            #pragma shader_feature_local_fragment _ALPHATEST_ON
            #pragma shader_feature_local_fragment _SMOOTHNESS_TEXTURE_ALBEDO_CHANNEL_A

            //--------------------------------------
            // GPU Instancing
            #pragma multi_compile_instancing
            #pragma multi_compile _ DOTS_INSTANCING_ON

            // -------------------------------------
            // Universal Pipeline keywords

            // This is used during shadow map generation to differentiate between directional and punctual light shadows, as they use different formulas to apply Normal Bias
            #pragma multi_compile_vertex _ _CASTING_PUNCTUAL_LIGHT_SHADOW

            #pragma vertex ShadowPassVertex
            #pragma fragment ShadowPassFragment

            #include "Packages/com.unity.render-pipelines.universal/Shaders/LitInput.hlsl"
            #include "Packages/com.unity.render-pipelines.universal/Shaders/ShadowCasterPass.hlsl"
            ENDHLSL
        }

Copying it over does indeed work:

But there's a problem: it breaks batching!

And honestly, lifting the pass straight out of Lit.shader feels too lazy: reproducing what Lit.shader does is the whole point of this exercise!

So, let's implement it ourselves!

3.2 Approach 2: Write it yourself

This part draws on: [URP 学习记录] 阴影的接收与生成 - 知乎 (zhihu.com)

urp管线的自学hlsl之路 第十二篇 ShadowCaster和SRP batcher - 哔哩哔哩 (bilibili.com)

Let's first look at the contents of ShadowCasterPass.hlsl:

#ifndef UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED
#define UNIVERSAL_SHADOW_CASTER_PASS_INCLUDED

#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"
#include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Shadows.hlsl"

// Shadow Casting Light geometric parameters. These variables are used when applying the shadow Normal Bias and are set by UnityEngine.Rendering.Universal.ShadowUtils.SetupShadowCasterConstantBuffer in com.unity.render-pipelines.universal/Runtime/ShadowUtils.cs
// For Directional lights, _LightDirection is used when applying shadow Normal Bias.
// For Spot lights and Point lights, _LightPosition is used to compute the actual light direction because it is different at each shadow caster geometry vertex.
float3 _LightDirection;
float3 _LightPosition;

struct Attributes
{
    float4 positionOS   : POSITION;
    float3 normalOS     : NORMAL;
    float2 texcoord     : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct Varyings
{
    float2 uv           : TEXCOORD0;
    float4 positionCS   : SV_POSITION;
};

float4 GetShadowPositionHClip(Attributes input)
{
    float3 positionWS = TransformObjectToWorld(input.positionOS.xyz);
    float3 normalWS = TransformObjectToWorldNormal(input.normalOS);

#if _CASTING_PUNCTUAL_LIGHT_SHADOW
    float3 lightDirectionWS = normalize(_LightPosition - positionWS);
#else
    float3 lightDirectionWS = _LightDirection;
#endif

    float4 positionCS = TransformWorldToHClip(ApplyShadowBias(positionWS, normalWS, lightDirectionWS));

#if UNITY_REVERSED_Z
    positionCS.z = min(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#else
    positionCS.z = max(positionCS.z, UNITY_NEAR_CLIP_VALUE);
#endif

    return positionCS;
}

Varyings ShadowPassVertex(Attributes input)
{
    Varyings output;
    UNITY_SETUP_INSTANCE_ID(input);

    output.uv = TRANSFORM_TEX(input.texcoord, _BaseMap);
    output.positionCS = GetShadowPositionHClip(input);
    return output;
}

half4 ShadowPassFragment(Varyings input) : SV_TARGET
{
    Alpha(SampleAlbedoAlpha(input.uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap)).a, _BaseColor, _Cutoff);
    return 0;
}

#endif

Using this as a reference, we write our own pass, which mostly amounts to re-doing GetShadowPositionHClip(). The ApplyShadowBias function applies the shadow depth/normal bias (to avoid shadow acne); it's defined in Shadows.hlsl, and we can copy it straight over:

float3 ApplyShadowBias(float3 positionWS, float3 normalWS, float3 lightDirection)
{
    float invNdotL = 1.0 - saturate(dot(lightDirection, normalWS));
    float scale = invNdotL * _ShadowBias.y;

    // normal bias is negative since we want to apply an inset normal offset
    positionWS = lightDirection * _ShadowBias.xxx + positionWS;
    positionWS = normalWS * scale.xxx + positionWS;
    return positionWS;
}

One more thing to note: for this pass to take part in the pipeline's shadow rendering, its LightMode tag must be ShadowCaster; only then will the Cast Shadows settings actually control it.
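
For reference, the skeleton of my own caster pass ends up looking something like this (abridged; the pass name and entry-point names are my own, and _ShadowBias is a pipeline-set global that you either declare yourself, as here, or pull in by including Shadows.hlsl):

        Pass
        {
            Name "MyShadowCaster"
            Tags{"LightMode" = "ShadowCaster"} // must be ShadowCaster to take part in shadow-map rendering

            ZWrite On
            ZTest LEqual
            ColorMask 0 // depth only, no color output

            HLSLPROGRAM
            #pragma vertex vert_shadow
            #pragma fragment frag_shadow
            #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

            float4 _ShadowBias; // set by the pipeline, consumed by the copied ApplyShadowBias
            // ... copied ApplyShadowBias, plus vert_shadow / frag_shadow below
            ENDHLSL
        }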

Finally, here's the main part of the shader:

            v2f vert_shadow (a2v v)
            {
                v2f o;
                o.uv = TRANSFORM_TEX(v.uv, _BaseTex);
                float3 positionWS = TransformObjectToWorld(v.positionOS.xyz);
                o.normalWS.xyz = normalize(TransformObjectToWorldNormal(v.normalOS.xyz));
                Light mainLight = GetMainLight();
                o.positionCS = TransformWorldToHClip(ApplyShadowBias(positionWS.xyz, o.normalWS.xyz, mainLight.direction));
                // replicate the z clamp from GetShadowPositionHClip:
                #if UNITY_REVERSED_Z
                    o.positionCS.z = min(o.positionCS.z, UNITY_NEAR_CLIP_VALUE);
                #else
                    o.positionCS.z = max(o.positionCS.z, UNITY_NEAR_CLIP_VALUE);
                #endif
                return o;
            }
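
The matching fragment shader only needs the depth write, so returning 0 is enough (add alpha clipping here if the material needs it); a minimal sketch:

            half4 frag_shadow (v2f i) : SV_TARGET
            {
                return 0; // ColorMask 0: only depth ends up in the shadow map
            }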

Done! And SRP batching is supported too:


Reposted from blog.csdn.net/qq_41835314/article/details/129991046