Study Notes: Physically Based Rendering - Indirect Lighting

        A note before the topic: it has been almost a year since the last article, and I feel I have not achieved much this year. Most of this article was written in December of last year; what follows is an extension of that material.

        In the previous article we derived the Cook-Torrance formula and implemented basic direct lighting, but for real-time rendering this is far from enough: light in a scene never comes from a single isolated source, and much of it arrives only after being reflected elsewhere. This article introduces the principles of PRT and briefly lists its formulas; for the spherical harmonic approximation used in real-time rendering and for the IBL method, both formulas and code are given.

Irradiance Based Lighting

        For a point (pixel) on the surface of an object, the ambient illumination it receives is affected by light arriving from every direction. The outgoing radiance is simply an integral over all incident directions of the incoming radiance, multiplied by the BRDF at that point and weighted by cosθ:

                                                  L_{o}=\int_{\Omega}^{} L_{i}(\omega _{i}) f_{r}(\omega _{i},\omega _{o}) cos\theta d\omega _{i}

         At first glance this formula requires an integral for every shaded point, i.e. shooting rays in all directions from that point to gather incoming light, which is not feasible in real-time rendering. Instead, the result is precomputed for the scene's environment (a CubeMap): the diffuse and the specular (glossy) parts are integrated in advance, so that at runtime the correct result can be obtained by a simple texture lookup.

        Writing the BRDF term fr above in its full form:

                                       L_{o}=\int_{\Omega}^{}(\frac{c}{\pi}+\frac{DFG}{4(n\cdot \omega _{i})(n\cdot \omega _{o})}) L_{i}(\omega _{i}) cos\theta d\omega _{i}

        It is obvious that Diffuse and Glossy can be calculated separately.

Diffuse pre-integrated map

        Taking the diffuse part out on its own, we get the diffuse integral:

                                                 L_{o}=\int_{\Omega}^{}\frac{c}{\pi} L_{i}(\omega _{i}) cos\theta d\omega _{i}

        For each point of the object, the sample directions lie on the hemisphere whose main axis is the point's normal. The incident direction ωi is expressed in spherical coordinates centered at the shaded point; we can treat it as an ordinary hemisphere and transform it into world space during the calculation (a world space whose axes are centered at the point). Converting the solid angle ωi in the diffuse integral above into spherical coordinates gives:

                                     L_{o}=\frac{c}{\pi}\int_{0}^{2\pi}\int_{0}^{\frac{\pi }{2}} L_{i }(\theta ,\phi) cos\theta sin\theta d\phi d\theta

        Dropping the BRDF part, we call the remaining integral the irradiance:

                                  Irradiance=\int_{0}^{2\pi}\int_{0}^{\frac{\pi }{2}} L_{i }(\theta ,\phi) cos\theta sin\theta d\phi d\theta

        Because of the conversion to spherical coordinates, the solid angle element on the sphere is dω = sinθ dφ dθ.

        The relationship between φ and θ comes from the arc lengths they sweep on the sphere. The arc swept by dθ has radius R, while the arc swept by dφ lies on a circle of radius R sinθ. The area of the spherical surface element is the product of these two arc lengths, so it carries a factor of sinθ, which is where the extra sinθ in the integral comes from.

        Put another way, sinθ can be seen as a weight applied to φ, representing how the length of a φ increment changes with the latitude θ.
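
        To make the sinθ factor explicit, here is the one-line derivation of the area element (on a unit sphere, R = 1):

                                  d\omega =\frac{(R\, d\theta)(R\, sin\theta\, d\phi)}{R^{2}}=sin\theta\, d\theta\, d\phi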

        There are essentially two ways to evaluate this integral: 1. Monte Carlo integration; 2. a Riemann sum. This article uses a Riemann sum: take as many discrete samples over the hemisphere as is practical, multiply each by its weight, and add them up. The integral can be written as:

                                  Irradiance\approx (2\pi-0)(\frac{\pi}{2}-0)\frac{1}{n1*n2}\sum_{j=0}^{n1}\sum_{i=0}^{n2} L_{i}(\theta_{i},\phi_{j}) cos\theta_{i} sin\theta_{i}

        This Riemann sum can be evaluated directly in a shader.
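
        One detail worth spelling out, because it explains the weight used in the shader further below: the constant in front of the Riemann sum is (2\pi)(\frac{\pi}{2})\frac{1}{n1*n2}=\frac{\pi^{2}}{n1*n2}, and the diffuse term still carries the Lambertian \frac{1}{\pi}, so the value actually stored in the map is

                                  \frac{1}{\pi}Irradiance\approx \frac{\pi}{n1*n2}\sum_{\phi}\sum_{\theta} L_{i}(\theta_{i},\phi_{j}) cos\theta_{i} sin\theta_{i}

        which is exactly the PI/sampleCount weighting used in the fragment shader below; at runtime the sampled map only needs to be multiplied by the albedo c.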

        When evaluating θi and φj over the sphere, the two-dimensional UV has to be converted into a three-dimensional direction to do the sampling. The output is stored as a panorama (equirectangular map) and can be converted into a CubeMap when needed. We render the panorama into a RenderTexture whose width and height are (width*2, width), where width is the side length of one cube face, using Graphics.Blit:

    // Renders the irradiance panorama. Assumes the fields targetCube (the source Cubemap),
    // irradianceShader, irradianceMaterial and the counter xx are declared on the same component.
    public void irradianceMapRenderer()
    {
        RenderTexture RT1 = new RenderTexture(targetCube.width * 2, targetCube.width, 0, RenderTextureFormat.ARGBFloat);
        RT1.wrapMode = TextureWrapMode.Repeat;
        RT1.Create();
        irradianceMaterial = new Material(irradianceShader);
        irradianceMaterial.SetTexture("_cube", targetCube);

        // The shader below has a single pass, so pass index 0 is used.
        Graphics.Blit(null, RT1, irradianceMaterial, 0);

        xx++;
        Texture2D resultTexture = new Texture2D(targetCube.width * 2, targetCube.width, TextureFormat.RGBAFloat, true);
        RenderTexture current = RenderTexture.active;
        RenderTexture.active = RT1;
        resultTexture.ReadPixels(new Rect(0, 0, RT1.width, RT1.height), 0, 0);
        RenderTexture.active = current;
        // This effectively performs a temporary active-RT swap in order to read the result back.

        byte[] TexBinary = resultTexture.EncodeToEXR(Texture2D.EXRFlags.OutputAsFloat);
        File.WriteAllBytes(Application.dataPath + "/BlogShader/BRDF/SH&Probe/NEWresultTexture" + xx + ".exr", TexBinary);

        RT1.Release();
        return;
    }

       The principle of converting the 2D UV into a 3D direction on the sphere is as follows. The UV horizontal axis X and vertical axis Y both range over [0,1], while in spherical coordinates φ ranges over [-π, π] (a full turn) and θ over [0, π] (half a turn). Map X to φ and Y to θ with a simple linear mapping:

  • Mapping uv.x ∈ [0,1] to φ ∈ [-π,π]: \phi=2\pi *uv.x-\pi
  • Mapping uv.y ∈ [0,1] to θ ∈ [0,π]: \theta=\pi *uv.y

        At the same time, because of the difference between DirectX and OpenGL conventions, the origin of the Y axis may be the top-left corner, so in practice the Y value needs to be flipped, i.e. subtracted from 1.0. Moreover, because a left-handed coordinate system is used, the conversion from θ and φ to Cartesian coordinates changes slightly: the Y axis becomes the axis associated with the polar angle θ (swapping roles with the Z axis), so the Y and Z components of the converted Cartesian coordinates are also swapped. In this left-handed coordinate system the conversion is:

  • X component of the spherical vector = sin(θ)*cos(φ)
  • Y component of the spherical vector = cos(θ)
  • Z component of the spherical vector = sin(θ)*sin(φ)

        Then the function that converts the two-dimensional UV into the three-dimensional spherical vector is written as:

        //Convert 2D panorama UV coordinates into a 3D direction on the sphere
        float3 UV2normal(float2 uv)
        {
            float3 result;
            float fai=uv.x*PI*2-PI;
            float theta=(1-uv.y)*PI;
            //Note: this is the left-handed world-space coordinate system
            result.x=sin(theta)*cos(fai);
            result.y=cos(theta);
            result.z=sin(theta)*sin(fai);

            result=normalize(result);
            return result;
        }

        In the world space centered on the sphere, take the Y axis (0,1,0) and the known normal perpendicular to the sphere at the shaded point; we treat this normal as one basis axis of the local space. A tangent can then be obtained as the cross product of the Y axis and the normal, and the last basis axis, the binormal (bionormal in the code), as the cross product of those two. These three axes form the TBN matrix, which transforms each direction produced by the Riemann sum into world space for sampling. (The principle is exactly the same as transforming a normal map into world space, except that the normal and tangent vectors are not taken from the mesh data.)

        Originally I thought two passes were needed: a first pass to convert the CubeMap into a panorama, and a second pass to sample that panorama. It turns out this is unnecessary; the 3D direction converted from the UV can sample the CubeMap in world space directly. The complete code is:

Shader "Hidden/irradianceShader"
{
    Properties
    {
        _MainTex("Texture",2D)="white"{}
        _cube("Reflect CubeMap",Cube)="_Skybox"{}
    }
    SubShader
    {
        CGINCLUDE
        #define PI 3.1415926535898
        #include "UnityCG.cginc"
        sampler2D _MainTex;
        samplerCUBE _cube;

        struct appdata
        {
            float4 vertex : POSITION;
            float2 uv : TEXCOORD0;
        };

        struct v2f
        {
            float2 uv : TEXCOORD0;
            float4 vertex : SV_POSITION;
        };

        v2f vert (appdata v)
        {
            v2f o;
            o.vertex = UnityObjectToClipPos(v.vertex);
            o.uv = v.uv;
            o.uv.x = 1 - o.uv.x;
            return o;
        }

        //Convert 2D panorama UV coordinates into a 3D direction on the sphere
        float3 UV2normal(float2 uv)
        {
            float3 result;
            float fai=uv.x*PI*2-PI;
            float theta=(1-uv.y)*PI;
            //Note: this is the left-handed world-space coordinate system
            result.x=sin(theta)*cos(fai);
            result.y=cos(theta);
            result.z=sin(theta)*sin(fai);

            result=normalize(result);
            return result;
        }

        //Convert a 3D direction on the sphere back into 2D panorama UV coordinates
        float2 normal2UV(float3 normal)
        {
            float2 uv;
            uv.y=1.0-acos(normal.y)/PI;
            uv.x=atan2(normal.z,normal.x)/PI*0.5+0.5;
            return uv;
        }

        fixed4 fragSampler(v2f i):SV_TARGET
        {
            float3 irradiance=float3(0,0,0);
            float3 normal=UV2normal(i.uv);
            float3 tangent=float3(0,1,0);
            tangent=normalize(cross(tangent,normal));
            //The tangent is the cross product of the object-space Y axis and the normal, a direction tangent to the sphere
            float3 bionormal=normalize(cross(normal,tangent));
            //Now we have the orientation of all three axes

            float sampleDelta=0.025;
            float sampleCount1=0.0;
            float sampleCount2=0.0;

            for(float phi=0.0;phi<2.0*PI;phi+=sampleDelta)
            {
                for(float theta=0.0;theta<0.5*PI;theta+=sampleDelta)
                {
                    float3 tangentSample=float3(sin(theta)*cos(phi),sin(theta)*sin(phi),cos(theta));
                    //Build a unit (r=1) direction in tangent space from the current angles
                    float3 sampleVec=tangentSample.x*tangent+tangentSample.y*bionormal+tangentSample.z*normal;
                    //Transform from tangent space to world space; the basis vectors are (T,B,N)
                    irradiance+=texCUBE(_cube,sampleVec).rgb*cos(theta)*sin(theta);
                    sampleCount1++;
                }
            }
            float weight=PI/(sampleCount1);
            return float4(irradiance*weight,0);
        }
        ENDCG
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment fragSampler
            ENDCG
        }
    }
}

       With this method you obtain a panoramic (equirectangular) IrradianceMap of the CubeMap, which can then be turned back into a CubeMap with some additional logic (or a plugin); I won't go into the details here. The input panorama and the resulting panorama are as follows:

         The input CubeMap and output CubeMap results are as follows:

                              

        Although the logic of rendering an IrradianceMap this way is simple, for a complex lighting scene it means storing not only the CubeMap of the scene lighting but also an extra CubeMap holding the irradiance information, which is not a good fit for some lightweight platforms. PRT therefore introduces basis functions to represent the low-frequency irradiance.

Pre-integrating Diffuse with spherical harmonic basis functions

        First, because the lighting environment in a scene is relatively complex, an environment sphere, the CubeMap, is introduced to describe the overall environment. As the formula above shows, to compute how a point receives environment lighting we precompute the lighting integral and then sample the result at runtime. But if the scene is dynamic, or the lighting shifts, multiple CubeMaps would have to be pre-integrated to cover the possibilities; in particular, re-rendering the pre-integration for every rotation of the environment lighting is very expensive. In PRT, basis functions are therefore used to represent the original lighting integral as low-frequency environment lighting information: each sample point of the object only needs to store a small number of coefficients, and the environment lighting can be rotated at any time. The basis functions are not limited to the common spherical harmonics; wavelet bases and spherical Gaussians are also used in graphics.

        To understand and implement the Spherical Harmonics, we do not need to unpack too many mathematical formulas; piling on mathematical concepts does not help us understand how the approximation is implemented. We only need to know that, just like the Fourier transform, a high-frequency, complex function can be approximated by a sum of several low-frequency, simple basis functions (classically the trigonometric functions sin and cos). Spherical harmonic basis functions follow the same principle; they are simply a set of basis functions defined on the sphere. We project the original function onto several spherical harmonic basis functions to obtain the weight coefficients of those basis functions, and at runtime we only need to accumulate the basis functions multiplied by their weights to get an approximation of the original function.

         As the name suggests, the spherical harmonic basis functions are defined on the sphere, so their inputs are the spherical coordinates θ and φ (or the corresponding 3D direction). A basis function is written Y_{l}^{m}, where l is the band (a frequency index, sometimes called the layer or degree) counting from 0, and m is the order within band l, with -l \leq m \leq l, so each band l contains 2l+1 basis functions. The first n bands therefore contain n^{2} basis functions in total. The spherical harmonic basis function can be written as:

                                                Y_{l}^{m}(\theta,\phi)=K_{l}^{m}e^{im\phi}P_{l}^{|m|}(cos\theta)

         P_{l}^{m} is the associated Legendre polynomial, and K_{l}^{m} is a normalization factor whose value depends on the l and m of the current basis function:

                                                 K_{l}^{m}=\sqrt{\frac{2l+1}{4\pi}*\frac{(l-|m|)!}{(l+|m|)!}}

        With a simple transformation, the spherical harmonic basis functions can be written in real form, split according to the sign of m:

                                Y_{l}^{m}(\theta,\phi)=\left\{\begin{matrix} \sqrt{2}K_{l}^{m}cos(m\phi)P_{l}^{m}(cos\theta) & m>0 \\ \sqrt{2}K_{l}^{m}sin(-m\phi)P_{l}^{-m}(cos\theta) & m<0 \\ K_{l}^{0}P_{l}^{0}(cos\theta) & m=0 \end{matrix}\right.

        How to derive the spherical harmonic basis function for a given l and m is not the focus of our use of it, because their forms are fixed: the value depends only on the input θ and φ. The explicit expression of each basis function can be looked up, for example in the table of real spherical harmonics on Wikipedia (note that the default there is a right-handed coordinate system), and any basis function can be evaluated by plugging in the input. The spherical harmonic basis functions are usually visualized with images like the following:

        In such images, green is a positive value, red a negative one, and the farther a lobe extends from the center of the sphere, the larger the absolute value. In other words, where a basis function bulges out, it has value in that direction on the sphere, and the amount of bulge represents its magnitude. It is also clear that as the band increases, the distribution of the spherical harmonics becomes more and more complex and the directions more varied.

        For example, the band-0 spherical harmonic basis function is a solid-color sphere (the plain green ball in the band-0 row), meaning its value is the same in every direction; indeed its actual value is the constant \frac{1}{2}\sqrt{\frac{1}{\pi}}. For band one, band two and beyond, the several basis functions within each band take different values in different directions.
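
        As a quick worked example (dropping the Condon-Shortley sign, as graphics code usually does): for l=1, m=1 we have K_{1}^{1}=\sqrt{\frac{3}{4\pi}*\frac{0!}{2!}}=\sqrt{\frac{3}{8\pi}} and P_{1}^{1}(cos\theta)=sin\theta, so

                                  Y_{1}^{1}(\theta,\phi)=\sqrt{2}K_{1}^{1}cos(\phi)sin\theta=\sqrt{\frac{3}{4\pi}}sin\theta cos\phi=\sqrt{\frac{3}{4\pi}}x

        which is exactly the Mathf.Sqrt(3.0f / (4.0f * Mathf.PI)) * x term in the code below.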

        Following the expressions for the spherical harmonic basis functions above, we can write out the first four bands, 16 basis functions in total (the degree field selects how many are evaluated):

    // Evaluates the first `degree` real SH basis functions (up to 16, i.e. the first four bands)
    // for the direction `pos`, using the left-handed, Y-up convention used in this article.
    List<double> BasisY(Vector3 pos)
    {
        List<double> Y = new List<double>(degree);
        Vector3 normal = Vector3.Normalize(pos);
        float x = normal.x;
        float y = normal.y;
        float z = normal.z;

        if (degree >= 1)
        {
            // Band 0
            Y.Add(1.0f / 2.0f * Mathf.Sqrt(1.0f / Mathf.PI));
        }
        if (degree >= 4)
        {
            // Band 1
            Y.Add(Mathf.Sqrt(3.0f / (4.0f * Mathf.PI)) * z);
            Y.Add(Mathf.Sqrt(3.0f / (4.0f * Mathf.PI)) * y);
            Y.Add(Mathf.Sqrt(3.0f / (4.0f * Mathf.PI)) * x);
        }
        if (degree >= 9)
        {
            // Band 2
            Y.Add(1.0f / 2.0f * Mathf.Sqrt(15.0f / Mathf.PI) * x * z);
            Y.Add(1.0f / 2.0f * Mathf.Sqrt(15.0f / Mathf.PI) * z * y);
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(5.0f / Mathf.PI) * (-x * x - z * z + 2 * y * y));
            Y.Add(1.0f / 2.0f * Mathf.Sqrt(15.0f / Mathf.PI) * y * x);
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(15.0f / Mathf.PI) * (x * x - z * z));
        }
        if (degree >= 16)
        {
            // Band 3
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(35.0f / (2.0f * Mathf.PI)) * (3 * x * x - z * z) * z);
            Y.Add(1.0f / 2.0f * Mathf.Sqrt(105.0f / Mathf.PI) * x * z * y);
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(21.0f / (2.0f * Mathf.PI)) * z * (4 * y * y - x * x - z * z));
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(7.0f / Mathf.PI) * y * (2 * y * y - 3 * x * x - 3 * z * z));
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(21.0f / (2.0f * Mathf.PI)) * x * (4 * y * y - x * x - z * z));
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(105.0f / Mathf.PI) * (x * x - z * z) * y);
            Y.Add(1.0f / 4.0f * Mathf.Sqrt(35.0f / (2 * Mathf.PI)) * (x * x - 3 * z * z) * x);
        }
        return Y;
    }

        ​ ​ ​ Moreover, spherical harmonic basis functions have two very important properties, which we will apply below:

  • Rotation invariance: the spherical harmonic basis is defined on the sphere, so any rotation of a basis function can be expressed as a linear combination of basis functions of the same band. Consequently, rotating the lighting is equivalent to rotating the spherical harmonic basis, which prevents tearing, aliasing and other artifacts when the lighting in the scene shifts; this is one of the main reasons spherical harmonics were introduced into real-time rendering. For example, for a vector s with spherical harmonic representation f(s), rotating the input by a rotation Q has the same effect as rotating the function: f(Q(s)) = Q(f(s)).
  • Orthonormality: the spherical harmonics are orthogonal to each other. The integral of the product of two different basis functions is 0; only when the two basis functions are the same is the integral of the product 1. (There is a nice analogy in Lingqi Yan's course: think of the basis functions as the XYZ axes of 3D space, where each axis only has a non-zero projection onto itself, and projects to 0 onto the other, perpendicular axes.) This can be written as the following formula:

                                \int_{\Omega}Y_{l}^{m}(\omega)\,Y_{k}^{n}(\omega)\,d\omega=\left\{\begin{matrix} 1 & m=n\ \text{and}\ l=k\\ 0 & m\neq n\ \text{or}\ l\neq k \end{matrix}\right.
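
        A quick way to convince yourself of this property numerically is to Monte Carlo the product of two basis functions over the sphere. A minimal sketch, assuming the BasisY method and degree field from the code above (these names are from this article, not from Unity):

    // Monte Carlo check of orthonormality: averages 4π * Y_a(ω) * Y_b(ω) over random directions.
    // Should return roughly 1 when a == b and roughly 0 otherwise.
    float OrthonormalityCheck(int a, int b, int sampleCount)
    {
        double sum = 0.0;
        for (int s = 0; s < sampleCount; s++)
        {
            Vector3 dir = UnityEngine.Random.onUnitSphere;   // uniform direction on the unit sphere
            List<double> Y = BasisY(dir);
            sum += Y[a] * Y[b];
        }
        // Average value times the total solid angle 4π of the sphere
        return (float)(sum / sampleCount * 4.0 * Mathf.PI);
    }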

        The spherical harmonic basis can also be indexed one-dimensionally, simply as a change of notation: Y_{l}^{m}(\theta,\phi)=Y_{i}(\theta,\phi) with i=(l+1)*l+m. In what follows we use the form Y_{i} to write the basis functions more compactly. For any function F(x) (the function we want to approximate with basis functions; its input can be θ and φ, or the corresponding Cartesian direction x), N basis functions B_i multiplied by their weight coefficients c_i give:

                                          f(\theta,\phi)=\sum_{l=0}^{\infty } \sum _{m=-l}^{l }c_{l}^{m}*Y_{l}^{m}(\theta,\phi) =\sum_{i=1}^{n^{2} } c_{i}*Y_{i}(\theta,\phi)
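
        For bookkeeping, the (l, m) pair, the flat index i and the number of basis functions in the first n bands can be converted with two trivial helpers (illustrative only, not part of the original project):

    // Flat index of Y_l^m when the basis functions are stored band by band: i = l(l+1) + m
    int FlatIndex(int l, int m) { return l * (l + 1) + m; }

    // Number of basis functions contained in the first n bands (l = 0 .. n-1): n^2
    int BasisCount(int n) { return n * n; }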

        The more bands used, the closer the approximation gets to the original function F(x). Evaluating the basis functions themselves is just fixed per-band logic; the bulk of the work is computing the weight coefficient of each basis function. The weight coefficient c_i relates to the basis functions and the function f to be reconstructed as:

                                                        c_{i}=\int_{\Omega } f(\omega)*Y_{i}(\omega)\,d\omega

        The process of turning an arbitrary function f(x) into a linear combination of basis functions and weight coefficients is called Projection; the reverse process of multiplying the basis functions by their coefficients to approximate the original function is called Reconstruction. From the relation above, the weight coefficient c_i of the i-th basis function B_i equals the integral over the sphere of the original function multiplied by that basis function. This is also one of the key points when we use spherical harmonics to compute the diffuse IrradianceMap.

Precomputed Radiance Transfer (PRT) based on spherical harmonics

        Assume a point on an object in the scene has no self-emission and receives environment lighting from all directions. The received radiance is the integral of the incident light, the BRDF, the visibility at the point and the cosine term cosθ:

                                               L_{o}=\int_{\Omega}^{} L_{i}(\omega _{i}) V(\omega _{i})f_{r}(\omega _{i},\omega _{o}) cos\theta d\omega _{i}

        Here V(\omega _{i}) is the visibility of the current point in direction ωi. For concave parts of an object this function is needed to account for self-occlusion, while for convex parts it can be omitted. In PRT, the spherical integral of Lo is split into two parts: one is Li(ωi), called the Lighting term; the other is the product of V, fr and cosθ, collectively called the light transport term (LightTransport). We first deal with the Lighting term and express it with spherical harmonic basis functions; Li here plays the role of the arbitrary function f(x) above, so the lighting arriving from direction i can obviously be written as:

                                                             ​​​L_{i}\approx \sum _{i}l_{i}*B_{i}

        If we assume that only the lighting in the scene changes (the light sources or environment lighting) while the scene itself (object materials and positions, camera position and orientation) stays fixed, the LightTransport term can obviously be precomputed in advance. Since the diffuse radiance of a point spreads evenly over the hemisphere and does not depend on the view direction ViewDir, the diffuse BRDF can be treated as a constant; likewise the basis coefficients l_{i} of the Lighting term are constants. Pulling both out of the integral, the formula becomes:

                                         L_{o}=fr_{diffuse}\sum_{i}l_{i} \int_{\Omega}^{} B_i(\omega _{i}) V(\omega _{i}) cos\theta d\omega _{i}

        Now only the lighting basis functions Bi, the visibility V and a cosθ remain inside the integral. We treat V·cosθ, the remaining LightTransport part, as a function in its own right, call it f(x), and expand it in basis functions in the same way:

                                                f(x)=V(\omega_{i})cos\theta_{i}=\sum _{j}c_{j}*B_{j}(x)

        Substituting this combination of basis functions and coefficients into the integral of Lo, the coefficients can again be pulled out, which gives:

                                                L_{o}=\sum_{i=0}l_{i}\sum_{j=0}c_{j}\int_{\Omega}B_{i}(\omega)B_{j}(\omega)d\omega

        The remaining integral is exactly the setting of the orthonormality property above: the integral over the sphere of the product of two spherical harmonic basis functions is 0 unless the two have the same band and order, in which case it is 1. Therefore the outgoing radiance Lo reduces to the accumulated products of the Lighting coefficients l_{i} and the LightTransport coefficients c_{i}:

                                                    L_{o_{diffuse}}\approx fr_{diffuse}*\sum_{i}l_{i}*c_{i}

        For each pixel of the object we only need to store the vector of coefficients c_{i} (called the transfer vector in PRT); at runtime it is multiplied and accumulated with the Lighting coefficients l_{i} and then multiplied by the diffuse BRDF constant to obtain the diffuse environment lighting. Of course, to obtain these two sets of coefficients an integral still has to be computed for each, namely:

                           l_{i}=\int_{\Omega}L(\omega)\,Y_{i}(\omega)\,d\omega \qquad c_{i}=\int_{\Omega}V(\omega)\,cos\theta\,Y_{i}(\omega)\,d\omega

       The Lighting integral can be computed by sampling the CubeMap directly. In the LightTransport integral, θ is the angle between the surface normal n and the direction ω, so computing the LightTransport coefficients needs two loops: the outer one iterates over surface normals, and for each normal the inner one iterates over the spherical directions ω (i.e. the directions of the CubeMap texels).
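
        To make the two integrals concrete, here is a sketch of both steps (the helper names, the sample-direction array and the visibility query are all illustrative assumptions; BasisY is the method listed earlier): the offline part accumulates the transfer vector c_i for one surface point, and the runtime part reconstructs the diffuse result as a dot product with the lighting coefficients l_i.

    // Offline: c_i = ∫ V(ω) cosθ Y_i(ω) dω for one surface point.
    // dirs[s] is a unit sample direction, solidAngles[s] its solid-angle weight (summing to about 4π).
    float[] ComputeTransferVector(Vector3 normal, Vector3[] dirs, float[] solidAngles,
                                  System.Func<Vector3, bool> visible, int degree)
    {
        float[] c = new float[degree];
        for (int s = 0; s < dirs.Length; s++)
        {
            float cosTheta = Vector3.Dot(normal, dirs[s]);
            if (cosTheta <= 0.0f || !visible(dirs[s])) continue;   // upper hemisphere only, apply V(ω)
            List<double> Y = BasisY(dirs[s]);
            for (int i = 0; i < degree; i++)
                c[i] += (float)Y[i] * cosTheta * solidAngles[s];
        }
        return c;
    }

    // Runtime: Lo_diffuse ≈ (albedo / π) * Σ_i l_i * c_i, with lightingCoefs holding the RGB l_i.
    Vector3 ShadeDiffusePRT(Vector3 albedo, Vector3[] lightingCoefs, float[] transfer, int degree)
    {
        Vector3 sum = Vector3.zero;
        for (int i = 0; i < degree; i++)
            sum += transfer[i] * lightingCoefs[i];
        return Vector3.Scale(albedo, sum) / Mathf.PI;
    }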

        For Glossy, the specular BRDF depends not only on the incident direction ωi but also on the outgoing direction ωo, so it cannot be precomputed by storing a single set of coefficients. The outgoing radiance of each point is:

                                                        L_{o_{glossy}}\approx \sum_{i}l_{i}*T_{i}(o)

        Here Ti(o) is the LightTransport of the point. Because this value depends on both the incident and outgoing directions, it is two-dimensional data and has to be represented by a matrix Mp; the BRDF cannot be pulled out of the integral as in the Diffuse case. For example, the entry in row i and column j of Mp represents the linear contribution of the j-th lighting coefficient to the i-th outgoing coefficient. Every point in the scene therefore has to store such a coefficient matrix, which is multiplied by the Lighting coefficients l_{i} at runtime to obtain the final result. The drawbacks of approximating Glossy with spherical harmonics are that storing a matrix per point is expensive, and that spherical harmonics are better suited to low-frequency functions: reproducing high-frequency appearance needs a fairly high order, which further increases the size of the matrix. For these reasons the spherical harmonic basis is not a particularly good way to approximate the Glossy effect.
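
        As a rough sketch of the matrix form just described (illustrative only; the row/column convention follows the text above, and BasisY is the method listed earlier), the runtime step multiplies the per-point transfer matrix by the lighting coefficients and then evaluates the resulting outgoing coefficients in the view direction:

    // Glossy PRT at runtime (sketch): out = Mp * l, then evaluate the outgoing SH in the view direction.
    Vector3 ShadeGlossyPRT(float[,] Mp, Vector3[] lightingCoefs, Vector3 viewDir, int degree)
    {
        List<double> Y = BasisY(viewDir);            // basis functions evaluated for the outgoing direction
        Vector3 lo = Vector3.zero;
        for (int i = 0; i < degree; i++)
        {
            Vector3 outCoef = Vector3.zero;
            for (int j = 0; j < degree; j++)
                outCoef += Mp[i, j] * lightingCoefs[j];   // row i of Mp times the lighting coefficient vector
            lo += (float)Y[i] * outCoef;
        }
        return lo;
    }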

        The above is how Sloan used spherical harmonics to precompute diffuse and glossy environment lighting in the well-known paper "Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments" (PRT for short). In engine real-time rendering we cannot simply multiply two coefficient vectors as in PRT, mainly because the LightTransport term requires the object's surface normals in advance and, for concave surfaces, a visibility function as well, which does not match how we use spherical harmonics in the engine. In the real-time engine only the first step of PRT is carried out: using basis functions to reconstruct, at low frequency, the diffuse pre-integration that would otherwise be computed by convolution, i.e. the relation:

                                                ​​​​​​​        ​​​​​​​        L_{i}\approx \sum _{i}l_{i}*B_{i}​ 

        For a CubeMap, the color on the sphere can be abstracted as a function of the sphere normal: the input of the basis functions is the normal direction (x, y, z) and the original function Li(x) is the CubeMap color in that direction. In other words, each texel of the CubeMap is treated as incident light from one direction, and the whole lighting function is decomposed onto a set of spherical harmonic basis functions.

        What matters here is that most of the energy of the image is concentrated in the 9 spherical harmonic basis functions of the first three bands (l = 0, 1, 2); for real scenes, low-frequency information usually dominates high-frequency information, so low-order basis functions already restore the essential character of the image. In practice the main work is computing the coefficients of these basis functions. Looking back at the projection described above, obtaining the coefficient sequence requires, for each basis function, the integral over the sphere of the original function multiplied by that basis function, i.e.:

                                                c_{i}=\int_{\Omega}L(\omega)\,Y_{i}(\omega)\,d\omega=\sum_{j}L_{j}\,Y_{i}(j)

        And since Li comes from a CubeMap, it is a discrete value constrained by the resolution, so the color of each sampled CubeMap texel has to be multiplied by the solid angle element covered by that texel. For the derivation of the per-texel solid angle of a CubeMap, see the blog linked in the code comment below. The accumulation above can therefore be written in full as:

                                                c_{i}=\sum_{j}L_{j}\,Y_{i}(j)=\sum_{\omega}^{\Omega}Y_{i}(\omega)*texCube(\omega)*d\omega

        That is, traverse every texel of the CubeMap, get its direction, sample the CubeMap color in that direction and multiply it by the texel's solid angle element, evaluate the spherical harmonic basis functions for the same direction, and multiply-accumulate to obtain the coefficients. In this article everything is done in a C# script (mainly because manipulating a CubeMap in a shader is a bit awkward): the six face images of the CubeMap are the input, the coefficients Ci are computed, and then Ci multiplied by the corresponding basis functions is written into six newly created images and saved back into a CubeMap.

        To evaluate the basis functions, the face UVs of the CubeMap hexahedron have to be converted into spherical directions. Assume the cube has side length 2 and the axes of the (left-handed) coordinate system sit at its center. For a texel on a given face, the face's major axis is fixed and the other two coordinates range over [-1,1] (so the UV is first mapped to [0,1] and then to [-1,1]). For example, the X component of every direction on the +X face is 1, and the Z component of every direction on the -Z face is -1. Also note that on each face the UV origin is the lower-left corner (uv=(0,0)) and the upper-right corner is where the UV ends (uv=(1,1)); "lower-left" and "upper-right" here are as seen looking at the face from the center of the coordinate axes.

        Traversing the texels of the six CubeMap faces according to this logic, the face index and UV are converted into a direction with the following code:

    // Convert a face index and a UV in [0,1]^2 into a direction on the cube (left-handed, Y-up).
    Vector3 CubeUV2XYZ(int index, Vector2 uv)
    {
        float u = uv.x * 2.0f - 1.0f;
        float v = uv.y * 2.0f - 1.0f;
        switch (index)
        {
            case 0: return new Vector3( 1.0f,  v, -u); // +x
            case 1: return new Vector3(-1.0f,  v,  u); // -x
            case 2: return new Vector3( u,  1.0f, -v); // +y
            case 3: return new Vector3( u, -1.0f,  v); // -y
            case 4: return new Vector3( u,  v,  1.0f); // +z
            case 5: return new Vector3(-u,  v, -1.0f); // -z
        }
        return Vector3.zero;
    }

         Here index is the face index and uv is the normalized UV on that face, in the range [0,1] (the pixel indices in [0,width] and [0,height] are normalized before the call). With this we can obtain the 3D direction for every texel of every face, and traverse the whole cube to accumulate the basis function coefficients:

    // Projects the six input face textures (targetTexs) onto the first `degree` SH basis functions.
    Vector3[] evaluateCoefs(int width, int height)
    {
        Vector3[] coefs_m = new Vector3[degree];
        for (int t = 0; t < degree; t++)
        {
            coefs_m[t] = Vector3.zero;
        }

        for (int k = 0; k < 6; k++)
        {
            for (int j = 0; j < height; j++)
            {
                for (int i = 0; i < width; i++)
                {
                    float px = (float)i + 0.5f;
                    float py = (float)j + 0.5f;
                    float u = 2.0f * (px / (float)width) - 1.0f;
                    float v = 2.0f * (py / (float)height) - 1.0f;
                    // Map the texel center into (-1, 1)
                    float dx = 1.0f / (float)width;
                    float dy = 1.0f / (float)height;
                    // Half of a texel step in the (-1, 1) range

                    float x0 = u - dx;
                    float y0 = v - dy;
                    float x1 = u + dx;
                    float y1 = v + dy;
                    // The four corners of the texel in face coordinates
                    // For the derivation of the per-texel solid angle, see:
                    // https://www.rorydriscoll.com/2012/01/15/cubemap-texel-solid-angle/

                    float da = surfaceArea(x0, y0) - surfaceArea(x0, y1) - surfaceArea(x1, y0) + surfaceArea(x1, y1);

                    v = 1.0f - (float)j / ((float)height - 1.0f);
                    u = (float)i / ((float)width - 1.0f);

                    Vector3 pos = CubeUV2XYZ(k, new Vector2(u, v));

                    // The V axis of the face textures is flipped, so read the mirrored row
                    Color c = targetTexs[k].GetPixel(i, height - 1 - j);

                    Vector3 targetColor = new Vector3(c.r * da, c.g * da, c.b * da);
                    List<double> Y = BasisY(pos);

                    for (int t = 0; t < degree; t++)
                    {
                        coefs_m[t] += (float)Y[t] * targetColor;
                    }
                }
            }
        }
        return coefs_m;
    }
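
        The surfaceArea helper is not listed above; a version following the per-texel solid angle derivation from the blog linked in the comment would be:

    // Spherical area of the cube-face rectangle spanned from (-1,-1) to (x,y), at distance 1 from the center.
    // The signed sum of four such corner terms, as used above, gives the solid angle of one texel.
    float surfaceArea(float x, float y)
    {
        return Mathf.Atan2(x * y, Mathf.Sqrt(x * x + y * y + 1.0f));
    }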

        Because of how the CubeMap faces are laid out, the V axis of the face images turned out to be flipped, so GetPixel reads the mirrored row height - 1 - j instead of j. For the whole CubeMap there is just one array of 9 spherical harmonic coefficients, since the projection is defined over the entire sphere. Once this coefficient array is obtained, multiplying the basis functions of each texel's direction by the coefficients reconstructs the low-frequency version of the original image:


    // Reconstructs the low-frequency color for one direction: sum of coefficients times basis functions.
    Vector3 Render(Vector3 pos, Vector3[] coefs)
    {
        List<double> Y = BasisY(pos);
        Vector3 pixelCol = Vector3.zero;
        for (int i = 0; i < degree; i++)
        {
            pixelCol += (float)Y[i] * coefs[i];
        }
        return pixelCol;
    }

    // Fills the six face images with the reconstructed low-frequency environment.
    Texture2D[] RenderCubeMap(int width, int height, Texture2D[] imgs, Vector3[] coefs)
    {
        for (int k = 0; k < imgs.Length; k++)
        {
            for (int i = 0; i < width; i++)
            {
                for (int j = 0; j < height; j++)
                {
                    float v = 1.0f - (float)j / (height - 1.0f);
                    float u = (float)i / (width - 1.0f);
                    Vector3 pos = CubeUV2XYZ(k, new Vector2(u, v));

                    Vector3 col = Render(pos, coefs);
                    imgs[k].SetPixel(i, j, new Color(col.x, col.y, col.z, 1.0f));
                }
            }
        }
        return imgs;
    }

         Then the six faces are processed in turn, each one reconstructed from the coefficients and the corresponding basis functions; a wrapper method ties these steps together and saves the six face images into a CubeMap:


    public void spit2Cube()
    {
        degree = 9;
        int width = targetTexs[0].width;
        int height = targetTexs[0].height;
        Texture2D[] imgs = new Texture2D[6];
        for (int i = 0; i < 6; i++)
        {
            imgs[i] = new Texture2D(width, height, TextureFormat.RGBAFloat, true);
        }

        Vector3[] coefs = evaluateCoefs(width, height);
        // Compute the coefficients of the nine SH basis functions
        imgs = RenderCubeMap(width, height, imgs, coefs);
        // Multiply the coefficients by the corresponding basis functions for every texel

        newCube = new Cubemap(512, TextureFormat.ARGB32, true);
        CubemapFace face = CubemapFace.PositiveX;

        for (int i = 0; i < 6; i++)
        {
            newCube.SetPixels(imgs[i].GetPixels(), face);
            face += 1;
            newCube.Apply();
        }
        string fileName = ".../BRDF/SH&Probe/HaromoneySpherical" + x + ".cubemap";
        AssetDatabase.CreateAsset(newCube, fileName);
        x++;
    }

        This simply wraps the whole method into one function and saves the computed result as a CubeMap. The result using the first three bands of spherical harmonics (i.e. 9 basis functions) is shown below: the left image is Unity's official Standard Material with Smooth = 0.33, the right image is the reconstruction using the three-band spherical harmonics:

         Comparing the spherical harmonic result (up to band 2) with the IrradianceMap obtained by the pre-integration method above (middle) and with a Standard Material CubeMap at Smooth = 0.13, the spherical harmonic reconstruction of the IrradianceMap retains somewhat richer detail than the pre-integration result:

        So how does Unity itself use spherical harmonics? In UnityShaderVariables.cginc, the spherical harmonic coefficients are declared as follows:

    // SH lighting environment
    half4 unity_SHAr;
    half4 unity_SHAg;
    half4 unity_SHAb;
    half4 unity_SHBr;
    half4 unity_SHBg;
    half4 unity_SHBb;
    half4 unity_SHC;

        The values stored here can be understood as the nine scene spherical harmonic basis functions already multiplied by their coefficients (but not yet evaluated for a particular normal direction). Seven float4s are defined, 28 numbers in total; for three bands of spherical harmonics there are 9 basis functions, and each basis function has one coefficient per color channel (r, g, b), which gives exactly 27 used values. Taking unity_SHAr as an example, its four components are (all for the red channel):

  • unity_SHAr.x = (c_{1}^{-1}.r)Y_{1}^{-1}
  • unity_SHAr.y = (c_{1}^{0}.r)Y_{1}^{0}
  • unity_SHAr.z = (c_{1}^{1}.r)Y_{1}^{1}
  • unity_SHAr.w = c_{0}^{0}.r

        Since the direction can be substituted into the basis functions at evaluation time, no specific direction is baked in here; these are just the coefficient-times-constant parts of the formulas above. In the Lighting.hlsl file there is the following method for sampling the three bands of spherical harmonics:

This function is defined in Lighting.hlsl:
// Samples SH L0, L1 and L2 terms
half3 SampleSH(half3 normalWS)
{
    // LPPV is not supported in Ligthweight Pipeline
    real4 SHCoefficients[7];
    SHCoefficients[0] = unity_SHAr;
    SHCoefficients[1] = unity_SHAg;
    SHCoefficients[2] = unity_SHAb;
    SHCoefficients[3] = unity_SHBr;
    SHCoefficients[4] = unity_SHBg;
    SHCoefficients[5] = unity_SHBb;
    SHCoefficients[6] = unity_SHC;

    return max(half3(0, 0, 0), SampleSH9(SHCoefficients, normalWS));
}

The following functions are defined in EntityLighting.hlsl:
float3 SampleSH9(float4 SHCoefficients[7], float3 N)
{
    float4 shAr = SHCoefficients[0];
    float4 shAg = SHCoefficients[1];
    float4 shAb = SHCoefficients[2];
    float4 shBr = SHCoefficients[3];
    float4 shBg = SHCoefficients[4];
    float4 shBb = SHCoefficients[5];
    float4 shCr = SHCoefficients[6];

    // Linear + constant polynomial terms
    float3 res = SHEvalLinearL0L1(N, shAr, shAg, shAb);

    // Quadratic polynomials
    res += SHEvalLinearL2(N, shBr, shBg, shBb, shCr);

#ifdef UNITY_COLORSPACE_GAMMA
    res = LinearToSRGB(res);
#endif

    return res;
}

// Ref: "Efficient Evaluation of Irradiance Environment Maps" from ShaderX 2
real3 SHEvalLinearL0L1(real3 N, real4 shAr, real4 shAg, real4 shAb)
{
    real4 vA = real4(N, 1.0);

    real3 x1;
    // Linear (L1) + constant (L0) polynomial terms
    x1.r = dot(shAr, vA);
    x1.g = dot(shAg, vA);
    x1.b = dot(shAb, vA);

    return x1;
}

real3 SHEvalLinearL2(real3 N, real4 shBr, real4 shBg, real4 shBb, real4 shC)
{
    real3 x2;
    // 4 of the quadratic (L2) polynomials
    real4 vB = N.xyzz * N.yzzx;
    x2.r = dot(shBr, vB);
    x2.g = dot(shBg, vB);
    x2.b = dot(shBb, vB);

    // Final (5th) quadratic (L2) polynomial
    real vC = N.x * N.x - N.y * N.y;
    real3 x3 = shC.rgb * vC;

    return x2 + x3;
}

         SampleSH is very simple: it gathers the seven stored spherical harmonic vectors and passes them, together with the input normal N, into SampleSH9. SampleSH9 evaluates the constant and linear (L0, L1) part and the quadratic (L2) part separately (because the L2 part involves products between the components of N), and finally applies a color space conversion depending on whether gamma correction is needed. When we actually want the spherical-harmonic approximation of the IrradianceMap in a shader, we use the ShadeSH9 function from UnityCG.cginc, whose argument is the world-space normal of the current object (as a half4 with w = 1). Its internal logic is:

half3 ShadeSH9 (half4 normal)
{
    // Linear + constant polynomial terms
    half3 res = SHEvalLinearL0L1 (normal);

    // Quadratic polynomials
    res += SHEvalLinearL2 (normal);

#   ifdef UNITY_COLORSPACE_GAMMA
        res = LinearToGammaSpace (res);
#   endif

    return res;
}

References:

Spherical harmonic function part:

https://zhuanlan.zhihu.com/p/452190320

https://blog.csdn.net/qq_33999892/article/details/83862583

Spherical harmonic lighting and PRT study notes (3): Spherical harmonic function - Zhihu

https://en.wikipedia.org/wiki/Associated_Legendre_polynomials

[Paper Reappearance] Spherical Harmonic Lighting: The Gritty Details - Zhihu

[Unity]IBL-Diffuse reflection using precomputed irradiance map - Zhihu

Real-time rendering|Precomputation-Based Rendering: PRT part - Zhihu

GAMES202 - High-Quality Real-Time Rendering (bilibili)

Spherical harmonic illumination - spherical harmonic function - Zhihu

http://www.ppsloan.org/publications/StupidSH36.pdf

Unity-Shader 05 Spherical Harmonic Function and Rendering Path - Zhihu

Basics of Graphics | Spherical Harmonics Lighting - CSDN Blog

IBL part:

This is the key one: LearnOpenGL - Diffuse irradiance

Physically based ambient light rendering 1 - Zhihu

Game engine programming practice (5) - PBR image-based lighting (IBL) implementation - Zhihu

In-depth understanding of PBR/Image-Based Lighting (IBL) - Zhihu

SIGGRAPH 2013 Course: Physically Based Shading in Theory and Practice


Original article: https://blog.csdn.net/qq_38601621/article/details/127026409