Hundred Talents Program (Program)

Chapter 1 Solid foundation

1. Rendering pipeline (assignment)

  • Application stage (CPU)

        1. Prepare basic data

                ① Scene object data (transform data: position, rotation, scale; mesh data: vertex positions, UVs, etc.)

                ② Camera data (position, orientation, near and far clipping planes, orthographic/perspective, viewport aspect ratio/size, etc.)

                ③ Light source and shadow data (light type: directional/point/spotlight, etc.; parameters such as color, position, direction, angle; whether shadows are needed and their parameters)

                ④ Other global data, etc.

        2. Acceleration algorithms and coarse-grained culling

                Collision detection, spatial acceleration structures.

                Occlusion culling: objects that are completely hidden behind opaque objects are culled by testing object positions and occlusion relationships.

        3. Set the rendering state

                Draw settings (different objects use different shaders; eligible objects are drawn with batching),

                Drawing order (rendering order relative to camera distance/RenderQueue/UICanvas, etc.),

                render target (FrameBuffer/RenderTexture),

                Rendering mode (forward rendering/deferred rendering)

        4. Issue draw calls (DrawCall) to send rendering primitives to video memory

                vertex data and other data

  • Geometry Stage (GPU)

        1. Vertex shader (programmable): transforms vertex coordinates with the MVP matrix: model space → world space → camera (view) space → clip space

        2. Tessellation (optional)

        3. Geometry shader (optional)

        4. Clipping (against the CVV / view frustum) and front/back-face culling

        5. Screen mapping

  • Rasterization stage (GPU)

        1. Triangle setup

        2. Triangle traversal

        3. Fragment shader

        4. Per-fragment operations

                Visibility tests and color blending (Alpha Test, Stencil Test, Depth Test, Blending); results are written to the target buffer (FrameBuffer, RenderTexture)

  • Post-processing

 2. Mathematical basis

  • vector operation

        1. Vector: a directed line segment in n-dimensional space that has both magnitude and direction.

        2. Vector operation :

                     ① Multiplication/division of a vector by a scalar: k v = (k v_{x}, k v_{y}, k v_{z}); v / k = (v_{x}/k, v_{y}/k, v_{z}/k) for k ≠ 0

                     ② Addition and subtraction of vectors: a ± b = (a_{x} ± b_{x}, a_{y} ± b_{y}, a_{z} ± b_{z})

                     ③ Modulus of a vector: the modulus |v| = \sqrt{v_{x}^{2}+v_{y}^{2}+v_{z}^{2}} is a scalar, which can be understood as the length of the vector in space.

                     ④ Unit vector: a unit vector is a vector whose modulus is 1. For any given non-zero vector, converting it into a unit vector is called normalization: \hat{v} = v / |v|. A zero vector (one whose components are all 0, e.g. v = (0,0,0)) cannot be normalized.

                    ⑤ vector dot product (dot product):

                        Geometric meaning: if a light source emits light perpendicular to the direction of a, then a · b is the (signed) length of the projection of b onto the direction of a, i.e. the shadow that b casts along a.

                       ·  Properties: (1) The dot product is compatible with scalar multiplication: (k a) · b = a · (k b) = k (a · b)

                                     (2) The dot product distributes over vector addition and subtraction: a · (b + c) = a · b + a · c

                                     (3) The dot product of a vector with itself is the square of its modulus: a · a = |a|^{2}

                    ⑥Vector cross product (cross product):

                       Geometric meaning: the cross product of two vectors yields a new vector that is perpendicular to both of them.
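                       In component form (standard results worth keeping next to the notes above):

                       a · b = a_{x}b_{x} + a_{y}b_{y} + a_{z}b_{z} = |a||b|\cos\theta

                       a × b = (a_{y}b_{z} - a_{z}b_{y}, a_{z}b_{x} - a_{x}b_{z}, a_{x}b_{y} - a_{y}b_{x}), with |a × b| = |a||b|\sin\theta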

  • Matrix Operations

        1. Matrix: a rectangular array of m×n scalars.

        2. Matrix operations

                ① Multiplication of a matrix by a scalar:

                ② Matrix-matrix multiplication: not commutative, but associative.

        3. Special matrices

                ① Square matrix

                ② Identity matrix

                ③ Transposed matrix

                        · Property 1: the transpose of a transpose is the original matrix. (M^{T})^{T}=M

                        · Property 2: the transpose of a matrix product equals the product of the transposes in reverse order. (AB)^{T}=B^{T}A^{T}

                 ④ Inverse matrix:

                         MM^{-1}=M^{-1}M=I (not every square matrix has an inverse)

                        · Property 1: the inverse of the inverse is the original matrix. (M^{-1})^{-1}=M

                        · Property 2: the inverse of the identity matrix is itself. I^{-1}=I

                        · Property 3: the inverse of the transpose is the transpose of the inverse. (M^{T})^{-1}=(M^{-1})^{T}

                        · Property 4: the inverse of a matrix product equals the product of the inverses in reverse order. (AB)^{-1}=B^{-1}A^{-1}

                ⑤ Orthogonal matrix:

                        MM^{T}=M^{T}M=I, i.e. M^{T}=M^{-1}

        4. Geometric meaning of matrices: transformations

                Linear transformations include: scaling, rotation, shearing, mirroring, and orthographic projection.

                ① Translation matrix:

                ② Scaling matrix:

                ③ Rotation matrix:

                ④ Composite transformation:

                                P_{new}=M_{translation}M_{rotation}M_{scale}P_{old}

                             (Since column vectors are used above, the matrices are read from right to left.)
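                For reference, the standard 4×4 homogeneous matrices for the three transforms listed above (column-vector convention, matching the right-to-left composite formula):

                M_{translation} = \begin{pmatrix} 1 & 0 & 0 & t_{x} \\ 0 & 1 & 0 & t_{y} \\ 0 & 0 & 1 & t_{z} \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad M_{scale} = \begin{pmatrix} k_{x} & 0 & 0 & 0 \\ 0 & k_{y} & 0 & 0 \\ 0 & 0 & k_{z} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \quad M_{rotation}(z,\theta) = \begin{pmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}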

        5. Left-multiplication vs. right-multiplication

        The position of the operand directly affects the result. When transforming vertices we usually right-multiply the matrix by a column vector, because Unity's built-in matrices (such as UNITY_MATRIX_MVP) are stored in column-major order. Sometimes left-multiplication is used instead, because it allows an explicit matrix transpose to be skipped.
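        A minimal Cg/HLSL illustration of the two conventions, assuming a float4 vertex position v.vertex; mul(v, M) treats v as a row vector and is equivalent to mul(transpose(M), v), which is why swapping the argument order can save an explicit transpose:

            // Column-vector convention (matches Unity's column-stored built-in matrices): matrix on the left
            float4 posA = mul(UNITY_MATRIX_MVP, v.vertex);

            // Row-vector convention: vector on the left; equivalent to multiplying by the transposed matrix,
            // so posB equals posA without ever storing or computing the transpose separately
            float4 posB = mul(v.vertex, transpose(UNITY_MATRIX_MVP));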

  • MVP matrix operation                

        1. M: model space → world space

        2. V: world space → view space (with the camera at the origin)

        3. P: view space → clip space
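        A minimal vertex-shader sketch of the chain above using Unity's built-in matrices (UnityObjectToClipPos wraps the same M·V·P product used in the shaders later in these notes):

            float4 posWS = mul(unity_ObjectToWorld, v.vertex); // M: model space → world space
            float4 posVS = mul(UNITY_MATRIX_V, posWS);         // V: world space → view space
            float4 posCS = mul(UNITY_MATRIX_P, posVS);         // P: view space → clip space
            // Equivalent helper used throughout the shaders below:
            float4 posCS2 = UnityObjectToClipPos(v.vertex);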

3. Texture Introduction

  • texture overview

        1. The concept of texture: a structured storage form that can be read and written by shaders.

        2. Reasons for using texture: ① Sacrifice geometric details → reduce modeling workload ② Reduce storage space ③ Improve reading speed

  • texture pipeline

Model-space position → projector function → texture coordinates → corresponder function → new texture coordinates → texture sampling → texture value (extended UV) (avoid heavy dependence on texture reads)

        1. Projector function: projects surface coordinates into texture-coordinate space, usually a 2D coordinate (u, v); i.e. it maps 3D positions to 2D UV coordinates. (In practice UVs are mostly generated during modeling and stored in the vertices.)

            

        2. Corresponder function: restricts which part of the texture is used for display, or applies a matrix transform so the texture can be applied flexibly; it also defines how the texture behaves when UVs fall outside the 0-1 range (called Wrapping Mode in OpenGL and Texture Addressing Mode in DirectX).

                ① Wrap (DX) / Repeat (GL), also called tile: discard the integer part and keep only the fractional part.

                ② Mirror: the texture is mirrored each time the coordinate crosses an integer boundary (flipped past 1, flipped back past 2, and so on).

                ③ Clamp: values greater than 1 are clamped to 1 (and values less than 0 to 0).

                ④ Border: values outside the range are rendered with a user-defined border color.
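        A small sketch of what the four addressing modes do to a single coordinate u outside 0-1 (hypothetical helper functions, only to make the rules concrete; in practice the sampler/import settings handle this):

            float AddressWrap(float u)   { return frac(u); }              // keep only the fractional part
            float AddressMirror(float u) { float t = frac(u * 0.5) * 2.0; // period 2: forward, then mirrored
                                           return 1.0 - abs(t - 1.0); }
            float AddressClamp(float u)  { return saturate(u); }          // clamp into [0,1]
            // Border mode cannot be expressed as a remapping of u alone: the sampler simply
            // returns a user-defined border color whenever u falls outside [0,1].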

  •  Texture sampling settings - Filter Mode / texture filtering (assignment)

        1. Filter Mode: filtering settings. When the on-screen footprint of the texture is about the same size as the texture's pixels, the image is reproduced almost exactly; when the footprint is larger, magnification occurs, and when it is smaller, minification occurs.

        2. Magnification:

               ① Nearest neighbor: use only the texel whose center is closest to the sample position; the result looks blocky (pixelated).

                ② Bilinear interpolation: for each pixel, fetch the four surrounding texels and blend them; the result is blurrier but far less blocky.

                ③ Quilez smooth-curve interpolation: interpolate the 2×2 texel group with a smooth curve; the two common curves are the smoothstep curve and the quintic curve.

                ④ Cubic convolution (bicubic interpolation): relatively expensive; it sacrifices performance to improve quality.

        3. Minification:

                 ① Nearest neighbor/bilinear interpolation: causes loss of detail and flickering (aliasing).

                ② Mipmap: the texture is preprocessed into a chain of levels so that an approximate filtered value can be computed quickly at runtime; each level averages 2×2 adjacent texels of the previous level into one texel, so each level is 1/4 the size of the previous one, down to 1×1. The whole chain costs about 1/3 extra storage.

                Disadvantage: the result can look blurry overall, especially when the number of texels covered by a pixel differs greatly between the u and v directions (e.g. viewing a surface at a grazing angle).

                Solution: anisotropic filtering (ripmaps).
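                Where the "1/3 more storage" figure comes from: each mip level is 1/4 the size of the previous one, so the extra cost is the geometric series

                1/4 + 1/16 + 1/64 + \dots = \frac{1/4}{1 - 1/4} = 1/3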

  •  GPU rendering optimization method (job)

        1. Texture Atlas/Array (lower DrawCall)

        2. Texture compression: reduces CPU-side decompression work, package size and data volume, relieves bandwidth pressure, and uses memory more efficiently.

  • Cubemap Cubemap

         Consists of six square textures, one for each face of the cube. A cubemap is sampled with a 3D texture coordinate: a direction vector pointing outward from the center of the cube.
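        A minimal Cg/HLSL sketch of sampling a cubemap with a direction vector, here the reflected view direction (assumes a samplerCUBE property named _Cube, which is not part of the shaders in these notes):

            samplerCUBE _Cube;

            fixed4 SampleEnvironment(float3 worldNormal, float3 worldViewDir)
            {
                // The lookup coordinate is a direction from the cube's center; its length does not matter
                float3 reflectDir = reflect(-worldViewDir, worldNormal);
                return texCUBE(_Cube, reflectDir);
            }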

  •  Bump Mapping

        Instead of using textures to change the color component in the lighting equation, textures are used to modify surface normals. The geometric normals of the surface remain the same, only the normals used in the lighting equations are modified.

  • Displacement Mapping

       Actually moves the positions of the vertices (unlike bump mapping, which only changes shading).

Supplement: https://blog.csdn.net/zheliku/article/details/124464638

Chapter 2 Lighting Basics

1. Color space

  • color sender

        1. Overview of light :

                ①Light source (an object that produces light)

                ② Light (essentially a stream of photons in a specific frequency band); visible-light wavelengths: roughly 400 nm (toward ultraviolet) to 700 nm (toward infrared)

        2. Spectrophotometer : A scientific instrument that decomposes light with complex components into spectral lines.

                Commonly used wavelength ranges: (1) 200~380 nm, the ultraviolet region; (2) 380~780 nm, the visible region (the one we work with); (3) 2.5~25 μm (4000 cm⁻¹ to 400 cm⁻¹ in wavenumbers), the infrared region.

        3. Light propagation : direct light, refracted light, reflected light, ray tracing

  • light receiver

        1. Relative brightness perception

        2. Human-eye HDR: the human eye can distinguish different layers of very bright clouds and can also distinguish different objects in shadow, but it cannot do both at the same time.

        3. Distribution of photoreceptor cells in the human eye

  • Definition of Color Spaces (Assignment)

        1. Color gamut (the coordinates of the three primary colors, which can form a triangle)

        2. Gamma (how to segment the triangle)

        3. White point (center of gamut triangle)

  • Common color models and color spaces

        1. Color model: A method of describing (arranging) colors using certain rules. Example: RGB, CMYK, LAB

        2. Color space: At least three indicators need to be met: color gamut, white point, and Gamma. Example: CIE XYZ, Adobe RGB, sRGB, etc.

2. Model and material basis 

  • graphics rendering pipeline

The realization principle of the model: point → line → surface → model        

  • UV  

        1. UVs are unwrapped in the modeling software and laid out in a two-dimensional coordinate system, with U in the horizontal direction and V in the vertical direction, in the range (0-1). The unwrapped UVs are then used to paint textures in Substance Painter (SP).

        2. Information contained in the model (obj): ① Vertex coordinate data (XYZ coordinates of a single vertex in the model space) ② Texture coordinates (U in the horizontal direction and V in the vertical direction) ③ Vertex normal ④ Vertex color (RGBA channel color information of a single vertex)

  • Material Basics

        1. Diffuse reflection: Diffuse reflection is the easiest model to simulate. The simplest Lambert model assumes that light is reflected uniformly.

        2. Specular reflection : Reflect the incident light according to the surface normal, and only have energy in the reflection direction, and the energy in other directions is 0.

        3. Refraction : For some substances, in addition to reflection, part of the light will be refracted into the object according to the refractive index of the object, and the amount of reflected and refracted energy is determined by Fresnel's law.

        4. Rough specular reflection : The normal deviation is small, and the reflection is still concentrated in one area, forming a frosted texture.

        5. Rough mirror refraction

        6. Multi-layer material : There are transparent substances attached to the surface of the material.

        7. Subsurface scattering : Translucent objects (jade, candles, milk, skin, etc.) light is reflected multiple times inside the object.

  • The role of model data in rendering

        1. Vertex animation: In the vertex shader, modify the vertex position of the model to make the model move. (Vertex animation requires a certain number of vertices for the effect to be more obvious.)

        2. Texture animation: In the fragment shader, modify the UV information of the model so that when the texture is sampled, the UV will be displaced to produce a motion effect.

        3. Vertex color: can be used during rendering to control or mask colors per region.

        4. Vertex normal and surface normal: the storage methods are different. Vertex normal: each vertex has a normal, and the interpolation results are different; surface normal: when smoothing is not used, three vertices share one normal, and the interpolation results are the same.

  • expand

  • Operation

1. The role of vertex color:

 Source: Other functions of vertex color - Zhihu (zhihu.com)

2. The effect of the model smoothing group on the normal:

        ① First figure out what the smooth group is

  • No real smooth faces, all faces are triangles
  • The meaning of the smoothing group : the figure below shows the brightness of the surface, which is purely an analogy, not an exact number. The transition between the two surfaces is the average of the brightness sum of the two surfaces. The smoothing group processes the lighting information between the surfaces and improves their brightness and saturation.

    • If one smoothing group of two faces is 1 and the other is 2, no calculation is performed
    • If their smoothing groups are all 1, lighting calculations will be performed to produce a smooth effect that affects the final rendering.

  • Automatic smoothing: all adjacent faces whose angle is less than 45 degrees are smoothed
  • Smoothing group : achieve a smooth effect by processing the lighting information between surfaces, which is used to set the smooth display of edge lines.
  • Mesh smoothing and turbo smoothing : express curvature by adding faces and dividing them into finer details
  • When we say the wiring/topology is reasonable, it largely means keeping the two triangular faces that make up each quad consistent.

②Influence of smoothing group on normal

normal

  • The point of baking normals is to use an (RGB) image to store the high-poly model's surface normal directions over the surface of the low-poly model. A low-poly model with this normal map applied visually shows bumpy, detailed shading, so it looks like the high-poly model. Normal Mapping: a normal map is essentially just an image, but one with a special purpose.
  • Without a smoothing group, the baked normal map shows hard transitions from face to face. In general, at least one smoothing group should be assigned.
  • Refer to the example in the link: a normal map with strong gradient colors will show black edges in Substance Painter (a smoothing-group problem).

Smoothing group (soft and hard edges) and UV influence on normals

  • For models joined by smoothing groups, a normal map with large gradient colors makes the model's normals look very strange (dark and bright patches of light and shadow on a flat surface). If you see this kind of gradient on your model, there is almost certainly a smoothing-group problem.
  • The two models in the middle show seams to varying degrees (the third model has a very pronounced seam, the second a weaker one). If the smoothing groups and UVs are consistently either both connected or both split, there will be no obvious seams. When you run into seam problems, first check whether the model's smoothing groups and UV splits are consistent.

Source:  Model and Material Basics - Zhihu (zhihu.com)

 3. Introduction to common functions of HLSL

  • basic math operations

  • Exponential and logarithmic functions

  • data range class

  • type judgment class

  • Trigonometric and hyperbolic functions

  •  Vector and Matrix classes

  •  Light calculation class

  •  texture lookup class

 

 

  • Operation 

The actual use test of ddx and ddy:

1. Simple edge highlight application:

Shader "Unlit/27_ddxddy"
{
    Properties
    {
        [KeywordEnum(IncreaseEdgeAdj, BrightEdgeAdj)] _EADJ("Edge Adj type", Float) = 0
        _MainTex ("Texture", 2D) = "white" {}
        _Intensity("Intensity",range(0,20)) = 2
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Cull Off
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile _EADJ_INCREASEEDGEADJ _EADJ_BRIGHTEDGEADJ

            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            float _Intensity;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                return o;
            }

            fixed4 frag (v2f i , float f : VFACE) : SV_Target // The fragment shader can also receive special semantics such as VFACE: if the rendered surface faces the camera the value is positive (+1); if it faces away, it is negative (-1).
            {
                fixed a = 1;
                if ( f < 0 ) a = 0.5;
                fixed3 col = tex2D(_MainTex, i.uv).rgb;
                #if _EADJ_INCREASEEDGEADJ // Edge adjustment: increase the difference across edges
                col += (ddx(col)+ddy(col))*_Intensity;// Amplify the brightness difference of edge pixels to make edges stand out
                #else // Edge adjustment: brighten edges
                col += (abs(ddx(col))+abs(ddy(col)))*_Intensity;// Equivalent to fwidth(col); edges become brighter
                // fwidth func in HLSL: https://docs.microsoft.com/zh-cn/windows/desktop/direct3dhlsl/dx-graphics-hlsl-fwidth
                #endif
                return fixed4(col,a);
            }
            ENDCG
        }
    }
}

Increase: 

Bright:

2. Height generating normal application(?):

Shader "Unlit/28_DDX&HeightMap"
{
    Properties
    {
        [KeywordEnum(LMRTMB,CMRCML,NAVDDXPOSDDY)] _S ("Sample Type", Float) = 0
        _Color("Main Color", Color) = (1,1,1,1)
        _MainTex ("Texture", 2D) = "white" {}
        _HightMap("Hight Map", 2D) = "white" {}
        _Intensity("Intensity", Range(0, 20)) = 5
        _SpecuarlIntensity("Specular Intensity", Range(0, 100)) = 80
        _SpecuarlStrengthen("Specular Strengthen", Range(0, 1)) = 0.5
    }
    SubShader
    {
        Tags { "RenderType"="Transparent" }
        LOD 100

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile _S_LMRTMB _S_CMRCML _S_NAVDDXPOSDDY

            #include "UnityCG.cginc"
            #include "Lighting.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
                float3 normal : NORMAL;
                float4 tangent : TANGENT;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
                float3 lightDir : TEXCOORD1;
                float3 viewDir : TEXCOORD2;
                float3 normal : TEXCOORD3;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;
            fixed4 _Color;
            sampler2D _HightMap;
            float4 _HightMap_TexelSize; // 1/w, 1/h, w, h
            float _Intensity;
            float _SpecuarlIntensity;
            float _SpecuarlStrengthen;

            inline float3x3 getTBN (inout float3 normal, float4 tangent) {
				float3 wNormal = UnityObjectToWorldNormal(normal);		   
				float3 wTangent = UnityObjectToWorldDir(tangent.xyz);		
				float3 wBitangent = normalize(cross(wNormal, wTangent));	
                normal = wNormal;
				return float3x3(wTangent, wBitangent, wNormal);			    
            }
            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                float3x3 tbn = getTBN(v.normal, v.tangent);
                o.lightDir = mul(tbn, normalize(_WorldSpaceLightPos0.xyz));
                o.viewDir = mul(tbn, normalize(_WorldSpaceCameraPos.xyz - mul(unity_ObjectToWorld, v.vertex).xyz)); 
                o.normal = mul(tbn, v.normal);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // sample the texture
                fixed4 c = tex2D(_MainTex, i.uv);
                #if _S_LMRTMB
                //Reference: https://blog.csdn.net/puppet_master/article/details/53591167
                float offsetU = tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(-1, 0)).r - tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(1, 0)).r;
                float offsetV = tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(0, 1)).r - tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(0, -1)).r;
                #elif _S_CMRCML
                fixed cr = tex2D(_HightMap, i.uv).r;
                float offsetU = (cr - tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(1, 0)).r) * _Intensity;
                float offsetV = (cr - tex2D(_HightMap, i.uv + _HightMap_TexelSize * float2(0, -1)).r) * _Intensity;
                #else
                fixed h = tex2D(_HightMap, i.uv).r;
                float offsetU = -ddx(h); // ddx gives (right pixel - current pixel), i.e. the slope along U; we negate it because we want current - right, and ddx is fixed as right - current
                float offsetV = ddy(h); // ddy gives (next-row pixel - current pixel), i.e. the slope along V; no negation needed, the slope direction is already what we want
                #endif // end _S_LMRTMB
                 // Perturb the tangent-space normal
                float3 n = normalize(i.normal.xyz + float3(offsetU, offsetV, 0) * _Intensity);
                // To test the normal, diffuse and specular lighting terms are added
                // diffuse
                float ldn = dot(i.lightDir, n) * 0.5 + 0.5;
                fixed3 diffuse = _LightColor0.rgb * _Color * ldn * c.rgb * tex2D(_MainTex, i.uv);
                // specular
                float3 halfAngle = normalize(i.lightDir + i.viewDir);
                float3 hdn = max(0, dot(halfAngle, n));
                fixed3 specular = _LightColor0.rgb * _Color * pow(hdn, 100 - _SpecuarlIntensity) * _SpecuarlStrengthen;
                fixed3 combined = diffuse + specular;
                return fixed4(combined, 1);
            }
            ENDCG
        }
    }
}

         There are three variants: the first two are essentially the same, differing only in which texels they sample; the last one uses the partial-derivative functions ddx and ddy.

 The difference between the gray values of adjacent pixels in the height map is used as the height slope.

3. Flat shading application:

Source:  Unity Shader - ddx/ddy partial derivative function test, implementation: sharpening, height map, Flat shading application, height generation normal_Jave.Lin's blog-CSDN blog_unity ddx ddy

4. Traditional empirical lighting model

  • lighting model

        1. Concept: a lighting model is used to compute the light intensity (color value) at a point on an object. By their theoretical basis, lighting models fall into two categories: those based on physical theory and empirical models.

        2. Physically based lighting models (PBR): built on physical measurement and statistical methods; the results are realistic but the computation is complex.

        3. Empirical models: simulations of lighting. Practice has produced simplified methods that approximate the real lighting calculation while achieving the desired look.

  • local lighting model

        1. Concept: only the direct illumination is computed, i.e. light emitted from the light source that hits the object's surface and is reflected once toward the camera.

        2. Local lighting = diffuse + specular + ambient + emissive (self-illumination)

        ① Diffuse reflection

       Definition: in the lighting model, when light from the light source hits the model's surface it is reflected uniformly in all directions; during diffuse reflection the light is partly absorbed and scattered, which changes its color and direction.

        Calculation: diffuse lighting follows Lambert's law; the reflected intensity is proportional to the cosine of the angle between the normal and the light direction: dot(n, l) = |n| |l| \cos\theta = 1 \cdot 1 \cdot \cos\theta = \cos\theta (for unit vectors).

        ② Specular reflection (specular reflection)

        Definition: When the light reaches the surface of the object and is reflected, the specular reflection can be observed when the line of sight is near the reflected light. (Light intensity remains the same, direction changes)

         ③Ambient light

         ④ Self-illumination

        The light emitted by the object itself is usually added to the lighting model as a separate item. Generally, a luminous map is used to describe the self-illumination of an object.

  • Classic Lighting Model

        1. Lambert model

        2. Phong model

        3. Blinn-Phong model

        *Difference between Phong model and Blinn-Phong model

        4. Gouraud model

       

        5. Flat model

  • Summarize

  • Operation

1. The role of the concept of energy conservation in the basic lighting model.

The energy of the outgoing ray does not exceed the energy of the incoming ray.

2. Write a complete lighting model based on the concept of energy conservation. Contains ambient lighting.

Zhuang understands technical art entry notes_Xianying's Blog-CSDN Blog
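A minimal sketch of one way to approach assignment 2, not taken from the linked note: a Blinn-Phong style local model in which the diffuse term is scaled by (1 - specular strength), so diffuse + specular never returns more energy than arrives, plus an ambient term. All parameter names here are illustrative.

            // n, l, v are unit vectors (normal, light direction, view direction); specStrength is in [0,1]
            fixed3 EnergyConservingLighting(float3 n, float3 l, float3 v,
                                            fixed3 albedo, fixed3 lightColor, fixed3 ambient,
                                            float specStrength, float gloss)
            {
                float3 h = normalize(l + v);
                float ndl = saturate(dot(n, l));
                float ndh = saturate(dot(n, h));

                fixed3 diffuse  = (1.0 - specStrength) * albedo * lightColor * ndl; // diffuse gives up what specular takes
                fixed3 specular = specStrength * lightColor * pow(ndh, gloss) * ndl;
                return ambient * albedo + diffuse + specular;                        // ambient approximates indirect light
            }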

5. Bump Mapping

  • Introduction to Bump Mapping

        1. Purpose : Use a texture to modify the normal of the model surface to provide more details for the model. (does not change the vertex position of the model)

        2. Classification :

        ①Height mapping : Use a height map to simulate surface displacement (displacement), and get a modified normal value.

        ②Normal mapping : Use a normal map to directly store surface normals.

  • Normal Mapping Normal Mapping

        1. Principle : Use a texture that stores the normal information of the local surface of the object. When calculating the illumination, the program will read the normal map and perform illumination calculation.

        2. TBN matrix:

         3. The advantages of storing normal information in tangent space :

        ① High degree of freedom (in tangent space is the relative normal information, which is the disturbance to the normal of the current object)

        ② You can realize uv animation by moving uv coordinates

        ③Because the information recorded by the texture is the perturbation of the normal of the object, the texture map can be shared

        ④ In tangent space the z component of the stored normal is always positive (in model space it can be negative), so only x and y (along the tangent and bitangent) need to be stored and z can be reconstructed

      4. Compression format of normal map in unity :

        On non-mobile platforms, Unity converts the normal map to the DXT5nm format (only two effective channels, G and A, which saves space); on mobile platforms, Unity keeps the traditional RGB channels.

Decode normal map:

            v2f vert (a2v v)
            {
                v2f o;
                o.posCS = UnityObjectToClipPos(v.vertex);
                o.posWS = mul(unity_ObjectToWorld, v.vertex);
                o.normalDirWS = normalize(UnityObjectToWorldNormal(v.normal));
                o.tangentDirWS = normalize(mul(unity_ObjectToWorld, v.tangent).xyz);  // tangent
                o.bitangentDirWS = normalize(cross(o.normalDirWS,o.tangentDirWS) * v.tangent.w);  // bitangent
                o.uv0 = v.texcoords;
            
                return o;
            }
            
            float4 frag (v2f i) : SV_Target
            {
                // Prepare vectors
                half3x3 TBN = half3x3(i.tangentDirWS, i.bitangentDirWS, i.normalDirWS);

                half3 viewDirWS = normalize(_WorldSpaceCameraPos.xyz - i.posWS.xyz);
                half3 viewDirTS = normalize(mul(TBN, viewDirWS));

                half3 normalDirTS = UnpackNormal(tex2D(_NormalTex, i.uv0)).xyz;
                half3 normalDirWS = normalize(lerp(i.normalDirWS, mul(normalDirTS, TBN), _NormalScale));
                // (lighting calculation and return value omitted in this excerpt; see the full shader later in this chapter)
            }
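For the DXT5nm case mentioned above, this is roughly what UnpackNormal does internally (UnityCG.cginc already handles both encodings, so this sketch only illustrates the G/A-channel layout):

            half3 UnpackNormalDXT5nmSketch(half4 packednormal)
            {
                half3 n;
                n.xy = packednormal.ag * 2.0 - 1.0;           // x stored in A, y in G, remapped from [0,1] to [-1,1]
                n.z  = sqrt(saturate(1.0 - dot(n.xy, n.xy))); // z reconstructed; always positive in tangent space
                return n;
            }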

  • Parallax Mapping Parallax Mapping

        1. Concept :

        A technique that improves the apparent surface detail of a model by shifting its texture coordinates, giving a sense of parallax and occlusion; combined with normal maps it can look convincingly realistic. It requires an extra texture, a height map, whose surface-height information is used to offset the UVs (achieving the same detail with real geometry would require a very large number of triangles).

         2. Principle :

       Transform the view vector v into tangent space; then the x and y components of the computed v' are aligned with the surface's tangent and bitangent, so the technique works however the surface is oriented. Without this, once the surface is rotated arbitrarily it is hard to say which directions the x and y of v' correspond to.

        Dividing viewDir.xy by viewDir.z makes v' larger when the view direction is nearly parallel to the surface: z is then close to 0, so the x and y components of v' become relatively large. Some implementations skip the division because it can look bad at certain angles; that variant is called Parallax Mapping with Offset Limiting.

        The UV offset along the view direction is computed from the sampled depth value: the greater the depth, the farther the sample is offset, and the poorer the accuracy.

Understanding: Implementation of Parallax Mapping and Relief Mapping in Unity - Short Book (jianshu.com)

// Parallax mapping
            float2 ParallaxMapping(float2 texcoords, float3 viewDirTS)
            {
                // Sample the height map
                float height = tex2D(_HeightTex, texcoords).r;

                // The closer the view direction is to the normal, the smaller the UV offset
                float2 offuv = viewDirTS.xy / viewDirTS.z * height * _HeightScale;

                return texcoords - offuv;
            }

  • Steep Parallax Mapping

         A layering scheme: with many layers the performance overhead is high (so the number of sampling layers is chosen from the angle between the view direction v and the normal n); with too few layers, stair-stepping artifacts become obvious.

        Starting from point A, compare the depth of the view ray at the current layer with the depth sampled from the height map; while the ray is still above the surface, step to the next sample point. In effect we search for the UV of the intersection between the view ray and the surface, and use it to replace point A's original UV.

// Steep parallax mapping
            float2 SPM(float2 texCoords, float3 viewDirTS)
            {
                // Number of height layers
                /* Trick: the layer count is decided by the angle between the view direction and the normal.
                The closer the view direction is to the normal, the smaller the sampling offset needs to be, so fewer layers suffice; otherwise more layers are needed. */
                float minLayers = 20;
                float maxLayers = 100;
                float numLayers = lerp(maxLayers, minLayers, abs(dot(float3(0.0,0.0,1.0), viewDirTS)));
                // Height of each layer
                float layerHeight = 1.0 / numLayers;
                // Current layer height
                float currentLayerHeight = 0.0;
                // Offset along the view direction
                float2 offsetuv = viewDirTS.xy / viewDirTS.z * _HeightScale;
                // UV offset per layer
                float2 deltaTexCoords = offsetuv / numLayers;
                // Current UV
                float2 currentTexcoords = texCoords;
                // Height-map value at the current UV
                float currentHeightMapValue = tex2D(_HeightTex, currentTexcoords).r;

                while(currentLayerHeight < currentHeightMapValue)
                {
                    // Step the UV by one layer
                    currentTexcoords += deltaTexCoords;
                    // Sample the height map at the new UV
                    currentHeightMapValue = tex2Dlod(_HeightTex, float4(currentTexcoords, 0, 0)).r;
                    // Advance the current layer height
                    currentLayerHeight += layerHeight;
                }

                return currentTexcoords;
            }

  • Parallax Occlusion Mapping

        Parallax occlusion mapping uses the same algorithm as steep parallax mapping, with an interpolation step added on top: after the steep-parallax search finds the two sample points closest to the intersection, the final UV is interpolated according to the difference between each point's sampled depth and its layer depth.

// Parallax occlusion mapping
                // Previous sample point
                float2 preTexcoords = currentTexcoords - deltaTexCoords;

                // Linear interpolation
                float afterHeight = currentHeightMapValue - currentLayerHeight;
                float beforeHeight = tex2D(_HeightTex, preTexcoords).r - (currentLayerHeight - layerHeight);
                float weight = afterHeight / (afterHeight - beforeHeight);
                float2 finalTexcoords = preTexcoords * weight + currentTexcoords * (1.0 - weight);

                return finalTexcoords;
            

  • Relief Mapping Relief Mapping

        1. Concept:

         2. Advantages:

        With larger UV offsets, parallax mapping can distort. Relief mapping provides a stronger sense of depth more easily, and can also produce self-shadowing and occlusion effects.

float2 ReliefMapping(float2 texCoords, float3 viewDirTS){
                float3 startPoint = float3(texCoords,0);
                float h = tex2D(_HeightTex, texCoords).r;
                viewDirTS.xy *= _HeightScale;
                int linearStep = 40;
                int binarySearch = 8;
                float3 offset = (viewDirTS/viewDirTS.z)/linearStep;
                // Linear search: march along the view ray until it goes below the height-field surface
                for(int index=0;index<linearStep;index++){
                    float depth = 1 - h;
                    if (startPoint.z < depth){
                        startPoint += offset;
                        // Re-sample the height at the marched position (sampling it only once before the loop would break the search)
                        h = tex2Dlod(_HeightTex, float4(startPoint.xy, 0, 0)).r;
                    }
                }
                float3 biOffset = offset;
                // Binary search: refine the intersection found by the linear search
                for (int index=0;index<binarySearch;index++){
                    biOffset = biOffset / 2;
                    float depth = 1 - h;
                    if (startPoint.z < depth){
                        startPoint += biOffset;
                    }else{
                        startPoint -= biOffset;
                    }
                    h = tex2Dlod(_HeightTex, float4(startPoint.xy, 0, 0)).r;
                }
                float2 finalTexCoords = startPoint.xy;

                return finalTexCoords;
            }

  • Total code:

        Reference source:

Implementation of Parallax Mapping and Relief Mapping in Unity- Short Book (jianshu.com)

2.5 Improvement of Bump Map (yuque.com)

Shader "Unlit/29_SnowStone"
{
    Properties
    {
         [KeywordEnum(PM,SPM,CSPM,RM)] _S ("Sample Type", Float) = 0
        _MainTex ("基本颜色贴图", 2D) = "white" {}
        _NormalTex ("法线贴图", 2D) = "bump" {}
        _HeightTex ("高度贴图", 2D) = "white" {}
        _Cubemap ("环境贴图", 2D) = "_Skybox" {}


        _MainCol ("基本色", color) = (0.5,0.5,0.5,1.0)
        _Diffuse ("漫反射颜色", color) = (1,1,1,1)
        _Specular ("高光颜色", color) = (1,1,1,1)

        _Gloss ("光泽度", range(1,255)) = 50
        _HeightScale ("高度图扰动强度", range(0,0.15)) = 0.5
        _NormalScale ("法线贴图强度", range(0,1)) = 1
        
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            Name "StudyLM"
            Tags {
                "LightMode"="ForwardBase"
            }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #pragma multi_compile _S_PM _S_SPM _S_CSPM _S_RM

            #include "UnityCG.cginc"
            #include "AutoLight.cginc"
            #include "Lighting.cginc"
            #pragma multi_compile_fwdbase_fullshadows
            #pragma target 3.0

            struct a2v
            {
                float4 vertex : POSITION;
                float3 normal : NORMAL;
                float4 tangent : TEXCOORD0;
                float2 texcoords : TEXCOORD1;
            };

            struct v2f
            {
                float4 posCS : SV_POSITION;
                float4 posWS : TEXCOORD0;
                float3 normalDirWS : TEXCOORD1;
                float3 tangentDirWS : TEXCOORD2;
                float3 bitangentDirWS : TEXCOORD3;
                float2 uv0 : TEXCOORD4;
            };

            uniform sampler2D _MainTex;     uniform float4 _MainTex_ST;
            uniform sampler2D _NormalTex;   uniform float4 _NormalTex_ST;
            uniform sampler2D _HeightTex;
            uniform sampler2D _Cubemap;

            uniform half4 _MainCol;
            uniform half4 _Diffuse;
            uniform half4 _Specular;

            uniform half _Gloss;
            uniform half _HeightScale;
            uniform half _NormalScale;

            v2f vert (a2v v)
            {
                v2f o;
                o.posCS = UnityObjectToClipPos(v.vertex);
                o.posWS = mul(unity_ObjectToWorld, v.vertex);
                o.normalDirWS = normalize(UnityObjectToWorldNormal(v.normal));
                o.tangentDirWS = normalize(mul(unity_ObjectToWorld, v.tangent).xyz);  // tangent
                o.bitangentDirWS = normalize(cross(o.normalDirWS,o.tangentDirWS) * v.tangent.w);  // bitangent
                o.uv0 = v.texcoords;
            
                return o;
            }

            // Parallax mapping
            float2 ParallaxMapping(float2 texcoords, float3 viewDirTS)
            {
                // Sample the height map
                float height = tex2D(_HeightTex, texcoords).r;

                // The closer the view direction is to the normal, the smaller the UV offset
                float2 offuv = viewDirTS.xy / viewDirTS.z * height * _HeightScale;

                return texcoords - offuv;
            }

            // Steep parallax mapping
            float2 SPM(float2 texCoords, float3 viewDirTS)
            {
                // Number of height layers
                /* Trick: the layer count is decided by the angle between the view direction and the normal.
                The closer the view direction is to the normal, the smaller the sampling offset needs to be, so fewer layers suffice; otherwise more layers are needed. */
                float minLayers = 20;
                float maxLayers = 100;
                float numLayers = lerp(maxLayers, minLayers, abs(dot(float3(0.0,0.0,1.0), viewDirTS)));
                // Height of each layer
                float layerHeight = 1.0 / numLayers;
                // Current layer height
                float currentLayerHeight = 0.0;
                // Offset along the view direction
                float2 offsetuv = viewDirTS.xy / viewDirTS.z * _HeightScale;
                // UV offset per layer
                float2 deltaTexCoords = offsetuv / numLayers;
                // Current UV
                float2 currentTexcoords = texCoords;
                // Height-map value at the current UV
                float currentHeightMapValue = tex2D(_HeightTex, currentTexcoords).r;

                while(currentLayerHeight < currentHeightMapValue)
                {
                    // Step the UV by one layer
                    currentTexcoords += deltaTexCoords;
                    // Sample the height map at the new UV
                    currentHeightMapValue = tex2Dlod(_HeightTex, float4(currentTexcoords, 0, 0)).r;
                    // Advance the current layer height
                    currentLayerHeight += layerHeight;
                }

                return currentTexcoords;
            }

            // Parallax occlusion mapping
            float2 Custom_SPM(float2 texCoords, float3 viewDirTS)
            {
                // Number of height layers
                /* Trick: the layer count is decided by the angle between the view direction and the normal.
                The closer the view direction is to the normal, the smaller the sampling offset needs to be, so fewer layers suffice; otherwise more layers are needed. */
                float minLayers = 20;
                float maxLayers = 100;
                float numLayers = lerp(maxLayers, minLayers, abs(dot(float3(0.0,0.0,1.0), viewDirTS)));
                // Height of each layer
                float layerHeight = 1.0 / numLayers;
                // Current layer height
                float currentLayerHeight = 0.0;
                // Offset along the view direction
                float2 offsetuv = viewDirTS.xy / viewDirTS.z * _HeightScale;
                // UV offset per layer
                float2 deltaTexCoords = offsetuv / numLayers;
                // Current UV
                float2 currentTexcoords = texCoords;
                // Height-map value at the current UV
                float currentHeightMapValue = tex2D(_HeightTex, currentTexcoords).r;

                while(currentLayerHeight < currentHeightMapValue)
                {
                    // Step the UV by one layer
                    currentTexcoords += deltaTexCoords;
                    // Sample the height map at the new UV
                    currentHeightMapValue = tex2Dlod(_HeightTex, float4(currentTexcoords, 0, 0)).r;
                    // Advance the current layer height
                    currentLayerHeight += layerHeight;
                }

                // Previous sample point
                float2 preTexcoords = currentTexcoords - deltaTexCoords;

                // Linear interpolation
                float afterHeight = currentHeightMapValue - currentLayerHeight;
                float beforeHeight = tex2D(_HeightTex, preTexcoords).r - (currentLayerHeight - layerHeight);
                float weight = afterHeight / (afterHeight - beforeHeight);
                float2 finalTexcoords = preTexcoords * weight + currentTexcoords * (1.0 - weight);

                return finalTexcoords;
            }

            float2 ReliefMapping(float2 texCoords, float3 viewDirTS){
                float3 startPoint = float3(texCoords,0);
                float h = tex2D(_HeightTex, texCoords).r;
                viewDirTS.xy *= _HeightScale;
                int linearStep = 40;
                int binarySearch = 8;
                float3 offset = (viewDirTS/viewDirTS.z)/linearStep;
                // Linear search: march along the view ray until it goes below the height-field surface
                for(int index=0;index<linearStep;index++){
                    float depth = 1 - h;
                    if (startPoint.z < depth){
                        startPoint += offset;
                        // Re-sample the height at the marched position (sampling it only once before the loop would break the search)
                        h = tex2Dlod(_HeightTex, float4(startPoint.xy, 0, 0)).r;
                    }
                }
                float3 biOffset = offset;
                // Binary search: refine the intersection found by the linear search
                for (int index=0;index<binarySearch;index++){
                    biOffset = biOffset / 2;
                    float depth = 1 - h;
                    if (startPoint.z < depth){
                        startPoint += biOffset;
                    }else{
                        startPoint -= biOffset;
                    }
                    h = tex2Dlod(_HeightTex, float4(startPoint.xy, 0, 0)).r;
                }
                float2 finalTexCoords = startPoint.xy;

                return finalTexCoords;
            }

            float4 frag (v2f i) : SV_Target
            {
                // Prepare vectors
                half3x3 TBN = half3x3(i.tangentDirWS, i.bitangentDirWS, i.normalDirWS);

                half3 viewDirWS = normalize(_WorldSpaceCameraPos.xyz - i.posWS.xyz);
                half3 viewDirTS = normalize(mul(TBN, viewDirWS));
                // Parallax mapping
                #if _S_PM
                float2 pm_uv = ParallaxMapping(i.uv0, viewDirTS);
                // Steep parallax mapping
                #elif _S_SPM 
                float2 pm_uv = SPM(i.uv0, viewDirTS);
                // Parallax occlusion mapping
                #elif _S_CSPM
                float2 pm_uv = Custom_SPM(i.uv0, viewDirTS);
                #else
                float2 pm_uv = ReliefMapping(i.uv0, viewDirTS);
                #endif
            
                
                half3 normalDirTS = UnpackNormal(tex2D(_NormalTex, pm_uv)).xyz;
                half3 normalDirWS = normalize(lerp(i.normalDirWS,mul(normalDirTS, TBN),_NormalScale));
                half3 lightDirWS = normalize(_WorldSpaceLightPos0.xyz);
                half3 halfDirWS = normalize(lightDirWS + viewDirWS);
                half3 reflectDirWS = normalize(reflect(-lightDirWS, normalDirWS));

                // Prepare dot products
                half NdotL = dot(normalDirWS, lightDirWS);
                half NdotH = dot(normalDirWS, halfDirWS);
                half VdotR = dot(viewDirWS, reflectDirWS);

                // Lighting model
                // Ambient
                half4 MainTex = tex2D(_MainTex, pm_uv) * _MainCol;
                half3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz * _Diffuse * MainTex.rgb;
                
                half3 diffuse = lerp(ambient * _Diffuse.rgb * MainTex.rgb, max(0, NdotL) * _LightColor0.rgb * _Diffuse.rgb * MainTex, _Gloss / 255);
                // blinnphong
                half3 specular = pow(max(0, NdotH), _Gloss) * _LightColor0.rgb * _Specular.rgb;
                // phong
                //specular = pow(max(0, VdotR), _Gloss) * _LightColor0.rgb * _Specular.rgb;

                specular = lerp(diffuse * specular, specular, _Gloss / 255);

                half3 BlinnPhong = ambient + diffuse + specular;

                float3 finalRGB = BlinnPhong;
                return float4(finalRGB,1.0);
            }
            ENDCG
        }
    }
}

6. Gamma correction

  • Gamma correction

        1. Color space :

         2. Transfer functions: ① the opto-electronic transfer function (OETF) converts the scene's linear light into a nonlinear video signal value; ② the electro-optical transfer function (EOTF) converts the nonlinear video signal value into display luminance.

        3. The concept of Gamma correction :

        That is, Gamma correction refers to the encoding and decoding operations between linear tristimulus (color) values and nonlinear video signal values.

         Example:

        

         4. Why Gamma correction is needed:

        ① The purpose of the nonlinear conversion is mainly to optimize storage space and bandwidth; the transfer function helps us make better use of the encoding space.

        ② Since the image data we display is 8-bit and the human eye is more sensitive to changes in dark values, making full use of the bandwidth means spending more code values on the dark range. In other words, dark values are stored with higher precision and bright values with lower precision.
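        The encode/decode pair as it typically appears in shader code (a sketch using the approximate 2.2 exponent; the exact sRGB curve is piecewise and differs slightly):

            float3 LinearToGammaApprox(float3 c) { return pow(c, 1.0 / 2.2); } // encode: brighten, spend more codes on darks
            float3 GammaToLinearApprox(float3 c) { return pow(c, 2.2); }       // decode: back to linear light for shading math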

  • Weber's law

 

         When the human eye perceives brightness, it is obviously more sensitive to changes in dark parts than in bright parts.

        1. Concept: the difference threshold of sensation changes with the magnitude of the original stimulus, following a regular pattern. As a formula, ΔΦ/Φ = C, where Φ is the original stimulus magnitude, ΔΦ is the difference threshold (just-noticeable difference), and C is a constant, also known as the Weber fraction.

        (The stronger the original stimulus, the larger the increase must be before a person perceives a noticeable change; the law holds mainly for stimuli of moderate intensity.)

        2. Conclusion :

        ①The human eye is more sensitive to changes in dark parts than bright parts.

        ②The true color format RGBA32 we are currently using has only 8 bits for each color channel to record information. In order to use bandwidth and storage space reasonably, non-linear conversion is required.

        ③At present, the sRGB color space standard commonly used by us has a transfer function gamma value of 2.2 (2.4).

        sRGB is very good: it displays images efficiently with limited data, but it causes problems when we perform graphics computations, because sRGB is a nonlinear space. Since it devotes more code values to the dark range, an encoded 0.5 does not mean a linear 0.5 but roughly 0.21.

  • CRT (cathode ray tube)

        1. Concept: early image displays were all CRTs. It was found that the luminance of these devices is not linear in the input voltage; it roughly follows a power law with a gamma of about 2.2 (which happens to darken the gamma-encoded, brightened image back so that it displays correctly).

         2. Middle gray: middle gray is not a fixed numeric value either; it depends on visual perception.

  • Linear workflow

        1. Concept: a technical workflow that adjusts image gamma so that images are processed and displayed in a linearized way.

        *If textures authored in gamma space are used, they must be converted from gamma space to linear space before being passed to the shader.

        2. If we do not render in linear space:

        ① Brightness addition: adding brightness in a nonlinear space leads to overexposure (values > 1), because gamma-encoded brightness values are larger than their linear counterparts.

        ② Color blending: if nonlinear colors are not converted before blending, dark fringes appear at the boundaries between pure colors.

        ③ Lighting calculation: if, during lighting, we treat the (perceptual) brownish gray of 0.5 in nonlinear space as an actual physical light intensity of 0.5, we get the situation on the left. It is 0.5 in display space, but its actual physical intensity in rendering space is about 0.18 (as in the right image).

  •  Color space in Unity

        1. Settings:

        ① When Gamma Space is selected, Unity does no conversion.

        ② When Linear Space is selected, the engine's rendering pipeline computes in linear space. Ideally the project's textures store linear colors and do not need the sRGB flag; textures marked as sRGB are converted to linear at sampling time by the hardware.

        2. Hardware support:

        3. Hardware feature support

  •  Resource export settings

         1.SP

        When the texture of Substance is exported, the linear color value undergoes gamma transformation, and the color is brightened, so you need to check the sRGB option in Unity so that it can return to the linear value when sampling.

        2.PS  

        ① If you work in linear space, generally you do not need to change anything in Photoshop; you only need to enable sRGB on the exported texture in Unity. If instead you set Photoshop's gamma to 1, the exported texture should not have sRGB enabled in Unity.

  

        ②Document Color Profile:

        Photoshop's color management is particularly strict. Colors seen in Unity pass through the monitor's gamma conversion, but Photoshop's do not: Photoshop reads the monitor's Color Profile and compensates in the opposite direction.

        Photoshop also has a second profile, the Document Color Profile. Its default is usually the sRGB Color Profile, consistent with the monitor's profile. Colors are darkened by this profile, so the result seen in Photoshop matches what is seen in Unity.

        ③ Translucent blending: blending in Unity is linear, whereas when Photoshop blends layers, each upper layer is gamma-transformed before blending. In Photoshop's color settings, enable "Blend RGB Colors Using Gamma" and set the value to 1.0 so that layers blend directly (linearly).

7. LDR and HDR

  • basic concept

        1. LDR = Low Dynamic Range

                ① 8-bit precision (ie  2^{8}= 256 (0~255))

                ②Single channel 0-1

                ③Color picker, general picture, computer screen

                ④ Commonly used LDR image storage format jpg/png, etc. 

        2. HDR = High Dynamic Range

                ① Much higher precision than 8 bits

                ②Single channel can exceed 1

                ③HDRI, real world 

                ④ Commonly used HDR image storage formats include hdr/tif/exr/raw, etc. (many of which are commonly used by cameras)

        3. Dynamic Range (Dynamic Range) = maximum brightness / minimum brightness

               Tone mapping: It is used to map the color from the original tone (usually high dynamic range, HDR) to the target tone (usually low dynamic range, LDR). The result of the mapping is displayed through the medium, and under the action of the visual characteristics of the human eye, the effect of restoring the original scene as much as possible is achieved.    

  • Why do you need HDR

        1. It can have better colors, higher dynamic range and richer details, and effectively prevent overexposure of the picture. Colors with a brightness value exceeding 1 can also be well represented. The brightness of pixels becomes normal, and the visual communication is more real.

        2. Only HDR allows values exceeding 1, which is what makes bloom possible; high-quality bloom reflects the rendering quality of the picture.

        3. Source image of HDR :

  • HDR in Unity

        1. Camera-HDR settings :

         2. Lightmap HDR settings :

        Selecting High Quality will enable HDR lightmap support, while Normal Quality will switch to using RGBM encoding.

        RGBM encoding: stores the color in the RGB channel and the multiplier (M) in the Alpha channel.
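        A hedged sketch of what RGBM decoding looks like; the maximum-range constant here (8.0) is an assumption for illustration, since engines choose their own range (Unity's actual lightmap decode constants come from unity_Lightmap_HDR):

            #define RGBM_RANGE 8.0 // illustrative constant, not Unity's

            float3 DecodeRGBM(float4 rgbm)
            {
                // RGB stores the color, A stores a shared multiplier; HDR value = rgb * (a * range)
                return rgbm.rgb * (rgbm.a * RGBM_RANGE);
            }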

         3. HDR settings of the color picker:

         4. Advantages and disadvantages of HDR :

       Advantages:

        ①The part of the picture whose brightness exceeds 1 will not be cut off, which increases the details of the bright part and reduces the exposure 

        ②Reduce the sense of gradation in the dark part of the screen   

        ③ Better support for bloom effect

        Disadvantages:

        ① Slower rendering and higher video-memory use

        ②Does not support hardware anti-aliasing

        ③ Some low-end mobile phones do not support

  • HDR and Bloom

        1. Implementation process:

Render the original image → extract the bright pixels above a certain threshold → Gaussian-blur the bright pixels → composite the blurred halo back onto the image
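A minimal sketch of the "extract bright pixels" step that precedes the blur (the luminance weights and threshold handling are the commonly used ones, not something these notes specify):

            float3 ExtractBright(float3 hdrColor, float threshold)
            {
                // Perceptual luminance of the HDR pixel
                float luma = dot(hdrColor, float3(0.2126, 0.7152, 0.0722));
                // Keep only the portion above the threshold; this is what gets Gaussian-blurred and added back
                float contribution = max(0.0, luma - threshold) / max(luma, 1e-4);
                return hdrColor * contribution;
            }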

        2. Unity Bloom process

  •  HDR and ToneMapping

        1. Concept:

The picture on the left is an example of a simple linear mapping, but it does not match the real-world response we are used to.

         2. ACES curve:

        ACES is the Academy Color Encoding System; its tone-mapping curve is currently the most popular and widely used. Effect: contrast is improved while details in both dark and bright areas are well preserved.
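        A widely quoted filmic approximation of the ACES curve (Krzysztof Narkowicz's fit), given as a sketch of the shape described above rather than the full ACES pipeline:

            float3 ACESFilmApprox(float3 x)
            {
                // Curve-fit constants; maps HDR input down to [0,1] while keeping contrast
                const float a = 2.51, b = 0.03, c = 2.43, d = 0.59, e = 0.14;
                return saturate((x * (a * x + b)) / (x * (c * x + d) + e));
            }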

        

 

  • LUT(Lookup Table)

        1. Concept: essentially a filter. Through a LUT, one set of RGB values in an LDR image can be mapped to another set of RGB values, changing the exposure and color of the picture.

        2. 3D LUT: a LUT that remaps all three RGB channels

 

        3. The LUT can be adjusted in PS, and the exported LUT can be used as a filter to adjust the screen.

  • Operation 

IBL: Image-Based Lighting ( Image based lighting, IBL) is a collection of lighting techniques. Its light source is not a decomposable direct light source , but treats the surrounding environment as a whole as a large light source. There are four common types of IBLs used in modern rendering engines:

        ① Distant light probes, used to capture lighting information at "infinity", where parallax can be ignored. Distant probes typically include the sky, distant landscape features, or buildings. They can be captured by the rendering engine or obtained from a camera as high-dynamic-range images.
        ② Local light probes, used to capture a certain area of the world from a specific point. The capture is projected onto a cube or sphere, depending on the surrounding geometry. Local probes are more accurate than distant probes and are especially useful for adding local reflections to materials.
        ③ Planar reflections, which capture reflections by rendering a mirrored scene. This technique only works for flat surfaces such as building floors, roads, and water.
        ④ Screen-space reflections, which capture reflections by ray-marching the depth buffer of the rendered scene. SSR works great but can be very expensive.

        
Original link: https://blog.csdn.net/JMXIN422/article/details/123180206 (I don’t understand it very well for now)

8. FlowMap: implementing flow effects

  • FlowMap

        1. Concept: a texture that records 2D vector information; the color on the flow map (usually the RG channels) records the direction of the vector field at that point, letting each point on the model exhibit a directional flow. By offsetting the UVs in the shader before sampling the texture, a flow effect is simulated.

        2. Advantages: easy to implement and computationally cheap. It is similar to a UV animation rather than a vertex animation; in other words, the model's vertices do not need to be moved, which keeps the implementation simple and the overhead low. It is not limited to water surfaces: any flow-related effect can use a flow map.

        3. Example :

        4. Pre-understanding: UV mapping :        

 

         * UV coordinate conventions may differ between engines

  • FlowMap shader

        1. Basic process :

                ① Sample the flow map to obtain the vector-field information

                ② Use that vector-field information to make the texture-sampling UVs change over time

                ③ Sample the same texture twice with a phase difference of half a cycle and linearly interpolate, so that the flow of the texture stays continuous

        2. Implementation method :

                ① Goal: make the texture offset over time according to the value stored in the flow map.

                    Implementation: uv - time (it is the UV, not the vertex position, that changes)

                    Why not uv + time: as time increases, a given point on the model samples pixels that lie farther and farther away along the offset direction, so the content appears to move toward that point; the visual result is the opposite of the direction we intuitively expect.

                ② Goal: obtain the flow direction (and strength) from the flow map

                    Implementation: the flow map cannot be used directly; its color values must be remapped from the range [0,1] to the direction-vector range [-1,1], as in the snippet below.

//Adjust the sampling UV to: adjust_uv = uv - flowDir * time
//Here _FlowSpeed is used to control the strength of the vector field

//Fetch the vector from the flowmap and remap it from [0,1] to [-1,1]
float3 flowDir = tex2D(_FlowMap, i.uv).rgb * 2.0 - 1.0;

//_FlowSpeed scales the vector strength; the larger the value, the more pronounced
//the difference in flow speed between positions
flowDir *= -_FlowSpeed;

                ③ Goal: keep the offset within a bounded range so the distortion does not grow ever more exaggerated over time, and make the motion periodic and seamlessly looping

                    Implementation: blend two samples whose phases differ by half a cycle, with weights chosen so that the unnatural jump when one layer's flow restarts its cycle is covered by the other layer

         3. Modifying a normal map with a flow map (see the sketch below):
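        The item above is left blank in the notes; a hedged sketch of one common approach is to reuse the same two-phase flow offsets to sample a normal map and blend the unpacked normals (_NormalMap is an assumed property; UnityCG.cginc is assumed to be included, as in the shader below):

sampler2D _NormalMap;      // tangent-space normal map (assumed property)

// Sample the normal map twice with the same flow-offset UVs used for _MainTex,
// then blend with the same weight so the normals flow along with the color.
half3 FlowNormal(float2 uv, float2 flowDir, float phase0, float phase1, float flowLerp)
{
    half3 n0 = UnpackNormal(tex2D(_NormalMap, uv - flowDir * phase0));
    half3 n1 = UnpackNormal(tex2D(_NormalMap, uv - flowDir * phase1));
    return normalize(lerp(n0, n1, flowLerp));
}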

  • Flow map creation

        1. Flowmap Painter:

Download link: http://teckartist.com/?page_id=107

         2. Houdini Labs:

 

 

 

 

  •  Operation

Reference: https://blog.csdn.net/weixin_51327051/article/details/123577801

Learned the usage of #pragma shader_feature _REVERSE_ON

The left picture shows a vertex animation, which can only make the surface flow in a single direction; with a flow map, the flow direction can be varied at any point.

Shader "30_Flowmap"
{
    Properties
    {
        _MainTex ("MainTex", 2D) = "white" {}
        _FlowMap ("FlowMap", 2D) = "black" {}
        _FlowSpeed ("Intensity", range(0,10)) = 1.0
        _TimeSpeed ("FlowSpeed" , float) = 50.0
        [Toggle]_reverse("Reverse", Int) = 0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 100
        Cull off
        Lighting off 
        ZWrite On

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            // make fog work
            #pragma multi_compile_fog
            #pragma shader_feature _REVERSE_ON
            #include "UnityCG.cginc"

            sampler2D _MainTex; float4 _MainTex_ST;
            sampler2D _FlowMap; float4 _FlowMap_ST;
            float _FlowSpeed;
            float _TimeSpeed;

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                //UNITY_FOG_COORDS(1)
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                //UNITY_TRANSFER_FOG(o,o.vertex);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                //Fetch the flow vector from the flowmap; texture values in [0,1] map to directions in [-1,1]
                float4 flowDir = tex2D(_FlowMap, i.uv) * 2.0 - 1.0;
                //Intensity correction
                flowDir *= -_FlowSpeed;

                //Sign correction (reverse the flow direction when the toggle is on)
                #ifdef _REVERSE_ON
                    flowDir *= -1;
                #endif

                //Two phases, half a cycle apart (_Time.y is the elapsed time in seconds)
                float phase0 = frac(_Time.y * 0.1 * _TimeSpeed);
                float phase1 = frac(_Time.y * 0.1 * _TimeSpeed + 0.5);

                //Apply the main texture's tiling and offset to the UV
                float2 uv = i.uv * _MainTex_ST.xy + _MainTex_ST.zw;

                //Sample the texture twice, offset along the flow direction by the two phases
                half3 tex0 = tex2D(_MainTex, uv - flowDir.xy * phase0).rgb;
                half3 tex1 = tex2D(_MainTex, uv - flowDir.xy * phase1).rgb;

                //Blend weight: near a phase's wrap point (its maximum offset) that sample's weight
                //goes to 0, so the discontinuity is hidden by the other sample
                float flowLerp = abs((0.5 - phase0) / 0.5);
                half3 finalColor = lerp(tex0, tex1, flowLerp);
                float4 col = float4(finalColor,1.0);
                // apply fog
                //UNITY_APPLY_FOG(i.fogCoord, col);
                return col;
            }
            ENDCG
        }
    }
}

Nine, GPU hardware architecture

  • GPU

        1. GPU overview: Graphics Processing Unit. Its function originally matched its name: a dedicated chip for drawing images and processing primitive data. Many other functions were gradually added later.

        The GPU is the core component of a graphics card; the card also includes heat sinks, communication components, and the various connectors to the motherboard and display.

        2. GPU physical architecture: thanks to nanometer-scale fabrication, a GPU can integrate hundreds of millions of transistors and electronic devices into a small chip. In terms of macroscopic physical structure, most modern desktop GPUs are about the size of a few coins, and some are even smaller than a coin. A graphics card cannot work on its own; it must be mounted on the motherboard and combined with the CPU, system memory, video memory, monitor and other hardware to form a complete PC.

        3. The common points of GPU micro-physical architecture :               

                GPC (graphics processing cluster), TPC (texture processing cluster), Thread, SM / SMX / SMM (Streaming Multiprocessor), Warp (a group of threads scheduled together), Warp Scheduler, SP (Streaming Processor), Core (the unit that performs mathematical operations), ALU (arithmetic logic unit), FPU (floating-point unit), SFU (special function unit), ROP (Render Output Unit), Load/Store Unit, L1 Cache, L2 Cache, Shared Memory, Register File

        * Why do GPUs have so many layers and so many similar parts? Because GPU workloads are naturally parallel, and modern GPU architectures are designed for a high degree of parallelism.

        Core component hierarchy: GPC --> TPC --> SM --> Core

                 SM includes the PolyMorph Engine (polygon engine), L1 Cache, Shared Memory, Cores (the units that perform mathematical operations), etc.

                A Core includes an ALU, an FPU, an execution context, and instruction fetch (Fetch) and decode (Decode) logic.

        
