Unity shader - texture sampling

Table of contents

1. What is UV  

2. Bump texture

3. Gradient Texture Mapping

4. Mask texture


1. What is UV  

A 3D model has two essential coordinate sets: the vertex positions (X, Y, Z) and the UV coordinates. What are UVs? Simply put, they are the basis for mapping a texture onto the surface of a model. Strictly speaking they should be called UVW (X, Y and Z are already taken, so three other letters are used). U and V are the horizontal and vertical coordinates within the texture, usually in the range 0 to 1, i.e. (the Uth pixel horizontally / texture width, the Vth pixel vertically / texture height). What about W? The texture is two-dimensional, so why a third coordinate? W points perpendicular to the texture plane and is generally used for procedural mapping or 3D (volume) texture techniques, which do exist but are not common in games, so we usually just say UV.

Define the texture:

Properties
{
    // Main texture
    _MaxTex ("_MaxTex",2d) = "white" {}
}

sampler2D _MaxTex;
float4 _MaxTex_ST;

_MaxTex_ST: the tiling and offset property that Unity pairs with every texture (the _ST suffix is appended to the texture property's name, so the conventional _MainTex would get _MainTex_ST). It appears as Tiling and Offset on the material inspector; the xy components of the float4 store the tiling (scale) and the zw components store the offset.

Recalculate the UV from the mesh texcoord: uv = texcoord.xy * _MaxTex_ST.xy + _MaxTex_ST.zw

Or use the built-in macro TRANSFORM_TEX(uv,_MaxTex), which expands to the same expression.

tex2D(_MaxTex,uv) samples _MaxTex and returns the color at the given UV coordinates.
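As a minimal sketch (reusing the _MaxTex declarations above; o.uv, v.texcoord and i.uv follow the v2f layout used in the complete shader below), the manual calculation and the macro are interchangeable, and tex2D then reads the texture at that UV:

// In the vertex shader: both lines produce the same UV
o.uv = v.texcoord.xy * _MaxTex_ST.xy + _MaxTex_ST.zw;   // manual tiling + offset
// o.uv = TRANSFORM_TEX(v.texcoord, _MaxTex);           // built-in macro, same result

// In the fragment shader: sample the texture at that UV
fixed4 col = tex2D(_MaxTex, i.uv);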

The complete code is below. It combines diffuse and specular lighting; if anything is unclear, see the earlier posts on diffuse reflection and specular reflection.

Shader "Unlit/005"
{
    Properties
    {
        // Diffuse color
        _Diffuse ("Diffuse",Color) = (1,1,1,1)
        // Specular color
        _Specular ("Specular",Color) = (1,1,1,1)
        // Specular exponent (gloss)
        _Gloss ("_Gloss",range(1,100)) = 5
        // Main texture
        _MaxTex ("_MaxTex",2d) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            
            sampler2D _MaxTex;
            float4 _MaxTex_ST;
            float4 _Specular;
            float4 _Diffuse;
            float _Gloss;
            
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD1;
                float3 worldNormal : TEXCOORD0;
                float3 viewDir : TEXCOORD2;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                // Transform the vertex from object space to homogeneous clip space
                o.vertex = UnityObjectToClipPos(v.vertex);
                // Compute the UV coordinates
                o.uv = TRANSFORM_TEX(v.texcoord,_MaxTex);
                // o.uv = v.texcoord.xy * _MaxTex_ST.xy + _MaxTex_ST.zw;
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                // WorldSpaceViewDir expects the object-space vertex position
                o.viewDir = normalize(WorldSpaceViewDir(v.vertex));
                return o;   
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Sample the main texture
                float3 texColor = tex2D(_MaxTex,i.uv).rgb;

                // Diffuse (half-Lambert)
                float3 diffuse = texColor * _LightColor0.rgb * _Diffuse.rgb * (dot(normalize(_WorldSpaceLightPos0.xyz),i.worldNormal) * 0.5 + 0.5);
                // Half vector for the Blinn-Phong specular term
                float3 halfVector = normalize(normalize(_WorldSpaceLightPos0.xyz) + i.viewDir);
                // Specular term
                float3 specular = texColor * _LightColor0.rgb * _Specular.rgb * pow(max(0,dot(halfVector,i.worldNormal)),_Gloss);
                
                return float4(diffuse + specular,1);
            }
            ENDCG
        }
    }
}

2. Bump texture

The essence of bump mapping is to create a bumpy appearance with a texture that stores per-pixel normal information; the model geometry itself is not changed.

Why does changing the normal create a bumpy result? Because the normal's job is to drive the lighting calculation. In real life objects look three-dimensional because light casts shadows on them and creates a distinction between lit and dark areas. By perturbing the normals, a bump map indirectly changes how light and shade fall across the surface, which is what makes it look three-dimensional.

There are two types of bump textures:

  • Height map: a height map stores not normals but a single intensity value per pixel that represents the surface height. The darker the pixel, the lower (more concave) that point is, and the lighter the pixel, the higher it is. The gray values have to be converted into surface normals before lighting can be computed, which is more complicated and less convenient for artists.

  • Normal map: stores the surface normal directions directly.

The normal texture stores the surface normal direction. Since each component of a normal lies in [-1, 1] while each component of a pixel lies in [0, 1], the normal has to be mapped when it is baked into the texture:

pixel = normal * 0.5 + 0.5

Therefore, after sampling the normal texture in the shader, we must apply the inverse mapping:

normal = pixel * 2 - 1
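
A minimal sketch of this decode step in CG, using the same names (_BumpMap, i.uv) as the shaders further down; for an uncompressed normal map this is essentially what Unity's built-in UnpackNormal does:

// Sample the packed normal (components in [0, 1])
fixed4 packedNormal = tex2D(_BumpMap, i.uv);
// Inverse mapping back to [-1, 1]
fixed3 tangentNormal = packedNormal.xyz * 2 - 1;
// Or simply: fixed3 tangentNormal = UnpackNormal(packedNormal);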

Why are normal maps mostly light blue?

Light blue means the map stores normals in tangent space. The unperturbed normal of a surface point is (0, 0, 1), which the mapping above turns into the pixel color (0.5, 0.5, 1), i.e. light blue.

Normal maps come in two flavors:

  • Normal coordinates in model space

  • Normal coordinates in tangent space

Model space normals:

Advantages: the model's normals can be used directly without any space transformation, and because all normals live in the same coordinate space, linear interpolation at edges and corners gives smoother results.

Disadvantages: because model space stores absolute normals, the map only applies to the model it was baked for; using it on a different model gives incorrect results, and UV offsets cannot be applied.

Tangent space normals:

  • Higher degree of freedom.
  • A tangent-space normal texture stores relative normals, so it gives reasonable results even on a different mesh.
  • UV animation is possible: scrolling the texture's UV coordinates produces a moving bump effect.
  • The normal texture can be reused across models.
  • It can be compressed. In tangent space the z component is always positive, so only xy need to be stored and z can be reconstructed as z = sqrt(1 - saturate(dot(xy, xy))) (see the sketch after this list).
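
A minimal sketch of that reconstruction in CG, using the same names (packedNormal, _BumpScale) as the full shader further down:

fixed3 tangentNormal;
// xy come straight from the texture, remapped to [-1, 1] and scaled
tangentNormal.xy = (packedNormal.xy * 2 - 1) * _BumpScale;
// z is recovered from the unit-length constraint; saturate keeps the sqrt argument non-negative
tangentNormal.z = sqrt(1 - saturate(dot(tangentNormal.xy, tangentNormal.xy)));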

How to achieve?

To compute lighting, every vector must be in the same coordinate space, either tangent space or world space.

Tangent space: transform the light direction and view direction into tangent space in the vertex shader, then evaluate the lighting model in the fragment shader.

World space: transform the sampled normal into world space. This is useful when the world-space normal and light direction are needed anyway, for example to sample a cubemap for reflections. Since the normal texture is read per pixel, both the sampling and the matrix transformation have to happen in the fragment shader, which costs one more matrix operation per fragment than the tangent-space approach.

Let's first implement normal mapping in tangent space.

Next, we need to transform the light direction and the view direction into tangent space. To do that we need the rotation matrix of this transformation. What we know are the three basis vectors after the transformation (the tangent-space axes expressed in tangent space):

  • t = (1,0,0)
  • b = (0,1,0)
  • n = (0,0,1)

and the same three vectors before the transformation (expressed in object space):

  • t′ = (xt, yt, zt)
  • b′ = (xb, yb, zb)
  • n′ = (xn, yn, zn)

Let the matrix that transforms from tangent space to object space have columns c1, c2, c3. Since it maps (1,0,0) to t′, (0,1,0) to b′ and (0,0,1) to n′, it can be obtained that c1 = t′, c2 = b′, c3 = n′. Because t′, b′ and n′ form an orthonormal basis, the inverse transformation (from object space to tangent space) is simply the transpose, i.e. the matrix whose rows are t′, b′ and n′, and that is exactly the matrix built in the shader code below.

shader code:

// Compute the bitangent vector
float3 biNormal = cross(normalize(v.normal), normalize(v.tangent.xyz)) * v.tangent.w;
// Build the rotation matrix (rows: tangent, bitangent, normal)
float3x3 rotation = float3x3(v.tangent.xyz, biNormal, v.normal);

Alternatively, you can use Unity's built-in macro

TANGENT_SPACE_ROTATION

defined in UnityCG.cginc, which writes exactly the tangent-space rotation matrix derived above for us.
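
For reference, the macro is defined in UnityCG.cginc roughly as follows; it assumes the vertex input struct is named v and contains normal and tangent fields, and it declares the locals binormal and rotation:

#define TANGENT_SPACE_ROTATION \
    float3 binormal = cross( normalize(v.normal), normalize(v.tangent.xyz) ) * v.tangent.w; \
    float3x3 rotation = float3x3( v.tangent.xyz, binormal, v.normal )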

Shader "Unlit/006"
{
    Properties
    {
        // Diffuse color
        _Diffuse ("Diffuse",Color) = (1,1,1,1)
        // Specular color
        _Specular ("Specular",Color) = (1,1,1,1)
        // Specular exponent (gloss)
        _Gloss ("_Gloss",range(1,100)) = 5
        // Main texture
        _MaxTex ("MaxTex",2d) = "white" {}
        // Normal texture ("bump" gives a flat normal when nothing is assigned)
        _BumpMap ("Bump Map",2d) = "bump" {}
        // Controls the bump intensity
        _BumpScale ("Bump Scale",float) = 1
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM 
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            
            sampler2D _MaxTex;
            float4 _MaxTex_ST;
            float4 _Specular;
            float4 _Diffuse;
            float _Gloss;
            
            sampler2D _BumpMap; 
            float4 _BumpMap_ST;
            float _BumpScale;
            
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float2 normalUv : TEXCOORD1;
                float3 lightDir : TEXCOORD2;
                float3 viewDir : TEXCOORD3;
            };

            v2f vert (appdata_tan v)
            {
                v2f o;
                // Transform the vertex from object space to homogeneous clip space
                o.vertex = UnityObjectToClipPos(v.vertex);
                // Compute the UV coordinates
                o.uv = TRANSFORM_TEX(v.texcoord,_MaxTex);
                o.normalUv = v.texcoord.xy * _BumpMap_ST.xy + _BumpMap_ST.zw;
                
                // TANGENT_SPACE_ROTATION
                // Compute the bitangent vector
                float3 biNormal = cross(normalize(v.normal),normalize(v.tangent.xyz))*v.tangent.w;
                // Build the rotation matrix (rows: tangent, bitangent, normal)
                float3x3 rotation = float3x3(v.tangent.xyz,biNormal,v.normal);
                
                // Light direction in tangent space
                o.lightDir = normalize(mul(rotation,UnityWorldToObjectDir(_WorldSpaceLightPos0.xyz)));
                // View direction in tangent space (note: ObjSpaceViewDir, not ObjSpaceLightDir)
                o.viewDir = normalize(mul(rotation,ObjSpaceViewDir(v.vertex)));
            	
                return o;   
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Sample the main texture
                fixed4 texColor = tex2D(_MaxTex,i.uv);
                // Sample the normal map
                fixed4 packedNormal = tex2D(_BumpMap,i.normalUv);
                fixed3 tangentNormal;
                tangentNormal.xy = (packedNormal.xy * 2 - 1) * _BumpScale;
                // saturate keeps the sqrt argument from going negative
                tangentNormal.z = sqrt(1 - saturate(dot(tangentNormal.xy,tangentNormal.xy)));
                
                // Or use the built-in UnpackNormal (the texture type must be set to Normal Map)
                //fixed3 tangentNormal = UnpackNormal(packedNormal);  
                // tangentNormal.xy *= _BumpScale;

                // Diffuse (half-Lambert)
                float3 diffuse = texColor.rgb * _LightColor0.rgb * _Diffuse.rgb * (dot(i.lightDir,tangentNormal) * 0.5 + 0.5);
                // Half vector for the Blinn-Phong specular term
                float3 halfVector = normalize(i.lightDir + i.viewDir);
                // Specular term
                float3 specular = texColor.rgb * _LightColor0.rgb * _Specular.rgb * pow(max(0,dot(halfVector,tangentNormal)),_Gloss);
                
                return float4(diffuse + specular,1);
            }
            ENDCG
        }
    }
}

Assign the main texture and the normal map. A normal map can be generated from the main texture in Photoshop (Filter -> 3D -> Generate Normal Map), but in a real project it is usually provided by the artist.

The shader effect is as follows: the left side has no normal map, the right side uses one; with the normal map the surface clearly looks bumpier.

Next, let's implement normal mapping in world space.

We need to compute the tangent-to-world transformation matrix in the vertex shader, but an interpolator register can hold at most a float4, so the matrix has to be split across three float3 variables. To make full use of the interpolator space we declare them as float4 instead and use the spare component of each one to store the world-space vertex position. The uv variable can likewise be declared as float4, with the xy components storing the main texture coordinates and the zw components storing the normal texture coordinates.

Here is the code, with detailed comments inline:

Shader "Unlit/007"
{
    Properties
    {
        // Diffuse color
        _Diffuse ("Diffuse",Color) = (1,1,1,1)
        // Specular color
        _Specular ("Specular",Color) = (1,1,1,1)
        // Specular exponent (gloss)
        _Gloss ("_Gloss",range(1,100)) = 5
        // Main texture
        _MaxTex ("MaxTex",2d) = "white" {}
        // Normal texture ("bump" gives a flat normal when nothing is assigned)
        _BumpMap ("Bump Map",2d) = "bump" {}
        // Controls the bump intensity
        _BumpScale ("Bump Scale",float) = 1
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM 
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            
            sampler2D _MaxTex;
            float4 _MaxTex_ST;
            float4 _Specular;
            float4 _Diffuse;
            float _Gloss;
            
            sampler2D _BumpMap; 
            float4 _BumpMap_ST;
            float _BumpScale;
            
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float4 uv : TEXCOORD0;
                float4 T2w0 : TEXCOORD1;
                float4 T2w1 : TEXCOORD2;
                float4 T2w2 : TEXCOORD3;
            };

            v2f vert (appdata_tan v)
            {
                v2f o;
                // Transform the vertex from object space to homogeneous clip space
                o.vertex = UnityObjectToClipPos(v.vertex);
                // Compute the UV coordinates: xy for the main texture, zw for the normal map
                o.uv.xy = TRANSFORM_TEX(v.texcoord,_MaxTex);
                o.uv.zw = TRANSFORM_TEX(v.texcoord,_BumpMap);
               
                // Tangent-space basis vectors expressed in world space
                fixed3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);
                fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);
                fixed3 worldBiTangent = cross(worldNormal,worldTangent) * v.tangent.w;
                // World-space position (a position, so use the full object-to-world matrix)
                float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                
                // Pack the rows of the tangent-to-world matrix into xyz and the world position into w
                o.T2w0 = float4(worldTangent.x,worldBiTangent.x,worldNormal.x,worldPos.x); 
                o.T2w1 = float4(worldTangent.y,worldBiTangent.y,worldNormal.y,worldPos.y); 
                o.T2w2 = float4(worldTangent.z,worldBiTangent.z,worldNormal.z,worldPos.z); 
            	
                return o;   
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Sample the main texture
                fixed4 texColor = tex2D(_MaxTex,i.uv.xy);
                // Sample the normal map
                fixed4 packedNormal = tex2D(_BumpMap,i.uv.zw);
                
                // Manual decode (works for an uncompressed normal map):
                //fixed3 tangentNormal;
                //tangentNormal.xy = (packedNormal.xy * 2 - 1) * _BumpScale;
                //tangentNormal.z = sqrt(1 - saturate(dot(tangentNormal.xy,tangentNormal.xy)));
                
                // Or use the built-in UnpackNormal (the texture type must be set to Normal Map)
                fixed3 tangentNormal = UnpackNormal(packedNormal);  
                tangentNormal.xy *= _BumpScale;
                // Recompute z so the perturbed normal stays unit length
                tangentNormal.z = sqrt(1 - saturate(dot(tangentNormal.xy,tangentNormal.xy)));
                
                // Transform the tangent-space normal to world space
                fixed3 worldNormal = normalize(fixed3(dot(i.T2w0.xyz,tangentNormal),dot(i.T2w1.xyz,tangentNormal),dot(i.T2w2.xyz,tangentNormal)));

                // Diffuse (half-Lambert)
                float3 diffuse = texColor.rgb * _LightColor0.rgb * _Diffuse.rgb * (dot(normalize(_WorldSpaceLightPos0.xyz),worldNormal) * 0.5 + 0.5);
                // View direction: the world position was packed into the w components
                float3 viewDir = normalize(_WorldSpaceCameraPos.xyz - float3(i.T2w0.w,i.T2w1.w,i.T2w2.w));
                float3 halfVector = normalize(normalize(_WorldSpaceLightPos0.xyz) + viewDir);
                // Specular term
                float3 specular = texColor.rgb * _LightColor0.rgb * _Specular.rgb * pow(max(0,dot(halfVector,worldNormal)),_Gloss);
                
                return float4(diffuse + specular,1);
            }
            ENDCG
        }
    }
}

The effects are as follows, from left to right: no normal map, tangent-space normal mapping, world-space normal mapping.

3. Gradient Texture Mapping

Textures can be used not only to define an object's color but also to store any surface property. A common use is a gradient (ramp) texture that controls the result of diffuse lighting: sample a ramp that goes from a cool tone to a warm one, and use the sampled value in the diffuse term. This gives an illustration-style look in which the object's outline is more pronounced than with traditional diffuse shading, a technique used in many cartoon-style renderers.

Shader "Unlit/008"
{
    Properties
    {
        // Diffuse color
        _Diffuse ("Diffuse",Color) = (1,1,1,1)
        // Specular color
        _Specular ("Specular",Color) = (1,1,1,1)
        // Specular exponent (gloss)
        _Gloss ("_Gloss",range(1,100)) = 5
        // Ramp (gradient) texture
        _MaxTex ("_MaxTex",2d) = "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM 
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            
            sampler2D _MaxTex;
            float4 _MaxTex_ST;
            float4 _Specular;
            float4 _Diffuse;
            float _Gloss;
            
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
                float3 viewDir : TEXCOORD2;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                // Transform the vertex from object space to homogeneous clip space
                o.vertex = UnityObjectToClipPos(v.vertex);
                // No mesh UV is needed; the ramp is sampled with the half-Lambert term instead
                //o.uv = TRANSFORM_TEX(v.texcoord,_MaxTex);
                // o.uv = v.texcoord.xy * _MaxTex_ST.xy + _MaxTex_ST.zw;
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                // WorldSpaceViewDir expects the object-space vertex position
                o.viewDir = normalize(WorldSpaceViewDir(v.vertex));
                return o;   
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Half-Lambert term
                float halfLambert = (dot(normalize(_WorldSpaceLightPos0.xyz),i.worldNormal) * 0.5 + 0.5);
                // Sample the ramp texture, using the half-Lambert value as the UV coordinate
                float3 texColor = tex2D(_MaxTex,float2(halfLambert,halfLambert)).rgb;
                float3 diffuse = texColor * _LightColor0.rgb * _Diffuse.rgb;
                // Half vector for the Blinn-Phong specular term
                float3 halfVector = normalize(normalize(_WorldSpaceLightPos0.xyz) + i.viewDir);
                // Specular term
                float3 specular = texColor * _LightColor0.rgb * _Specular.rgb * pow(max(0,dot(halfVector,i.worldNormal)),_Gloss);
                
                return float4(diffuse + specular,1);
            }
            ENDCG
        }
    }
}

Effect:

4. Mask texture

In a terrain scene there may be a boundary between grass and desert, and we want to control the specular response of each area separately; say the grass needs highlights and the desert does not. This is what a mask texture is for.

The implementation is very simple: add a mask texture that is white where highlights are wanted and black where they are not (white is 255/255 = 1, and usually a single channel is enough), then multiply the sampled mask value into the specular term.

Shader "Unlit/009"
{
    Properties
    {
        // Diffuse color
        _Diffuse ("Diffuse",Color) = (1,1,1,1)
        // Specular color
        _Specular ("Specular",Color) = (1,1,1,1)    
        // Specular exponent (gloss)
        _Gloss ("_Gloss",range(1,100)) = 5
        // Main texture
        _MaxTex ("_MaxTex",2d) = "white" {}
        // Specular mask texture
        _SpecularMask ("Specular Mask",2d) =  "white" {}
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM 
            #pragma vertex vert
            #pragma fragment frag
            
            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            
            sampler2D _MaxTex;
            float4 _MaxTex_ST;
            sampler2D _SpecularMask;
            float4 _SpecularMask_ST;
            float4 _Specular;
            float4 _Diffuse;
            float _Gloss;
            
            struct v2f
            {
                float4 vertex : SV_POSITION;
                float2 uv : TEXCOORD0;
                float2 maskUv : TEXCOORD1;
                float3 worldNormal : TEXCOORD2;
                float3 viewDir : TEXCOORD3;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                // Transform the vertex from object space to homogeneous clip space
                o.vertex = UnityObjectToClipPos(v.vertex);
                // Compute the UV coordinates
                o.uv = TRANSFORM_TEX(v.texcoord,_MaxTex);
                o.maskUv = TRANSFORM_TEX(v.texcoord,_SpecularMask);
                // o.uv = v.texcoord.xy * _MaxTex_ST.xy + _MaxTex_ST.zw;
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                // WorldSpaceViewDir expects the object-space vertex position
                o.viewDir = normalize(WorldSpaceViewDir(v.vertex));
                return o;   
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // Sample the main texture
                float3 texColor = tex2D(_MaxTex,i.uv).rgb;
                // Half-Lambert term
                float halfLambert = (dot(normalize(_WorldSpaceLightPos0.xyz),i.worldNormal) * 0.5 + 0.5);
                // Diffuse term
                float3 diffuse = texColor * _LightColor0.rgb * _Diffuse.rgb * halfLambert;
                // Half vector for the Blinn-Phong specular term
                float3 halfVector = normalize(normalize(_WorldSpaceLightPos0.xyz) + i.viewDir);
                // Sample the mask texture (a single channel is enough)
                fixed mask = tex2D(_SpecularMask,i.maskUv).r;
                // Specular term, modulated by the mask
                float3 specular = texColor * _LightColor0.rgb * _Specular.rgb * pow(max(0,dot(halfVector,i.worldNormal)),_Gloss) * mask;
                
                return float4(diffuse + specular,1);
            }
            ENDCG
        }
    }
}

Terrain map:       Mask map: 

The effect is as follows: the left is without the mask texture, the right is with it.

Without the mask, highlights appear on both terrain types; with the mask, only the white area of the mask keeps its highlights.


Originally published at blog.csdn.net/weixin_41316824/article/details/131342620