"Introduction to Unity Shader Essentials" Chapter 10 Advanced Textures

Chapter 10 Advanced Texturing

10.1 Cube Textures

In computer graphics, a cube texture (Cubemap) is an implementation of environment mapping (Environment Mapping).
Different from the textures seen before, the cube texture contains a total of 6 images, which correspond to the 6 faces of a cube, and the name of the cube texture comes from this. Each face of the cube represents the image viewed along an axis in world space (up, down, left, right, front, back).
To sample a cube texture, we provide a 3D texture coordinate that represents a direction in world space. This direction vector starts from the center of the cube; when extended outward, it intersects one of the six faces, and the sampling result is computed from that intersection point.
The advantage of using a cube texture is that its implementation is simple and fast, and the results generally look good.
But it also has some disadvantages. For example, when new objects or light sources are introduced into the scene, or when objects move, the cube texture must be regenerated. In addition, a cube texture can only reflect the environment, not the object that uses it, because cube textures cannot simulate multiple reflections. For this reason, we should use cube textures for convex rather than concave surfaces, because concave surfaces reflect themselves.
Cube textures have many applications in real-time rendering, most commonly skyboxes and environment mapping.

10.1.1 Skybox

A skybox (Skybox) is a technique used in games to simulate the background. The name contains two pieces of information: it is used to simulate the sky (although it can also simulate other backgrounds, such as indoor scenes), and it is a box. When a skybox is used, the entire scene is surrounded by a cube, and each face of this cube uses cube texture mapping.
Here's how to customize the skybox in Unity:

  1. First create a material and change its shader to the built-in Skybox/6 Sided shader
  2. Set the textures corresponding to the 6 faces of the skybox. We need to set the Wrap Mode of these 6 textures to Clamp to prevent mismatches at the seams
  3. In the Window → Rendering → Lighting window, assign the new material to the Skybox Material option
  4. Create a new scene, add a camera, and set its Clear Flags to Skybox to get the skybox effect

Note that the skybox set in Window → Rendering → Lighting is applied to all cameras in the scene. If we want a particular camera to use a different skybox, we can override that setting by adding a Skybox component to the camera.
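A minimal sketch of doing this from a script; the component name is made up for this example, and the override material is whatever skybox material you assign in the Inspector:

using UnityEngine;

public class CameraSkyboxOverride : MonoBehaviour
{
    // Skybox material that only this camera should use (assign in the Inspector)
    public Material overrideSkybox;

    void Start()
    {
        // Adding a Skybox component to a camera overrides the scene-wide
        // skybox set in Window → Rendering → Lighting for this camera only
        Skybox skybox = gameObject.GetComponent<Skybox>();
        if (skybox == null)
            skybox = gameObject.AddComponent<Skybox>();
        skybox.material = overrideSkybox;
    }
}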
In Unity, the skybox is rendered after all opaque objects, and the mesh it uses under the hood is a cube or a tessellated sphere.

10.1.2 Creating cube textures for environment mapping

In Unity, there are three ways to create a cube texture for environment mapping: the first is to create it directly from a texture with a special layout; the second is to manually create a Cubemap asset and assign 6 images to it; the third is to generate it from a script.
With the first method, we need to provide a texture with a special layout, such as a cross layout resembling an unfolded cube, or a panoramic layout. We then only need to set the texture's Texture Shape to Cube and choose the layout in the Mapping option, and Unity does the rest for us.
The second method is the legacy approach: we can create a Cubemap via Assets → Create → Legacy → Cubemap and then drag 6 textures into its panel. Unity now officially recommends the first method, because it can compress the texture data and supports features such as edge fixup, glossy reflection, and HDR.
Both of the previous methods require us to prepare the cube texture images in advance, and the resulting cube textures are usually shared by all objects in the scene. Ideally, however, we want to generate a different cube texture for each object depending on its position in the scene. In Unity this can be done from a script, using the Camera.RenderToCubemap function provided by Unity: it renders the scene as observed from a given position into 6 images, creating the corresponding cube texture for that position. The code is as follows:

using UnityEngine;
using UnityEditor;

public class RenderCubeMap : ScriptableWizard {

    // Position from which the cube texture will be rendered
    public Transform renderFromPosition;
    // Target cubemap asset (must be marked Readable)
    public Cubemap cubemap;

    [MenuItem("Cubemap/RenderCubeMap")]
    private static void MenuEntryCall() {
        DisplayWizard<RenderCubeMap>("Render Cubemap", "Render");
    }

    // Called when the wizard's "Render" button is clicked
    private void OnWizardCreate() {
        // Create a temporary camera at the requested position, render the
        // six faces into the cubemap, then destroy the camera again
        var go = new GameObject("CubemapCamera");
        go.AddComponent<Camera>();
        go.transform.position = renderFromPosition.position;
        go.GetComponent<Camera>().RenderToCubemap(cubemap);
        DestroyImmediate(go);
    }
}

After putting the above code into the project, we follow the steps below to create a Cubemap:

  1. Create an empty GameObject. Its position will be used as the point from which the cube texture is rendered
  2. Create a Cubemap via Assets → Create → Legacy → Cubemap and check its Readable option
  3. Open the wizard window via Cubemap → RenderCubeMap
  4. Drag the GameObject created in step 1 and the Cubemap created in step 2 onto the Render From Position and Cubemap options in the window respectively, then click the Render button to generate the Cubemap

Note that we need to set a Face size for the Cubemap. The larger the Face size, the higher the resolution of the rendered cube texture and potentially the better the result, but the more memory it requires; the memory size is displayed at the bottom of the panel.
With the required cube textures ready, we can use environment mapping techniques on the object. The most common applications of environment mapping are reflection and refraction.

10.1.3 Reflection

Simulating reflection is very simple: we compute the reflection direction from the incident view direction and the surface normal, then use the reflection direction to sample the cube texture. The code is as follows:

Shader "Chapter 10/ReflectionShader"
{
    
    
    Properties
    {
    
    
        _Color ("Color Tint", Color) = (1, 1, 1, 1)
        // 反射颜色
        _ReflectColor ("Reflection Color", Color) = (1, 1, 1, 1)
        // 反射程度
        _ReflectAmount ("Reflect Amount", Range(0, 1)) = 1
        // 模拟反射的环境映射纹理
        _Cubemap ("Reflection Cubemap", Cube) = "_Skybox" {
    
    }
    }
    SubShader
    {
    
    
        Tags {
    
     "RenderType"="Opaque" }
        LOD 100

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            #include "AutoLight.cginc"

            fixed4 _Color;
            fixed4 _ReflectColor;
            float _ReflectAmount;
            // 声明 Cubemap 使用 samplerCUBE
            samplerCUBE _Cubemap;

            struct appdata
            {
    
    
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
    
    
                float4 pos : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
                float3 worldViewDir : TEXCOORD2;
                float3 worldReflection : TEXCOORD3;
                SHADOW_COORDS(4)
            };

            v2f vert (appdata v)
            {
    
    
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.worldViewDir = UnityWorldSpaceViewDir(o.worldPos);
                // 使用内置函数 reflect 计算反射方向
                o.worldReflection = reflect(-o.worldViewDir, o.worldNormal);
                TRANSFER_SHADOW(o);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
    
    
                fixed3 worldNormal = normalize(i.worldNormal);
                fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
                fixed3 worldViewDir = normalize(i.worldViewDir);
                fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
                fixed3 diffuse = _LightColor0.rgb * _Color.rgb * max(0, dot(worldNormal, worldLightDir));
                // 对立方体纹理的采样需要使用CG的texCUBE 函数
                // 我们在采样时并没有对i.worldRefl进行归一化操作。这是因为,用于采样的参数仅仅是作为方向变量传递给texCUBE函数的,因此我们没有必要进行归一化
                fixed3 reflection = texCUBE(_Cubemap, i.worldReflection).rgb * _ReflectColor.rgb;
                UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
                // 使用_ReflectAmount来混合漫反射颜色和反射颜色,并和环境光照相加后返回, lerp 为线性插值函数
                fixed3 color = ambient + lerp(diffuse, reflection, _ReflectAmount) * atten;
                return fixed4(color, 1.0);
            }
            
            ENDCG
        }
    }
}

In the code above, we chose to compute the reflection direction in the vertex shader. We could instead compute it in the fragment shader, which gives a slightly finer result, but for the vast majority of objects the difference is negligible, so for performance reasons we compute it per vertex. The final effect is shown in the figure below.
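For comparison, here is a minimal sketch of the per-fragment variant; lighting is omitted to keep it short, and the shader name is made up for this illustration:

Shader "Chapter 10/ReflectionPerFragment"
{
    Properties
    {
        _Cubemap ("Reflection Cubemap", Cube) = "_Skybox" {}
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            samplerCUBE _Cubemap;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                // The reflection direction is now computed per fragment,
                // from the interpolated (re-normalized) normal and view direction
                fixed3 worldNormal = normalize(i.worldNormal);
                fixed3 worldViewDir = normalize(UnityWorldSpaceViewDir(i.worldPos));
                fixed3 worldRefl = reflect(-worldViewDir, worldNormal);
                return fixed4(texCUBE(_Cubemap, worldRefl).rgb, 1.0);
            }
            ENDCG
        }
    }
}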

[Figure: the final reflection effect]

10.1.4 Refraction

The physics of refraction is a bit more complex than reflection. We first met the definition of refraction in middle-school physics: when light passes obliquely from one medium (such as air) into another (such as glass), its direction of propagation generally changes. Given the angle of incidence, we can use Snell's law to compute the angle of refraction. When light traveling in medium 1 hits the surface at an angle θ1 to the surface normal, the angle θ2 between the refracted ray and the normal satisfies

η1 sin θ1 = η2 sin θ2

where η1 and η2 are the refractive indices (index of refraction) of the two media.
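For example, when light passes from air (η1 ≈ 1.0) into glass (η2 ≈ 1.5) at θ1 = 30°, we get sin θ2 = (1.0/1.5) × sin 30° ≈ 0.33, so θ2 ≈ 19.5°: the ray bends toward the normal as it enters the denser medium.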
Generally speaking, once we have the refraction direction, we use it directly to sample the cube texture, but this is not physically accurate. An accurate simulation of a transparent object would require computing refraction twice: once when light enters the object, and once when it exits. However, simulating the second refraction in real-time rendering is complicated, and simulating only the first one already looks "pretty much right". As mentioned before, the first rule of graphics is: "if it looks right, it is right". Therefore, in real-time rendering we usually simulate only the first refraction. The code is as follows:

Shader "Chapter 10/Refraction"
{
    
    
    Properties
    {
    
    
        _Color ("Color Tint", Color) = (1, 1, 1, 1)
        _RefractColor("Refraction Color", Color) = (1, 1, 1, 1)
        _RefractAmount("Refraction Amount", Range(0, 1)) = 1
        // 入射光线所在介质的折射率和折射光线所在介质的折射率之间的比值
        _RefractRatio("Refraction Ratio", Range(0.1, 1)) = 0.5
        _Cubemap ("Refraction Cubemap", Cube) = "_Skybox" {
    
    }
    }
    SubShader
    {
    
    
        Tags {
    
     "RenderType"="Opaque" }
        LOD 100

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            #include "AutoLight.cginc"

            fixed4 _Color;
            fixed4 _RefractColor;
            float _RefractAmount;
            float _RefractRatio;
            samplerCUBE _Cubemap;

            struct appdata
            {
    
    
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
    
    
                float4 pos : SV_POSITION;
                float3 worldNormal : TEXCOORD0;
                float3 worldPos : TEXCOORD1;
                float3 worldViewDir : TEXCOORD2;
                float3 worldRafraction : TEXCOORD3;
                SHADOW_COORDS(4)
            };

            v2f vert (appdata v)
            {
    
    
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.worldViewDir = UnityWorldSpaceViewDir(o.worldPos);
                // 使用内置函数 refract 计算折射角度
                o.worldRafraction = refract(-normalize(o.worldViewDir), normalize(o.worldNormal), _RefractRatio);
                TRANSFER_SHADOW(o);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
    
    
                fixed3 worldNormal = normalize(i.worldNormal);
                fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
                fixed3 worldViewDir = normalize(i.worldViewDir);
                fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
                fixed3 diffuse = _LightColor0.rgb * _Color.rgb * max(0, dot(worldNormal, worldLightDir));
                fixed3 refraction = texCUBE(_Cubemap, i.worldRafraction).rgb * _RefractColor.rgb;
                UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
                // 使用_ReflectAmount来混合漫反射颜色和反射颜色,并和环境光照相加后返回, lerp 为线性插值函数
                fixed3 color = ambient + lerp(diffuse, refraction, _RefractAmount) * atten;
                return fixed4(color, 1.0);
            }
            
            ENDCG
        }
    }
}

We used CG's refract function to compute the refraction direction. Its first parameter is the incident ray direction, which must be normalized; the second is the surface normal, which must also be normalized; the third is the ratio between the refractive index of the medium the incident ray is in and that of the medium the refracted ray enters. For example, if light travels from air into glass, this parameter is the ratio of the refractive index of air to that of glass, i.e. 1/1.5. The return value is the computed refraction direction, whose magnitude equals that of the incident ray.
The effect is as follows:
[Figure: the refraction effect]

10.1.5 Fresnel reflection

In real-time rendering, we often use Fresnel reflection to control the degree of reflection according to the viewing direction.
In layman's terms, Fresnel reflection describes an optical phenomenon: when light hits a surface, part of it is reflected, and part enters the object, where it is refracted or scattered. There is a certain ratio between the reflected and incident light, and this ratio can be computed with the Fresnel equations. A commonly used example: when you stand at the edge of a lake and look straight down at the water at your feet, it appears almost transparent; but when you look at the water in the distance, you can hardly see the underwater scene at all. This is the so-called Fresnel effect.
The real-world Fresnel equations are very complicated, but in real-time rendering we usually use approximations instead. One well-known approximation is the Schlick Fresnel approximation:

F(v, n) = F0 + (1 − F0)(1 − v · n)^5

where F0 is a reflectance coefficient that controls the strength of the Fresnel reflection, v is the viewing direction, and n is the surface normal.
Another, more widely used equation is the empirical Fresnel approximation:

F(v, n) = max(0, min(1, bias + scale × (1 − v · n)^power))

where bias, scale, and power are control parameters.
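As a sketch, the empirical version could replace the fresnel line in the shader below; the _Bias, _Scale, and _Power properties are hypothetical and would need to be declared in the shader:

// Hypothetical empirical-Fresnel replacement for the Schlick line in the shader below
fixed fresnel = saturate(_Bias + _Scale * pow(1 - dot(worldViewDir, worldNormal), _Power));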
The following is a Shader implementation of the Schlick Fresnel approximation:

Shader "Chapter 10/Fresnel"
{
    
    
    Properties
    {
    
    
        _Color ("Color Tint", Color) = (1, 1, 1, 1)
        _FresnelScale ("Fresnel Scale", Range(0, 1)) = 0.5
        _Cubemap ("Cubemap", Cube) = "_Skybox" {
    
    }
    }
    SubShader
    {
    
    
        Tags {
    
     "RenderType"="Opaque" }
        LOD 100

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"
            #include "Lighting.cginc"
            #include "AutoLight.cginc"

            fixed4 _Color;
            float _FresnelScale;
            samplerCUBE _Cubemap;

            struct appdata
            {
    
    
                float4 vertex : POSITION;
                float3 normal : NORMAL;
            };

            struct v2f
            {
    
    
                float4 pos : SV_POSITION;
                float3 worldPos : TEXCOORD0;
                float3 worldNormal : TEXCOORD1;
                float3 worldViewDir : TEXCOORD2;
                float3 worldReflection : TEXCOORD3;
                SHADOW_COORDS(4)
            };

            v2f vert (appdata v)
            {
    
    
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.worldPos = mul(unity_ObjectToWorld, v.vertex);
                o.worldNormal = UnityObjectToWorldNormal(v.normal);
                o.worldViewDir = UnityWorldSpaceViewDir(o.worldPos);
                o.worldReflection = reflect(-o.worldViewDir, o.worldNormal);
                TRANSFER_SHADOW(o);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
    
    
                fixed3 worldNormal = normalize(i.worldNormal);
                fixed3 worldViewDir = normalize(i.worldViewDir);
                fixed3 worldLightDir = normalize(UnityWorldSpaceLightDir(i.worldPos));
                fixed3 ambient = UNITY_LIGHTMODEL_AMBIENT.xyz;
                UNITY_LIGHT_ATTENUATION(atten, i, i.worldPos);
                fixed3 reflection = texCUBE(_Cubemap, i.worldReflection).rgb;
                // 使用Schlick菲涅耳近似等式来计算fresnel变量,并使用它来混合漫反射光照和反射光照
                fixed fresnel = _FresnelScale + (1-_FresnelScale)*pow(1-dot(worldViewDir, worldNormal), 5);
                fixed3 diffuse = _LightColor0.rgb * _Color.rgb * max(0, dot(worldNormal, worldLightDir));
                fixed3 color = ambient + lerp(diffuse, reflection, saturate(fresnel)) * atten;
                return fixed4(color, 1.0);
            }
            ENDCG
        }
    }
}

You can adjust the _FresnelScale parameter to see different effects: when _FresnelScale is 1, the object completely reflects the image in the Cubemap; when _FresnelScale is 0, it is a diffuse object with a rim-lighting effect:
[Figure: the Fresnel effect at different _FresnelScale values]

10.2 Rendering Textures

Modern GPUs allow us to render an entire 3D scene into an intermediate buffer, i.e. a render target texture (Render Target Texture, RTT), instead of the traditional frame buffer or back buffer.
A related technique is multiple render targets (Multiple Render Target, MRT), in which the GPU renders the scene into several render target textures at once, instead of rendering the whole scene once per target texture. Deferred rendering is one application of multiple render targets.
Unity defines a special texture type for render target textures: the render texture (Render Texture). There are generally two ways to use render textures in Unity:

  1. Create a render texture in the Project window, then set a camera's render target to this texture. The camera's rendering results are then written into the render texture in real time instead of being displayed on the screen
  2. Alternatively, use the GrabPass command or the OnRenderImage function to obtain the current screen image during screen post-processing. Unity puts the screen image into a render texture with the same resolution as the screen, and we can then treat it like an ordinary texture in custom Passes to achieve various screen effects (a minimal OnRenderImage sketch follows this list)
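For reference, a minimal sketch of the OnRenderImage approach, assuming a material whose shader does the actual post-processing (the component name is made up for this example):

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class SimplePostEffect : MonoBehaviour
{
    // Material whose shader implements the screen effect (an assumption of this sketch)
    public Material effectMaterial;

    // Unity calls this after the camera finishes rendering;
    // src holds the screen image as a render texture
    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        if (effectMaterial != null)
            Graphics.Blit(src, dest, effectMaterial);  // run the material's shader over the screen
        else
            Graphics.Blit(src, dest);                  // pass through unchanged
    }
}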

10.2.1 Mirror effect

In this section, we will learn how to use render textures to simulate a mirror. The principle is very simple: the shader takes a render texture as an input property, flips it horizontally, and displays it directly on the object:

Shader "Chapter 10/Mirror"
{
    
    
    Properties
    {
    
    
        _MainTex ("Texture", 2D) = "white" {
    
    }
    }
    SubShader
    {
    
    
        Tags {
    
     "RenderType"="Opaque" }
        LOD 100

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

            struct appdata
            {
    
    
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
    
    
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_ST;

            v2f vert (appdata v)
            {
    
    
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = TRANSFORM_TEX(v.uv, _MainTex);
                // 镜子里显示的图像都是左右相反的,所以需要对x轴进行翻转
                o.uv.x = 1-o.uv.x;
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
    
    
                fixed4 col = tex2D(_MainTex, i.uv);
                return col;
            }
            ENDCG
        }
    }
}

  1. Create a quad (Quad), adjust its position and size so it acts as the mirror, and assign it a material that uses the shader above
  2. Create a render texture in the Project view, then create a camera and assign the render texture to the camera's Target Texture. Adjust the camera's position, clipping planes, field of view, etc., so that the image it renders is the mirror image we want (this step can also be done from a script, as sketched below)
  3. Set the shader's Main Tex to the created render texture, and place a few objects in front of the mirror to observe the mirror effect

[Figure: the mirror effect]
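A minimal sketch of step 2 in code; the sizes and field names are arbitrary choices for this example:

using UnityEngine;

public class MirrorSetup : MonoBehaviour
{
    public Camera mirrorCamera;      // the extra camera that looks out of the mirror
    public Renderer mirrorRenderer;  // the quad's renderer using the mirror shader

    void Start()
    {
        // Create a render texture (width, height, depth-buffer bits)
        RenderTexture rt = new RenderTexture(512, 512, 16);
        // Redirect the camera's output into the texture instead of the screen
        mirrorCamera.targetTexture = rt;
        // Feed the texture to the mirror material's _MainTex
        mirrorRenderer.material.SetTexture("_MainTex", rt);
    }
}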

10.2.2 Glass effect

In Unity, we can also use a special Pass, GrabPass, to obtain the screen image. When we define a GrabPass in a shader, Unity draws the current screen image into a texture that we can access in subsequent Passes. Next we will use GrabPass to implement a simple glass effect.
The shader code is as follows:

Shader "Chapter 10/GlassRefraction"
{
    
    
    Properties
    {
    
    
        _MainTex ("Texture", 2D) = "white" {
    
    }
        // 方块纹理,用于玻璃反射效果
        _Cubemap ("Cubemap", Cube) = "_Skybox" {
    
    }
        // 法线纹理,用于玻璃折射效果
        _BumpMap ("Normal Map", 2D) = "bump" {
    
    }
        // 控制模拟折射时图像的扭曲程度
        _Distortion ("Distortion", Range(0, 100)) = 10
        // 控制折射比率
		_RefractAmount ("Refract Amount", Range(0.0, 1.0)) = 1.0
    }
    SubShader
    {
    
    
        // 需要把物体的渲染队列设置成透明队列
        // 这样才可以保证当渲染该物体时,所有的不透明物体都已经被绘制在屏幕上,从而获取正确的屏幕图像
        Tags {
    
     "Queue"="Transparent" "RenderType"="Opaque"  }
        LOD 100

        // 获取当前屏幕渲染纹理
		GrabPass {
    
     "_RefractionTex" }

        Pass
        {
    
    
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            #include "UnityCG.cginc"

			sampler2D _MainTex;
			float4 _MainTex_ST;
			sampler2D _BumpMap;
			float4 _BumpMap_ST;
			samplerCUBE _Cubemap;
			float _Distortion;
			fixed _RefractAmount;
            // 对应 GrabPass 指定的纹理名
			sampler2D _RefractionTex;
			float4 _RefractionTex_TexelSize;

            struct appdata
            {
    
    
                float4 vertex : POSITION;
				float3 normal : NORMAL;
                float4 tangent : TANGENT; 
                float2 texcoord : TEXCOORD0;
            };

            struct v2f
            {
    
    
				float4 pos : SV_POSITION;
				float4 scrPos : TEXCOORD0;
				float4 uv : TEXCOORD1;
				float4 TtoW0 : TEXCOORD2;  
			    float4 TtoW1 : TEXCOORD3;  
			    float4 TtoW2 : TEXCOORD4; 
            };

            v2f vert (appdata v)
            {
    
    
				v2f o;
				o.pos = UnityObjectToClipPos(v.vertex);
				
                // 通过调用内置的 ComputeGrabScreenPos 函数来得到对应被抓取的屏幕图像的采样坐标
				o.scrPos = ComputeGrabScreenPos(o.pos);
				
                // 计算 _MainTex 和_ BumpMap 的采样坐标,并把它们分别存储在一个float4类型变量的xy和zw分量中
				o.uv.xy = TRANSFORM_TEX(v.texcoord, _MainTex);
				o.uv.zw = TRANSFORM_TEX(v.texcoord, _BumpMap);

                // 由于我们需要在片元着色器中把法线方向从切线空间变换到世界空间下,以便对 Cubemap 进行采样,
                // 因此,我们需要在这里计算该顶点对应的从切线空间到世界空间的变换矩阵,
                // 并把该矩阵的每一行分别存储在TtoW0、TtoW1和TtoW2的xyz分量中
				float3 worldPos = mul(unity_ObjectToWorld, v.vertex).xyz;  
				fixed3 worldNormal = UnityObjectToWorldNormal(v.normal);  
				fixed3 worldTangent = UnityObjectToWorldDir(v.tangent.xyz);  
				fixed3 worldBinormal = cross(worldNormal, worldTangent) * v.tangent.w; 
				o.TtoW0 = float4(worldTangent.x, worldBinormal.x, worldNormal.x, worldPos.x);  
				o.TtoW1 = float4(worldTangent.y, worldBinormal.y, worldNormal.y, worldPos.y);  
				o.TtoW2 = float4(worldTangent.z, worldBinormal.z, worldNormal.z, worldPos.z);  
				
				return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
    
    
                float3 worldPos = float3(i.TtoW0.w, i.TtoW1.w, i.TtoW2.w);
				fixed3 worldViewDir = normalize(UnityWorldSpaceViewDir(worldPos));
				
				//获取切线空间下的法线向量
				fixed3 bump = UnpackNormal(tex2D(_BumpMap, i.uv.zw));	
				float2 offset = bump.xy * _Distortion * _RefractionTex_TexelSize.xy;
				i.scrPos.xy = offset * i.scrPos.z + i.scrPos.xy;
                // 对屏幕纹理采样获得折射颜色
				fixed3 refrCol = tex2D(_RefractionTex, i.scrPos.xy/i.scrPos.w).rgb;
				
                // 转换法线向量至世界坐标系
				bump = normalize(half3(dot(i.TtoW0.xyz, bump), dot(i.TtoW1.xyz, bump), dot(i.TtoW2.xyz, bump)));
				fixed3 reflDir = reflect(-worldViewDir, bump);
				fixed4 texColor = tex2D(_MainTex, i.uv.xy);
                // 对 Cubemap 采样获得反射颜色
				fixed3 reflCol = texCUBE(_Cubemap, reflDir).rgb * texColor.rgb;
				
				fixed3 finalColor = reflCol * (1 - _RefractAmount) + refrCol * _RefractAmount;
				
				return fixed4(finalColor, 1);
            }
            ENDCG
        }
    }
	FallBack "Diffuse"
}

[Figure: the glass refraction effect]
In the implementation above, we passed a string to GrabPass to name the texture into which the grabbed screen image is stored. In fact, GrabPass supports two forms:

  • Use GrabPass { } directly, then access the screen image in subsequent Passes through _GrabTexture (a minimal sketch of this form follows the list). However, when multiple objects in the scene grab the screen this way, performance suffers badly, because Unity performs a separate, expensive screen-grab operation for every object that uses it. On the other hand, each object gets its own screen image, depending on its render queue and the contents of the screen buffer when it is rendered
  • Use GrabPass { "TextureName" }. This also grabs the screen, but Unity performs the grab only once per frame, for the first object that uses the texture named TextureName, and the texture can be accessed in other Passes as well. This method is more efficient: no matter how many objects use the command, Unity grabs the screen only once per frame. But it also means that all objects share the same screen image. In most cases this is sufficient
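A minimal sketch of the unnamed form, written as a self-contained shader that simply tints the grabbed screen image (the shader name and the tint are made up for this illustration):

Shader "Chapter 10/GrabPassUnnamed"
{
    SubShader
    {
        Tags { "Queue"="Transparent" "RenderType"="Opaque" }

        // No texture name: the grabbed image is stored in the default _GrabTexture,
        // and Unity grabs the screen separately for every object using this shader
        GrabPass { }

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _GrabTexture;  // Unity's default name for the grabbed image

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 scrPos : TEXCOORD0;
            };

            v2f vert (appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                // Screen-space coordinates for sampling the grabbed texture
                o.scrPos = ComputeGrabScreenPos(o.pos);
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                fixed3 grab = tex2D(_GrabTexture, i.scrPos.xy / i.scrPos.w).rgb;
                return fixed4(grab * fixed3(1.0, 0.6, 0.6), 1.0);  // simple red tint
            }
            ENDCG
        }
    }
}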

10.2.3 Render Textures vs. GrabPass

The advantage of GrabPass is that it is easy to implement: only a few lines of shader code are needed to grab the screen.
In terms of efficiency, however, render textures often beat GrabPass, especially on mobile devices. With a render texture we can customize its size, and although this approach renders part of the scene a second time, we can reduce the cost of that second render by adjusting the camera's culling layers, or control whether the camera is enabled at all. By contrast, the image grabbed by GrabPass always has the same resolution as the display, which can put serious pressure on bandwidth on high-resolution devices. Moreover, although GrabPass does not re-render the scene, on mobile devices it often requires the CPU to read back the back buffer, which breaks the parallelism between CPU and GPU and is time-consuming; on some mobile devices it is not supported at all.
Unity introduced command buffers (Command Buffers) to let us extend Unity's rendering pipeline. Using a command buffer we can also achieve a screen-grab effect: copy the current image into a temporary render target texture after the opaque objects have been rendered, perform additional operations on it there (such as blurring), and finally hand the image to the objects that need it for processing and display. Command buffers also make many other effects possible; readers can find more in the official Unity manual.
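As an illustration, here is a minimal sketch of grabbing the screen with a command buffer in the built-in render pipeline; the texture name _GrabbedScreen and the component name are made up for this example:

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class CommandBufferGrab : MonoBehaviour
{
    private CommandBuffer buffer;

    void OnEnable()
    {
        buffer = new CommandBuffer { name = "Grab screen after opaques" };

        // Allocate a temporary render texture at camera resolution (-1, -1)
        int grabID = Shader.PropertyToID("_GrabbedScreen");
        buffer.GetTemporaryRT(grabID, -1, -1, 0, FilterMode.Bilinear);

        // Copy what has been rendered so far into the temporary texture
        buffer.Blit(BuiltinRenderTextureType.CurrentActive, grabID);

        // Expose it to all shaders under the name _GrabbedScreen
        buffer.SetGlobalTexture("_GrabbedScreen", grabID);

        // Run after opaque geometry and the skybox, before transparents
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.AfterSkybox, buffer);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.AfterSkybox, buffer);
    }
}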

10.3 Procedural textures

Procedural textures (Procedural Texture) are computer-generated images. We usually use a specific algorithm to create personalized patterns or very realistic natural elements, such as wood or stone. The advantage of procedural textures is that we can use all kinds of parameters to control their appearance, and these parameters are not limited to color: they can even describe entirely different kinds of patterns, which gives us much richer animation and visual effects.

10.3.1 Implementing simple procedural textures in Unity

Below we will use an algorithm to generate a polka-dot texture.
First import the open-source plug-in https://github.com/LMNRY/SetProperty, which makes it more convenient to change properties in edit mode.
Then we create a new script, ProceduralTextureGeneration, for generating the procedural texture:

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Run in edit mode so the texture updates in the editor
[ExecuteInEditMode]
public class ProceduralTextureGeneration : MonoBehaviour
{
    public Material material = null;
    private Texture2D m_generatedTexture = null;

    // Texture size
    [SerializeField, SetProperty("textureWidth")]
    private int m_textureWidth = 512;
    public int textureWidth {
        get {
            return m_textureWidth;
        }
        set {
            m_textureWidth = value;
            _UpdateMaterial();
        }
    }

    // Background color
    [SerializeField, SetProperty("backgroundColor")]
    private Color m_backgroundColor = Color.white;
    public Color backgroundColor {
        get {
            return m_backgroundColor;
        }
        set {
            m_backgroundColor = value;
            _UpdateMaterial();
        }
    }

    // Dot color
    [SerializeField, SetProperty("circleColor")]
    private Color m_circleColor = Color.yellow;
    public Color circleColor {
        get {
            return m_circleColor;
        }
        set {
            m_circleColor = value;
            _UpdateMaterial();
        }
    }

    // Blur factor, used to soften the circle edges
    [SerializeField, SetProperty("blurFactor")]
    private float m_blurFactor = 2.0f;
    public float blurFactor {
        get {
            return m_blurFactor;
        }
        set {
            m_blurFactor = value;
            _UpdateMaterial();
        }
    }

    // Blend two colors
    private Color _MixColor(Color color0, Color color1, float mixFactor) {
        Color mixColor = Color.white;
        mixColor.r = Mathf.Lerp(color0.r, color1.r, mixFactor);
        mixColor.g = Mathf.Lerp(color0.g, color1.g, mixFactor);
        mixColor.b = Mathf.Lerp(color0.b, color1.b, mixFactor);
        mixColor.a = Mathf.Lerp(color0.a, color1.a, mixFactor);
        return mixColor;
    }

    void _UpdateMaterial()
    {
        Debug.Log("Update Material");
        if (material != null) {
            m_generatedTexture = _GenerateProceduralTexture();
            material.SetTexture("_MainTex", m_generatedTexture);
        }
    }

    private Texture2D _GenerateProceduralTexture()
    {
        Texture2D proceduralTexture = new Texture2D(textureWidth, textureWidth);
        // Distance between circle centers
        float circleInterval = textureWidth / 4.0f;
        // Circle radius
        float radius = textureWidth / 10.0f;
        // Blur coefficient
        float edgeBlur = 1.0f / blurFactor;

        for (int w = 0; w < textureWidth; w++) {
            for (int h = 0; h < textureWidth; h++) {
                // Initialize with the background color
                Color pixel = backgroundColor;

                // Draw 9 circles, one by one
                for (int i = 0; i < 3; i++) {
                    for (int j = 0; j < 3; j++) {
                        // Compute the center of the current circle
                        Vector2 circleCenter = new Vector2(circleInterval * (i + 1), circleInterval * (j + 1));

                        // Distance from the current pixel to the circle center
                        float dist = Vector2.Distance(new Vector2(w, h), circleCenter) - radius;

                        // Soften the circle's edge
                        Color color = _MixColor(circleColor, new Color(pixel.r, pixel.g, pixel.b, 0.0f), Mathf.SmoothStep(0f, 1.0f, dist * edgeBlur));

                        // Blend with the color accumulated so far
                        pixel = _MixColor(pixel, color, color.a);
                    }
                }

                proceduralTexture.SetPixel(w, h, pixel);
            }
        }

        proceduralTexture.Apply();
        return proceduralTexture;
    }

    void Start()
    {
        if (material == null) {
            Renderer renderer = gameObject.GetComponent<Renderer>();
            if (renderer == null) {
                Debug.LogWarning("Cannot find a renderer.");
                return;
            }

            material = renderer.material;
        }
        _UpdateMaterial();
    }
}

Create a new cube and attach this script to it; the cube's texture can then be adjusted in edit mode. The effect is as follows:
[Figure: the generated polka-dot texture]

10.3.2 Unity's procedural materials

In Unity, there is a class of materials that specifically use procedural textures, called procedural materials (Procedural Materials). Procedural materials, and the procedural textures they use, are not created inside Unity; they are generated outside Unity with a program called Substance Designer. These files all have the .sbsar suffix, and we can drag them into a Unity project just like any other asset. When such a file is imported, Unity generates a procedural material asset (Procedural Material Asset) from it.
A big reason for the power of procedural textures is their variability: we can control a texture's appearance by adjusting its properties, and even generate textures that look completely different.


Origin blog.csdn.net/m0_37979033/article/details/130101416