Detailed introduction to Unity_Shader

1. Create Shader

In Unity you can create several kinds of shader assets, including the Standard Surface Shader, Unlit Shader, Image Effect Shader, Compute Shader, and Ray Tracing Shader

Standard Shader : Unity's built-in standard shader. It supports features such as specular highlights, transparency, and normal maps, and can represent materials such as metal, plastic, wood, and skin. It also supports lighting, shadows, reflections, refraction, fog, and so on.

Unlit Shader : A shader that is not affected by lighting, suitable for simple 2D scenes and 2D games, or for performance optimization. It is commonly used for things like UI and lines; it is relatively lightweight and widely used on mobile devices.
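For a sense of what "lightweight" means here, a minimal unlit shader can be just a flat color. The sketch below (a hypothetical "Custom/MinimalUnlit", trimmed down from Unity's Unlit template) ignores all lighting in the scene:

```shaderlab
Shader "Custom/MinimalUnlit"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Color;

            float4 vert (float4 vertex : POSITION) : SV_POSITION
            {
                // Transform the vertex to clip space; no lighting is involved
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag () : SV_Target
            {
                // Output the flat color regardless of the lights in the scene
                return _Color;
            }
            ENDCG
        }
    }
}
```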

Image Effect Shader: Used for post-processing the image after the scene has been rendered, for effects such as blur, color adjustment, and edge detection

Compute Shader: A shader that runs on the GPU and can be used for high-performance computing tasks

Ray Tracing Shader: A shader that implements ray tracing, achieving high-quality reflections, refraction, shadows, etc.

Of these, the first two are the most widely used.

2. Write a Shader

Create a new Standard Surface Shader and analyze it in chunks:

Shader "Custom/MyFirstShader"
{
    //The Properties block declares the shader's property variables. This shader has four: a color, a 2D texture, smoothness, and metallic
    Properties
    {
        //Color property
        _Color ("Color", Color) = (1,1,1,1)
        //2D texture property
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        //Smoothness
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        //Metallic
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }

     //This is a sub-shader; it defines the render type and LOD used when rendering this shader
    SubShader
    {
        
        Tags { "RenderType"="Opaque" }

        //Defines the LOD level of this sub-shader, which determines the shader quality used when rendering distant objects.
        LOD 200


        //This block contains the shader program, written in Cg/HLSL
        CGPROGRAM

        // Specifies a standard surface shader and enables shadows from all light types
        #pragma surface surf Standard fullforwardshadows

        // Target shader model 3.0
        #pragma target 3.0

         //Declare a sampler used to sample this shader's main texture
        sampler2D _MainTex;

        //Define an input struct containing the data passed from the geometry to the shader.
        struct Input
        {
            float2 uv_MainTex;
        };

        half _Glossiness;

        half _Metallic;

        fixed4 _Color;

        // These two lines declare an instancing buffer, allowing the shader to render large numbers of instances
        UNITY_INSTANCING_BUFFER_START(Props)
        UNITY_INSTANCING_BUFFER_END(Props)

        //This function uses the incoming geometry data and shader variables to compute the surface output
        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            // Sample the color from the main texture and multiply it by the color defined in the shader
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;

            //Set the surface outputs: albedo, metallic, smoothness, and alpha
            o.Albedo = c.rgb;
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    //If this shader cannot be used, Unity's built-in "Diffuse" shader is used instead, so the scene can still render in any situation.
    FallBack "Diffuse"
}

As you can see, the shader consists of two main parts.

1: Properties

A Shader can contain multiple properties, which can be exposed in the Inspector window. Common properties include:

  • _Color : Color attribute, which can be used to define the basic color of the material

In Unity, a color is represented by four components, RGBA, each with a value between 0 and 1

R: intensity of red

G: Intensity of green

B: Intensity of blue

A: transparency, from 0 (fully transparent) to 1 (fully opaque)
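Colors authored as 8-bit values (0-255 per channel) map to this 0-1 range by dividing by 255. A small illustration (plain Python, not Unity API):

```python
# Convert an 8-bit RGBA color (0-255 per channel) to Unity's 0-1 range.
def to_unity_color(r, g, b, a=255):
    return tuple(round(c / 255.0, 3) for c in (r, g, b, a))

# A half-transparent orange:
print(to_unity_color(255, 128, 0, 128))  # (1.0, 0.502, 0.0, 0.502)
```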

  • _MainTex: Texture attribute, which can be used to define the main texture of the material. The common texture maps are as follows

_BumpMap: Sets the normal map

_MetallicGlossMap: Sets the metallic and smoothness map

_ParallaxMap: Sets the parallax (height) map

_OcclusionMap: Sets the occlusion map.

_EmissionMap: Sets the emission map.

_DetailMask: A mask map for detail textures.

Take _MainTex as an example. In this example, our main texture is declared as follows:

 _MainTex ("Albedo (RGB)", 2D) = "white" {}

Albedo (RGB): the display name. Albedo represents the base color of the object, and RGB means the texture carries color information in the RGB channels

2D means this is a 2D texture

"white" is the default texture (a built-in plain white texture)

  • _Glossiness: Smoothness attribute, which can be used to control the surface smoothness of the material

Glossiness is an attribute used to control the glossiness of an object's surface, with a value ranging from 0 to 1, with 0 being very rough (matte) and 1 being very smooth (high gloss).

  • _Metallic: metalness attribute, which can be used to control the metalness of the material

Both metalness and glossiness can be adjusted in the range of 0-1

  • _BumpMap: The bump map attribute, which can be used to define the bump level of the material
    _BumpMap ("Normal Map", 2D) = "bump" {}
    _BumpScale ("Normal Scale", Range(0, 1)) = 1.0
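With the bump properties declared, the surface shader must also declare matching variables and unpack the map in surf. A sketch (reusing the _MainTex/_Color setup from the first example; uv_BumpMap is the naming convention by which Unity passes the map's UVs):

```shaderlab
sampler2D _BumpMap;
half _BumpScale;

struct Input
{
    float2 uv_MainTex;
    float2 uv_BumpMap; // Unity fills this with the normal map's UVs
};

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // UnpackNormal decodes the texture into a tangent-space normal;
    // scaling xy exaggerates or flattens the bumps
    fixed3 n = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
    n.xy *= _BumpScale;
    o.Normal = n;
    o.Alpha = c.a;
}
```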

  • _Parallax: Parallax map attribute, which can be used to define the parallax effect of the material
        _ParallaxMap ("Parallax Map", 2D) = "white" {}
        _Parallax ("Parallax", Range(0, 0.2)) = 0.05

  • _EmissionColor: Self-illumination color attribute, which can be used to define the self-illumination color of the material
_EmissionColor ("Emission Color", Color) = (0, 0, 0, 0)

  • _RimColor: The edge color attribute, which can be used to define the color of the edge of the material
  • _RimPower: edge brightness attribute, which can be used to control the brightness of the edge of the material
        _RimPower ("Rim Power", Range(0.1, 10.0)) = 3.0
        _RimColor ("Rim Color", Color) = (1,1,1,1)

Add RimPower and RimColor in Properties to control the color and brightness of the edge

After adding the rim properties in Properties, they also need to be used in the sub-shader
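A common way to do that in a surface shader is to brighten surfaces that face away from the viewer. The sketch below (assuming the _RimColor and _RimPower properties above, plus the _MainTex/_Color setup from the first example) follows that pattern:

```shaderlab
fixed4 _RimColor;
half _RimPower;

struct Input
{
    float2 uv_MainTex;
    float3 viewDir; // Unity fills this with the per-pixel view direction
};

void surf (Input IN, inout SurfaceOutputStandard o)
{
    fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
    o.Albedo = c.rgb;
    // The rim factor approaches 1 where the surface faces away from the viewer
    half rim = 1.0 - saturate(dot(normalize(IN.viewDir), o.Normal));
    // _RimPower sharpens the falloff; the rim is emitted as emission
    o.Emission = _RimColor.rgb * pow(rim, _RimPower);
    o.Alpha = c.a;
}
```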

  • _OutlineColor: Outline color attribute, which can be used to define the outline color of the material
  • _OutlineWidth: Outline width attribute, which can be used to define the outline width of the material

For example, they can be set like this:

Properties
    {
        // Outline width
        _OutlineWidth ("Outline Width", Range(0, 0.1)) = 0.01
        // Outline color
        _OutlineColor ("Outline Color", Color) = (0,0,0,1)
    }

represents the width and color of the outline

2: SubShader (sub-shader)

In the Shader file, every property declared in Properties that the shader code uses must also be declared again inside the sub-shader, with a name and type matching the property definition (as was done above for the texture, color, smoothness, and metallic values).

The sub-shader's program is written inside a CGPROGRAM block using Cg/HLSL, the shading language Unity uses for writing GPU programs.

Similarly, a Shader can contain multiple sub-shaders; Unity uses the first one the hardware supports.

Let's take an edge-outline example to explain the role of the sub-shader in detail:

Shader "Custom/Outline" {
    Properties {
        _OutlineColor ("Outline Color", Color) = (1,1,1,1)
        _OutlineWidth ("Outline Width", Range(0, 0.1)) = 0.01
    }

    SubShader {
        Tags { "RenderType"="Opaque" }

        Pass {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            float4 _OutlineColor;
            float _OutlineWidth;

            struct appdata {
                float4 vertex : POSITION;
            };
            struct v2f {
                float4 vertex : SV_POSITION;
            };

            v2f vert (appdata v) {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);

                // Expand the vertex position by the outline width in screen space
                o.vertex.xy += _OutlineWidth * o.vertex.w * UnityObjectToClipPos(float4(1, 1, 0, 0));

                return o;
            }

            fixed4 frag (v2f i) : SV_Target {
                // If the pixel is not on the edge, make it transparent
                if (ddx(i.vertex.xy) != 0 || ddy(i.vertex.xy) != 0) {
                    discard;
                }

                // Otherwise, set the pixel color to the outline color
                return _OutlineColor;
            }
            ENDCG
        }
    }

    FallBack "Diffuse"
}

In this code, we define two properties: one is the width of the outline, and the other is its color.

Then in Tags we chose RenderType (the rendering type). There are in fact many kinds of tags to choose from in a Shader, and several tags can be combined in one Tags block, for example:

Tags {
    "Queue"="Transparent"
    "RenderType"="Transparent"
}

(Each tag takes a single value; "RenderType" can also be given a custom value of your own.)

There are many kinds of tags and related settings. Common ones include:

  rendering type (RenderType), rendering queue (Queue), light mode (LightMode), and the render state commands (Cull, ZTest, and so on)

And RenderType has the following values:

Opaque: for opaque objects and materials

Transparent: Used for transparent objects and materials.

TransparentCutout: Used for objects with transparency where pixels are cut off at an alpha threshold.

Background: Used for background images, generally opaque.

Overlay: Used to overlay materials, usually used to add halo or other effects.

TreeOpaque: Used for opaque parts of tree objects.

TreeTransparentCutout: Used for the transparent part of the tree object.

TreeBillboard: Billboard section for tree objects.

Queue is to specify the drawing order and priority, as follows:

Background: the background, usually the sky or other background elements

Geometry: General geometric objects, including most objects in the scene

AlphaTest (alpha-tested cutout objects), Transparent (alpha-blended objects), and Overlay (overlay effects)

LightMode relates to the rendering path. There are two main paths: forward rendering and deferred rendering.

In forward rendering, each object is rendered separately and each pixel may be lit multiple times, once per light; this path is suitable for mobile devices and low-end PCs.

In deferred rendering, all geometry is rendered once first and its surface information is stored in the G-buffer; lighting is then computed from the G-buffer. This method demands a more capable GPU.

The render state commands control the state of rendering, as follows:

Cull: face culling; the three modes Back, Front, and Off are available

ZTest: depth test mode; options include Less, Greater, LEqual, GEqual, Equal, NotEqual, and Always

ZWrite: depth writing; On or Off

Blend: the blend mode; factors such as Zero, One, SrcColor, DstColor, SrcAlpha, and OneMinusSrcAlpha can be chosen

Offset: depth offset; takes Factor and Units values
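These commands are written directly inside a SubShader or Pass, next to the Tags. A sketch of a typical setup for a transparent pass (the CGPROGRAM body is omitted):

```shaderlab
SubShader
{
    Tags { "Queue"="Transparent" "RenderType"="Transparent" }

    Pass
    {
        Cull Back                       // discard triangles facing away from the camera
        ZTest LEqual                    // draw only if not behind what is already there
        ZWrite Off                      // transparent surfaces usually skip depth writes
        Blend SrcAlpha OneMinusSrcAlpha // standard alpha blending
        // ... CGPROGRAM block goes here ...
    }
}
```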

Then, in the sub-shader, we define a Pass. A Pass can be thought of as a bridge between us and the GPU: it contains a set of shader programs, render state, and other parameters, and represents one rendering operation.

First:

    #pragma vertex vert
    #pragma fragment frag

These two directives specify which function is used as the vertex shader and which as the fragment (pixel) shader

The vertex shader computes the final position of each vertex, along with data such as texture coordinates and color

The pixel shader calculates the final color value for each pixel

In addition to vertex shaders and pixel shaders, there are geometry shaders (Geometry Shader), hull shaders (Hull Shader) and domain shaders (Domain Shader) for tessellation, and compute shaders (Compute Shader)

The declaration form is #pragma + stage name + your function's name

Next:

    float4 _OutlineColor;
    float _OutlineWidth;

These re-declare the property values defined in Properties: because the shader code here uses them, they must be declared again inside the CGPROGRAM block

then:

struct appdata {
                float4 vertex : POSITION;
            };

This declares a structure called appdata, which contains a float4 variable named vertex. The POSITION semantic indicates that this variable holds the vertex position. Real vertex data may contain more than the position: normals, colors, and so on can also be declared in this structure, for example:

struct appdata {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 color : COLOR;
};

then:

struct v2f {
                float4 vertex : SV_POSITION;
            };

This structure is similar to the previous one, but here the SV_POSITION semantic marks the clip-space position of the vertex as computed by the vertex shader

continue:

v2f vert (appdata v) {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.vertex.xy += _OutlineWidth * o.vertex.w * UnityObjectToClipPos(float4(1, 1, 0, 0));

                return o;
            }

First of all, this is a vertex shader function named vert whose return type is v2f. It accepts a parameter of the appdata type; since we defined the appdata structure earlier, the parameter carries the stored vertex data.

First, an instance o of the v2f type is created to hold the vertex shader's output. UnityObjectToClipPos() is a Unity function that converts a point from object space into clip space. Here we use it to convert the incoming vertex position from object space to clip space, store the result in o.vertex, and return it from vert().

Here it is worth asking: why do we need to transform into clip space at all?

In 3D graphics, vertices in a three-dimensional coordinate system must be projected onto a two-dimensional plane; this process is called projection, and in graphics it is carried out in clip space. Clip space corresponds to a viewing frustum, somewhat like a camera lens: the apex of the frustum is the camera's position, and when an object in the scene enters this frustum it gets rendered on the screen.
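The projection itself can be sketched outside Unity. The Python below (a generic symmetric perspective frustum, not Unity's exact projection matrix) multiplies a point by a projection matrix and then performs the perspective divide; points whose normalized coordinates land in [-1, 1] are inside the frustum:

```python
import math

# A conceptual sketch (not Unity code) of what UnityObjectToClipPos leads up to:
# multiply a position by a projection matrix, then divide by w.
# The matrix below is a generic OpenGL-style symmetric frustum.

def perspective_matrix(fov_deg, aspect, near, far):
    """Build a simple perspective projection matrix (row-major)."""
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def project(point, m):
    """Transform (x, y, z) into clip space, then perspective-divide by w."""
    x, y, z = point
    clip = [m[r][0] * x + m[r][1] * y + m[r][2] * z + m[r][3] for r in range(4)]
    w = clip[3]
    return (clip[0] / w, clip[1] / w, clip[2] / w)  # normalized device coordinates

m = perspective_matrix(60.0, 16.0 / 9.0, 0.1, 100.0)
# A point 5 units in front of the camera (this convention looks down -z):
ndc = project((1.0, 1.0, -5.0), m)
print(ndc)
```

For this point, the x and y components fall inside [-1, 1], so it would appear on screen.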

last step:

fixed4 frag (v2f i) : SV_Target {
                // If the pixel is not on the edge, make it transparent
                if (ddx(i.vertex.xy) != 0 || ddy(i.vertex.xy) != 0) {
                    discard;
                }

                // Otherwise, set the pixel color to the outline color
                return _OutlineColor;
            }
            ENDCG

This is a function called frag. Its parameter is the v2f structure produced by the vertex shader, that is, the clip-space position of the vertex. The return type is fixed4, a vector of four fixed-point numbers. Representing colors and similar data with fixed-point rather than floating-point numbers can improve performance, because fixed-point values can be processed faster on some hardware while still providing enough precision for colors.

The : SV_Target semantic means the returned color is written to the current render target.

if (ddx(i.vertex.xy) != 0 || ddy(i.vertex.xy) != 0) {
                    discard;
                }

This code judges whether the pixel is on an edge; if not, discard skips drawing the pixel, so that only the edge is drawn. ddx and ddy are built-in HLSL functions that compute the screen-space gradient of a value.
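Conceptually, ddx and ddy return how a value changes between horizontally and vertically adjacent pixels; the GPU shades fragments in 2x2 blocks, so these differences come almost for free. A CPU sketch of the idea using forward differences (value() is a made-up per-pixel quantity standing in for whatever the shader computes):

```python
# Approximate ddx/ddy on the CPU with forward differences over a pixel grid.
def value(x, y):
    return 0.5 * x + 2.0 * y  # varies across the screen, like i.vertex.xy does

def ddx(x, y):
    return value(x + 1, y) - value(x, y)  # change between horizontal neighbors

def ddy(x, y):
    return value(x, y + 1) - value(x, y)  # change between vertical neighbors

print(ddx(10, 10))  # 0.5
print(ddy(10, 10))  # 2.0
```

Note that because the screen-space position changes between every pair of adjacent pixels, a test like ddx(i.vertex.xy) != 0 is better read as an illustration of these gradient functions than as robust, production-ready edge detection.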


Origin blog.csdn.net/leikang111/article/details/130529044