Basic knowledge of writing Shaders in Unity

Hello everyone, I am A Zhao.
In this article, by writing the simplest shader by hand, I will introduce some basic knowledge of writing Shaders in Unity.

1. Basic Shader structure

Create a new shader, delete all of its content, and then enter the following:

Shader "testShader"
{
	Properties
	{

	}
	SubShader
	{
		Pass
		{

		}
	}
}

You will find that this Shader already works: create a new material, choose the shader you just wrote for it, and assign the material to a Cube. The Cube is displayed normally.

At this point the Cube is pure white, with no lighting or shadow.
Looking at the structure of the Shader above, we can see that:
1. The keyword Shader at the beginning is followed by a string, as in Shader "testShader". Here testShader is the Shader's name, which is used when searching for the Shader; you can use slashes to organize shaders into directories, for example Shader "azhao/testShader" (see the example after this list).
2. The Properties block declares the properties that the Shader exposes externally.
3. The SubShader block is a sub-shader. A Shader can contain multiple SubShaders and must contain at least one.
4. The Pass block is a semantic block; usually one Pass represents one render pass. A SubShader can contain multiple Passes and must contain at least one.
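
For example, a slash in the name puts the shader under a submenu in the material's shader dropdown; only the first line changes (a minimal sketch, body omitted):

Shader "azhao/testShader"
{
……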

2. External properties

The properties written in the Properties block are exposed on the material so that users can modify them.
The available types are:

1. Numeric types

(1) Float: a floating-point number
(2) Int: an integer
(3) Range: a value restricted to a range (shown as a slider)

2. Colors and vectors

(1) Color
(2) Vector
Both are four-component values written in the form (number, number, number, number).

3. Texture types

(1) 2D
(2) Cube
(3) 3D
All of these take a default value of the form "defaultColor" {}.
Code example:

Properties
{
	_floatVal("Float variable", Float) = 0
	_intVal("Int variable", Int) = 1
	_rangeVal("Value range", Range(0,1)) = 0

	_col("Color", Color) = (1,1,1,1)
	_vectVal("Vector", Vector) = (0,0,0,0)

	_2dTex("2D texture", 2D) = "white"{}
	_cubeTex("Cube texture", Cube) = "green"{}
	_3dTex("3D texture", 3D) = "black"{}
}

These properties then appear on the material in the Inspector.

As you can see, each property line has the form: variableName("display name", Type) = default value

If a declared property is to be used inside a Pass, it must be declared again inside the Pass with exactly the same name as in Properties. This is explained in the CG code section below.
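
As a minimal sketch of that correspondence (not a complete shader; the CGPROGRAM block itself is explained in the next section):

// In Properties:
_col("Color", Color) = (1,1,1,1)

// Inside the Pass's CGPROGRAM block, the same name must be declared again:
float4 _col;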

3. CG code and various definitions

1. CGPROGRAM structure

The mainstream shader languages today are HLSL, GLSL, and Cg. The three have a lot in common syntactically, so learning any one of them is fine, and Unity chose Cg as its shader language. To write Cg code, you need to mark where the Cg code starts and ends inside the Pass: it starts with CGPROGRAM, ends with ENDCG, and the Cg code goes in between.
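At this point the Pass contains an empty Cg block:

Pass
{
	CGPROGRAM

	ENDCG
}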

2. Vertex and fragment programs

After adding the empty start and end tags of the Cg block as in the previous step, you will find that the shader has problems, and Unity shows warnings in the Inspector.

The first warning roughly means that the current Shader is not supported and has no supported sub-shaders or fallback. This is because our Cg code is incomplete, so Unity does not know what we want to do.
The second warning says that we are using Cg code but have not specified the vertex and fragment programs.
The vertex and fragment programs are specified like this:
#pragma vertex <vertex function name>
#pragma fragment <fragment function name>
For example, I write it like this:

Pass
{
	CGPROGRAM
	#pragma vertex azhaoVert
	#pragma fragment azhaoFrag
	ENDCG
}

At this time, the shader still reports an error

This is because we declared the vertex and fragment programs but did not implement them.
The specific writing methods of these two programs will be explained below.

A digression: from the rendering pipeline process discussed earlier, we know that a basic shader should contain a vertex program and a fragment program, so the vertex/fragment form is the most basic and core way to write a Shader. Unity also provides other ways to write shaders, such as Surface shaders, but I do not recommend starting with them. They are forms that Unity wraps around the basic vertex/fragment programs; they can be very convenient to use, and you can get good results just by assigning values such as colors and normals, but I personally feel that the vertex/fragment form shows much more clearly why a shader produces a particular effect.

3. Reference library

Unity has prepared many things for us, such as commonly used functions, constants, transformation matrices, and so on. We can also write our own functions to reuse later. These pre-written pieces are usually placed in .cginc files. A library containing many commonly used functions is UnityCG.cginc, and such a file can be imported with #include "filename.cginc":

Pass
{
	CGPROGRAM
	#pragma vertex azhaoVert
	#pragma fragment azhaoFrag
	#include "UnityCG.cginc"
	ENDCG
}

4. Declare available variables

Earlier, we declared some external properties at the beginning of the shader:

Properties
{
	_floatVal("Float variable", Float) = 0
	_intVal("Int variable", Int) = 1
	_rangeVal("Value range", Range(0,1)) = 0

	_col("Color", Color) = (1,1,1,1)
	_vectVal("Vector", Vector) = (0,0,0,0)

	_2dTex("2D texture", 2D) = "white"{}
	_cubeTex("Cube texture", Cube) = "green"{}
	_3dTex("3D texture", 3D) = "black"{}
}

In order to use these variables in the Cg program, we need to declare them again inside the Cg block:

Pass
{
	CGPROGRAM
	#pragma vertex azhaoVert
	#pragma fragment azhaoFrag
	#include "UnityCG.cginc"

	float _floatVal;
	float _intVal;
	float _rangeVal;
	float4 _col;
	float4 _vectVal;
	sampler2D _2dTex;
	float4 _2dTex_ST;
	samplerCUBE _cubeTex;
	sampler3D _3dTex;

	ENDCG
}

A few points to note here:
1. No matter whether a variable is declared externally as Float, Int, Vector, or Color, inside the Cg code it is always a float; the only difference is how many dimensions the float has, such as float, float2, float3, float4.
2. When a float3 needs to be extended to a float4: if the float3 is a position, append 1 as the last component; if it is a direction vector, append 0.
3. Texture variables are all samplers; the only difference is the sampler type: sampler2D, samplerCUBE, sampler3D.
4. For a texture variable, if you want to read its tiling and offset parameters from the material, you need to declare a float4 whose name is the texture variable name plus _ST, such as _2dTex_ST above.

The xy components of this _ST variable hold the Tiling values, and zw hold the Offset values.
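
Applied to a UV coordinate, that looks like this (the same expression appears in the vertex program later):

// scale the UV by Tiling (xy), then add Offset (zw)
float2 uv = i.uv * _2dTex_ST.xy + _2dTex_ST.zw;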

4. Vertex program

A vertex program is a program with input and output structures, so before writing a vertex program, it is necessary to define the input and output structures.

1. Input structure

In general, the input structure of the vertex program is named appdata. It represents the data that can be read directly from the model, for example:

	struct appdata
	{
		float4 pos : POSITION; // vertex position
		float2 uv : TEXCOORD0; // uv1
		float2 uv2 : TEXCOORD1; // uv2
		float2 uv3 : TEXCOORD2; // uv3
		float2 uv4 : TEXCOORD3; // uv4
		float3 normal : NORMAL; // normal
		float4 tangent : TANGENT; // tangent
		float4 color : COLOR; // vertex color
	};

These data can generally be read directly from the model, but they do not necessarily all have values; for example, a model may not have uv2-uv4, may have no vertex colors, and so on. Note that TEXCOORD here is a register for UV coordinates, and up to 4 sets of UVs are supported.
After including UnityCG.cginc, you can directly use some predefined input structures:

struct appdata_base {
    float4 vertex : POSITION;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct appdata_tan {
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

struct appdata_full {
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float4 texcoord : TEXCOORD0;
    float4 texcoord1 : TEXCOORD1;
    float4 texcoord2 : TEXCOORD2;
    float4 texcoord3 : TEXCOORD3;
    fixed4 color : COLOR;
    UNITY_VERTEX_INPUT_INSTANCE_ID
};

But I still prefer to define the structure myself, because then I can add or remove the fields I need to read according to my own needs.

2. Output structure

The output structure is generally named v2f, which means vertex to fragment, because this structure is the output of the vertex program and also the input of the fragment program.

	struct v2f
	{
		float4 pos:SV_POSITION;
		float4 col:COLOR;
		float2 val1:TEXCOORD0;
		float3 val2:TEXCOORD1;
		float4 val3:TEXCOORD2;
		//……
	};

There are two points to note here:
1. The position in v2f uses the SV_POSITION semantic, while the position in the vertex input uses POSITION; SV_POSITION is the fixed semantic for the clip-space position that is passed to the fragment program.
2. TEXCOORD in v2f no longer represents UV coordinates. Each TEXCOORD register is a four-component vector that can store any custom data you want. For example, we can use TEXCOORD0 to store the UV coordinates, TEXCOORD1 to store the world-space normal, and TEXCOORD2 to store the world-space tangent:

	struct v2f
	{
		float4 pos:SV_POSITION;
		float4 col:COLOR;
		float2 uv:TEXCOORD0;
		float3 worldNormal:TEXCOORD1;
		float3 worldTangent:TEXCOORD2;

	};

3. Writing the vertex program

	v2f azhaoVert(appdata i)
	{
		v2f o;
		// transform the vertex position from object space to world space
		float4 worldPos = mul(unity_ObjectToWorld, i.pos);
		// transform the world-space position to view space
		float4 viewPos = mul(UNITY_MATRIX_V, worldPos);
		// transform the view-space position to clip space
		float4 clipPos = mul(UNITY_MATRIX_P, viewPos);
		o.pos = clipPos;
		o.col = i.color;
		o.uv = i.uv * _2dTex_ST.xy + _2dTex_ST.zw;
		o.worldNormal = UnityObjectToWorldNormal(i.normal);
		o.worldTangent = UnityObjectToWorldDir(i.tangent.xyz);
		return o;
	}

Explanation:
1. The vertex program takes the appdata structure as input and returns the v2f structure as output. Personally, I am used to naming the appdata parameter i (input) and the v2f variable o (output).
2. The vertex program mainly controls and changes the vertex positions of the model. The example above deliberately spells out, step by step, how a vertex position is transformed from object space to clip space. If we need to modify the vertex position, we can do it at any of these steps; if not, the transformation can be abbreviated as

float4 clipPos = mul(UNITY_MATRIX_MVP, i.pos);

or

float4 clipPos = UnityObjectToClipPos(i.pos);

3. Every variable defined in the v2f structure must be assigned a value in the vertex program.
4. In order to apply the texture's tiling and offset to the UV, I brought _2dTex_ST into the calculation; this can also be abbreviated as

o.uv = TRANSFORM_TEX(i.uv,_2dTex);
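
For reference, TRANSFORM_TEX is a macro defined in UnityCG.cginc that expands to essentially the same expression (quoted here from the built-in include files; the exact text may vary between Unity versions):

// Transforms 2D UV by scale/bias property
#define TRANSFORM_TEX(tex,name) (tex.xy * name##_ST.xy + name##_ST.zw)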

5. Fragment program

Write the simplest fragment program:

	half4 azhaoFrag(v2f o) : SV_Target
	{
		return half4(1,1,1,1);
	}

This fragment program takes v2f as its input structure and returns a half4 as output; SV_Target is the semantic that marks the fragment function's return value as the color written to the render target.
Since this is just the skeleton, the v2f structure does not take part in any calculation, and the function directly returns the color half4(1,1,1,1).
If we write it in more detail, for example using the texture and color properties from before, the fragment program can look like this:

	half4 azhaoFrag(v2f o) : SV_Target
	{
		half4 texCol = tex2D(_2dTex, o.uv);
		half3 finalCol = texCol.rgb * _col.rgb;
		return half4(finalCol, texCol.a);
	}

Here tex2D samples the 2D texture to get its color, which is multiplied by the _col color defined earlier, and the result is returned. In this way we can control the appearance of the model through a texture and a color.
With everything above in place, we have a reasonably complete shader. The whole shader code now looks like this:

Shader "testShader"
{
	Properties
	{
		_floatVal("Float variable",Float) = 0
		_intVal("Int variable",Int) = 1
		_rangeVal("Value range",Range(0,1)) = 0

		_col("Color",Color) = (1,1,1,1)
		_vectVal("Vector",Vector) = (0,0,0,0)

		_2dTex("2D texture",2D) = "white"{}
		_cubeTex("Cube texture",Cube) = "green"{}
		_3dTex("3D texture",3D) = "black"{}
	}
	SubShader
	{
		Pass
		{
			CGPROGRAM
			#pragma vertex azhaoVert
			#pragma fragment azhaoFrag
			#include "UnityCG.cginc"

			float _floatVal;
			float _intVal;
			float _rangeVal;
			float4 _col;
			float4 _vectVal;
			sampler2D _2dTex;
			float4 _2dTex_ST;
			samplerCUBE _cubeTex;
			sampler3D _3dTex;

			struct appdata
			{
				float4 pos : POSITION; // vertex position
				float2 uv : TEXCOORD0; // uv1
				float2 uv2 : TEXCOORD1; // uv2
				float2 uv3 : TEXCOORD2; // uv3
				float2 uv4 : TEXCOORD3; // uv4
				float3 normal : NORMAL; // normal
				float4 tangent : TANGENT; // tangent
				float4 color : COLOR; // vertex color
			};

			struct v2f
			{
				float4 pos:SV_POSITION;
				float4 col:COLOR;
				float2 uv:TEXCOORD0;
				float3 worldNormal:TEXCOORD1;
				float3 worldTangent:TEXCOORD2;

			};

			v2f azhaoVert(appdata i)
			{
				v2f o;
				// transform the vertex position from object space to world space
				float4 worldPos = mul(unity_ObjectToWorld, i.pos);
				// transform the world-space position to view space
				float4 viewPos = mul(UNITY_MATRIX_V, worldPos);
				// transform the view-space position to clip space
				float4 clipPos = mul(UNITY_MATRIX_P, viewPos);
				o.pos = clipPos;
				o.col = i.color;
				o.uv = i.uv * _2dTex_ST.xy + _2dTex_ST.zw;
				o.worldNormal = UnityObjectToWorldNormal(i.normal);
				o.worldTangent = UnityObjectToWorldDir(i.tangent.xyz);
				return o;
			}

			half4 azhaoFrag(v2f o) : SV_Target
			{
				half4 texCol = tex2D(_2dTex,o.uv);
				half3 finalCol = texCol.rgb*_col.rgb;
				return half4(finalCol, texCol.a);
			}

			ENDCG
		}
	}
}

6. Variable precision

For numeric variables, three precision levels are available, namely float, half, and fixed; both half and float were used in the example above. The only difference between them is precision, and everything else works the same; for example, the multi-dimensional forms are float4, half4, fixed4.
Briefly:
1. float: a 32-bit floating-point number with the highest precision, generally used for world-space coordinates and other calculations that need high precision.
2. half: 16 bits, with a value range of roughly [-60000, +60000] and about 3 decimal digits of precision, generally used for local coordinates, direction vectors, HDR colors, and so on.
3. fixed: 11 bits, with a value range of [-2, +2] and a precision of 1/256, generally used for colors and other low-precision calculations.
Note that fixed is rarely used nowadays; on much mobile hardware, half and fixed actually have the same precision.
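
As a rough sketch of typical choices:

float3 worldPos;   // world-space position: full float precision
half3 viewDir;     // normalized direction vector: half is usually enough
fixed4 baseColor;  // color in the 0-1 range: low precision is fine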

7. Backface culling

Cull sets the backface culling mode:
Cull Back: the default, culls back faces
Cull Front: culls front faces
Cull Off: no culling (both sides are rendered)
It is written directly in the SubShader or in a Pass, for example:

	SubShader
	{
		Cull Off
		Pass
		{
……

or

	SubShader
	{
		Pass
		{
			Cull Off
……

The culling mode can also be exposed as an enum property so that users can choose it on the material:

	Properties
	{
		[Enum(UnityEngine.Rendering.CullMode)]
		_cullMode("Cull mode",float) = 2
	}
	SubShader
	{
		Cull [_cullMode]
		Pass
		{
……


In this way the culling mode can be selected directly on the material. The default value is 2 because the enum value of CullMode.Back is 2.

8. Transparency test

AlphaTest is the transparency test: each fragment's alpha value is compared against a reference value, and the result of the comparison decides whether the fragment is displayed.
1. AlphaTest Off: no test, everything passes
2. AlphaTest Greater <value>: passes if alpha is greater than the value
3. AlphaTest GEqual <value>: passes if alpha is greater than or equal to the value
4. AlphaTest Less <value>: passes if alpha is less than the value
5. AlphaTest LEqual <value>: passes if alpha is less than or equal to the value
6. AlphaTest Equal <value>: passes if alpha is equal to the value
7. AlphaTest NotEqual <value>: passes if alpha is not equal to the value
8. AlphaTest Always: equivalent to Off, everything passes
9. AlphaTest Never: nothing passes
The same effect can also be written in the fragment program with clip(). For example,

AlphaTest GEqual 0.1

is equivalent to

clip(color.a - 0.1f)
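
As a sketch, a clip-based version of the fragment program from earlier could look like this (the _cutoff property is hypothetical and would need to be added to Properties and declared in the Cg block):

half4 azhaoFrag(v2f o) : SV_Target
{
	half4 texCol = tex2D(_2dTex, o.uv);
	// discard fragments whose alpha is below the cutoff,
	// which behaves like AlphaTest GEqual _cutoff
	clip(texCol.a - _cutoff);
	half3 finalCol = texCol.rgb * _col.rgb;
	return half4(finalCol, texCol.a);
}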

9. Translucent Blending

When we need to render with translucent blending, several things have to be done:
1. Turn off depth writing:

ZWrite Off

2. Set the render queue to a value above 2500 (the transparent range):

Tags{"Queue" = "Transparent"}

3. Keep the computed alpha value within the 0-1 range:

saturate(alpha)

4. Use the Blend command to control how colors are blended (a combined setup is sketched after the lists below).
The format is: Blend SrcFactor DstFactor
where
SrcFactor is the source factor
DstFactor is the destination factor
The available blend factors are (listed here for illustration, not in enum order):
1. One: the value one; uses the source or destination color fully
2. Zero: the value zero; the corresponding color contributes nothing
3. SrcColor: multiply by the source color
4. SrcAlpha: multiply by the source alpha
5. SrcAlphaSaturate: multiply by the smaller of the source alpha and (1 - destination alpha)
6. DstColor: multiply by the destination color
7. DstAlpha: multiply by the destination alpha
8. OneMinusSrcColor: multiply by (1 - source color)
9. OneMinusSrcAlpha: multiply by (1 - source alpha)
10. OneMinusDstColor: multiply by (1 - destination color)
11. OneMinusDstAlpha: multiply by (1 - destination alpha)
Common blending effects:
1. Traditional translucency

Blend SrcAlpha OneMinusSrcAlpha

2. Premultiplied translucency

Blend One OneMinusSrcAlpha

3. Additive

Blend One One

4. Soft additive

Blend OneMinusDstAlpha One
Blend SrcAlpha One

5. Multiply

Blend DstColor Zero

6. 2x multiply

Blend DstColor SrcColor
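
Putting these pieces together, a typical setup for traditional translucency looks something like this (a sketch; the complete shader below exposes the factors as properties instead):

	SubShader
	{
		Tags{"Queue" = "Transparent"}
		ZWrite Off
		Blend SrcAlpha OneMinusSrcAlpha
		Pass
		{
……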

Like the culling mode shown earlier, these blend factor enums can also be exposed as properties for users to choose on the material.
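
For example (these are the same properties used in the complete shader below):

	Properties
	{
		[Enum(UnityEngine.Rendering.BlendMode)]
		_blend1("Source factor",float) = 0

		[Enum(UnityEngine.Rendering.BlendMode)]
		_blend2("Destination factor",float) = 0
	}
	SubShader
	{
		Blend [_blend1] [_blend2]
		Pass
		{
……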

10. Summary

Finally, here is the complete shader:

Shader "testShader"
{
	Properties
	{
		_floatVal("Float variable",Float) = 0
		_intVal("Int variable",Int) = 1
		_rangeVal("Value range",Range(0,1)) = 0

		_col("Color",Color) = (1,1,1,1)
		_vectVal("Vector",Vector) = (0,0,0,0)

		_2dTex("2D texture",2D) = "white"{}
		_cubeTex("Cube texture",Cube) = "green"{}
		_3dTex("3D texture",3D) = "black"{}
		[Enum(UnityEngine.Rendering.CullMode)]
		_cullMode("Cull mode",float) = 2

		[Enum(UnityEngine.Rendering.BlendMode)]
		_blend1("Source factor",float) = 0

		[Enum(UnityEngine.Rendering.BlendMode)]
		_blend2("Destination factor",float) = 0
	}
	SubShader
	{		
		Cull [_cullMode]
		ZWrite Off
		Tags{"Queue" = "Transparent"}
		Blend [_blend1] [_blend2]
		Pass
		{			
			CGPROGRAM
			#pragma vertex azhaoVert
			#pragma fragment azhaoFrag
			#include "UnityCG.cginc"

			float _floatVal;
			float _intVal;
			float _rangeVal;
			float4 _col;
			float4 _vectVal;
			sampler2D _2dTex;
			float4 _2dTex_ST;
			samplerCUBE _cubeTex;
			sampler3D _3dTex;

			struct appdata
			{
				float4 pos : POSITION; // vertex position
				float2 uv : TEXCOORD0; // uv1
				float2 uv2 : TEXCOORD1; // uv2
				float2 uv3 : TEXCOORD2; // uv3
				float2 uv4 : TEXCOORD3; // uv4
				float3 normal : NORMAL; // normal
				float4 tangent : TANGENT; // tangent
				float4 color : COLOR; // vertex color
			};

			struct v2f
			{
				float4 pos:SV_POSITION;
				float4 col:COLOR;
				float2 uv:TEXCOORD0;
				float3 worldNormal:TEXCOORD1;
				float3 worldTangent:TEXCOORD2;

			};

			v2f azhaoVert(appdata i)
			{
				v2f o;
				// transform the vertex position from object space to world space
				float4 worldPos = mul(unity_ObjectToWorld, i.pos);
				// transform the world-space position to view space
				float4 viewPos = mul(UNITY_MATRIX_V, worldPos);
				// transform the view-space position to clip space
				float4 clipPos = mul(UNITY_MATRIX_P, viewPos);
				o.pos = clipPos;
				o.col = i.color;
				o.uv = i.uv * _2dTex_ST.xy + _2dTex_ST.zw;
				o.worldNormal = UnityObjectToWorldNormal(i.normal);
				o.worldTangent = UnityObjectToWorldDir(i.tangent.xyz);
				return o;
			}

			half4 azhaoFrag(v2f o) : SV_Target
			{
				half4 texCol = tex2D(_2dTex,o.uv);
				half3 finalCol = texCol.rgb*_col.rgb;
				return half4(finalCol, texCol.a);
			}

			ENDCG
		}
	}
}

Origin blog.csdn.net/liweizhao/article/details/130171392