Unity Shader Concepts (Learning Notes)

The meaning of Shader

A shader is a term from computer graphics referring to a set of instructions used by graphics hardware during rendering to calculate the color or shading of an image. More recently, shaders have also been used for special effects and video post-processing. In layman's terms, a shader tells the computer how to draw an object in a specific way.

Programmers apply shaders to the programmable pipeline of a graphics processing unit (GPU) to implement 3D applications. Such graphics processors differ from traditional fixed-pipeline processors, bringing greater flexibility and adaptability to GPU programming. The old fixed pipeline could only perform geometric transformations and pixel shading calculations; a programmable pipeline can also manipulate the position, hue, saturation, brightness, and contrast of all pixels, vertices, and textures, and draw the image in real time. Shaders can produce effects such as blur, specular highlights, volumetric lighting, depth of field, cel shading, posterization, distortion, bump mapping, edge detection, motion detection, and more.

Brief Description
A shader is a simple program that describes the characteristics of a vertex or pixel. A vertex shader describes the attributes of a vertex (position, texture coordinates, color, etc.), while a pixel shader describes the characteristics of a pixel (color, z-depth, and alpha value).

Basic graphics pipeline
The basic graphics pipeline for shaders looks like this:
• The central processing unit (CPU) sends instructions (compiled shader programs) and geometry data to the graphics processing unit (GPU) located inside the graphics card.
• Vertex shaders perform geometric transformations.
• If a geometry shader is present on the graphics processor and enabled, it modifies some of the geometry in the scene.
• If a tessellation shader is present on the graphics processor and enabled, the geometry in the scene is tessellated (subdivided).
• The calculated geometry is triangulated (divided into triangles).
• Triangles are decomposed into 2 × 2 blocks of pixels.
• Blocks of pixels are modified via fragment shaders.
• A depth test is performed; fragments that pass are written to the screen and may be blended into the framebuffer.

Vertex transformation → Primitive assembly and rasterization → Fragment texture mapping and shading → Writing to frame buffer
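
To make these stages concrete, here is a minimal sketch of a Unity vertex/fragment shader (the shader name and the _Tint property are illustrative, not from the original post). The vert function corresponds to the vertex-transformation stage above, and the frag function to the fragment-shading stage:

Shader "Unlit/MinimalExample"
{
    Properties
    {
        _Tint ("Tint", Color) = (1,1,1,1)
    }
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            fixed4 _Tint;

            struct appdata { float4 vertex : POSITION; };
            struct v2f { float4 pos : SV_POSITION; };

            // Vertex stage: transform each vertex from object space to clip space.
            v2f vert (appdata v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                return o;
            }

            // Fragment (pixel) stage: output a color for each rasterized fragment.
            fixed4 frag (v2f i) : SV_Target
            {
                return _Tint;
            }
            ENDCG
        }
    }
}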

Types of Shaders

By pipeline, shaders are generally divided into fixed rendering pipeline and programmable rendering pipeline shaders. A fixed rendering pipeline has fixed functionality: for example, the algorithms for the refraction and reflection of light on an object's surface are built in and cannot be modified. We can only configure these functions, such as turning reflection or fog effects on or off. Because the pipeline's functionality is fixed, a program cannot take finer or freer control over the appearance of object details, and many desired visual effects are impossible to achieve. Therefore, current graphics cards all use programmable rendering pipelines: the stages that used to be fixed and unmodifiable can now be changed through programming. With this greater freedom, we can achieve many more of the special effects we want.

2D shaders
2D shaders work with digital images, also called textures, whose pixels the shader can modify. 2D shaders can also participate in the rendering of 3D graphics. Currently there is only one type of 2D shader called "Pixel Shader".
Pixel shader
A pixel shader, also called a fragment shader, computes the color and other properties of a "fragment", which usually corresponds to an individual pixel. The simplest pixel shaders output only a color value; complex pixel shaders can have multiple inputs and outputs [4]. A pixel shader can always output the same color, or it can take lighting into account, do bump mapping, generate shadows and highlights, and achieve translucency and other effects. Pixel shaders can also modify the depth of a fragment and can output multiple colors to multiple render targets.
In 3D graphics, a single pixel shader cannot achieve very complex effects on its own, because it operates only on individual pixels and has no information about the rest of the scene's geometry. However, a pixel shader does know its screen coordinates, and if the contents of the screen are passed in as a texture, it can sample pixels near the current one. Using this method, a large number of 2D post-processing effects such as blurring and edge detection can be implemented.
Pixel shaders can also process any 2D image in the middle of the pipeline, including sprites and textures. Therefore, when post-processing is required after rasterization, pixel shaders are the only option.
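
As a sketch of this idea (assuming the screen contents have been rendered into _MainTex, and using Unity's convention that _MainTex_TexelSize holds one-pixel UV offsets), the fragment function below averages the current pixel with its four neighbors to produce a simple box blur:

sampler2D _MainTex;
float4 _MainTex_TexelSize; // x = 1/width, y = 1/height (filled in by Unity)

struct v2f { float4 pos : SV_POSITION; float2 uv : TEXCOORD0; };

fixed4 frag (v2f i) : SV_Target
{
    // Sample the current pixel and its four direct neighbors.
    fixed4 c = tex2D(_MainTex, i.uv);
    fixed4 l = tex2D(_MainTex, i.uv + float2(-_MainTex_TexelSize.x, 0));
    fixed4 r = tex2D(_MainTex, i.uv + float2( _MainTex_TexelSize.x, 0));
    fixed4 d = tex2D(_MainTex, i.uv + float2(0, -_MainTex_TexelSize.y));
    fixed4 u = tex2D(_MainTex, i.uv + float2(0,  _MainTex_TexelSize.y));
    // Averaging gives a blur; differencing neighbors instead (e.g. r - l)
    // would give a crude edge-detection filter.
    return (c + l + r + u + d) / 5.0;
}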
3D shader
A 3D shader processes a 3D model or other geometry and has access to the colors and textures used to draw the model. Vertex shaders are the earliest 3D shaders; geometry shaders generate new vertices within the shader; and tessellation shaders add detail to a set of vertices.
Vertex Shader
The vertex shader is the most common 3D shader and runs once for each vertex submitted to the graphics processor. Its purpose is to transform each vertex's 3D position in virtual space into the 2D coordinates at which it appears on the screen (along with a depth value for the Z-buffer). The vertex shader can manipulate properties such as position, color, and texture coordinates, but it cannot create new vertices. Its output is passed to the next stage of the pipeline: if a geometry shader is defined, the geometry shader processes the vertex shader's output; otherwise the rasterizer continues the pipeline's tasks.
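
For example, a vertex shader can push each vertex outward along its normal (a minimal sketch; the _Amount property is made up for illustration). Note that it only moves existing vertices and never creates new ones:

float _Amount; // hypothetical displacement distance, set from a material property

struct appdata { float4 vertex : POSITION; float3 normal : NORMAL; };
struct v2f { float4 pos : SV_POSITION; };

v2f vert (appdata v)
{
    v2f o;
    // Displace the vertex along its normal in object space...
    v.vertex.xyz += v.normal * _Amount;
    // ...then do the usual object-space to clip-space transformation.
    o.pos = UnityObjectToClipPos(v.vertex);
    return o;
}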
Geometry Shaders
Geometry shaders were introduced in OpenGL 3.2 and Direct3D 10, and were previously available in OpenGL 2.0+ through extensions. This type of shader can generate new graphics primitives, such as points, lines, and triangles. Geometry shaders can add and remove vertices from polygon meshes, and can take on geometry-generation and detail-adding work that would be too expensive to perform on the CPU. Direct3D 10 added geometry shader support to its API as part of Shader Model 4.0. The output of the geometry shader is connected to the input of the rasterizer.
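
In Unity's Cg/HLSL, a geometry shader is declared with #pragma geometry. The sketch below (illustrative only) simply passes each input triangle through unchanged, but the same stage could emit additional primitives:

CGPROGRAM
#pragma vertex vert
#pragma geometry geom
#pragma fragment frag
#pragma target 4.0 // geometry shaders require Shader Model 4.0
#include "UnityCG.cginc"

struct appdata { float4 vertex : POSITION; };
struct v2f { float4 pos : SV_POSITION; };

v2f vert (appdata v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    return o;
}

// Runs once per input triangle and may emit zero or more output vertices;
// here it just re-emits the original three.
[maxvertexcount(3)]
void geom (triangle v2f input[3], inout TriangleStream<v2f> stream)
{
    for (int i = 0; i < 3; i++)
        stream.Append(input[i]);
}

fixed4 frag (v2f i) : SV_Target { return fixed4(1, 1, 1, 1); }
ENDCG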
Tessellation shader
As a new class of shaders, tessellation shaders were introduced in OpenGL 4.0 and Direct3D 11. They add two stages to the shader model: the tessellation control shader (also called the hull shader) and the tessellation evaluation shader (also called the domain shader). Together these allow simpler meshes to be subdivided into finer meshes in real time according to a mathematical function. The function can depend on various variables, notably the distance from the viewpoint, so that the level of detail can be adjusted actively and objects closer to the camera get more detail. Generating detail in the shader also significantly reduces the bandwidth required for the mesh and removes the need to keep downsampled versions of the mesh in memory. Some algorithms can upsample an arbitrary mesh, while others allow "hinting" in the mesh to mark which vertices and edges to emphasize.
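
Unity exposes this in surface shaders through a tessellation function. The sketch below follows the distance-based pattern from Unity's documentation (the _Tess property and the distance bounds are illustrative) and uses the UnityDistanceBasedTess helper from Tessellation.cginc so that nearer meshes are subdivided more finely:

CGPROGRAM
#pragma surface surf Standard tessellate:tessDistance
#pragma target 4.6 // tessellation requires a high shader target
#include "Tessellation.cginc"

struct appdata
{
    float4 vertex : POSITION;
    float4 tangent : TANGENT;
    float3 normal : NORMAL;
    float2 texcoord : TEXCOORD0;
};

float _Tess; // maximum tessellation factor, exposed as a material property

// Returns a tessellation factor per triangle based on distance to the camera.
float4 tessDistance (appdata v0, appdata v1, appdata v2)
{
    float minDist = 10.0; // full tessellation when closer than this
    float maxDist = 25.0; // no extra tessellation beyond this
    return UnityDistanceBasedTess(v0.vertex, v1.vertex, v2.vertex, minDist, maxDist, _Tess);
}

sampler2D _MainTex;
struct Input { float2 uv_MainTex; };

void surf (Input IN, inout SurfaceOutputStandard o)
{
    o.Albedo = tex2D(_MainTex, IN.uv_MainTex).rgb;
}
ENDCG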
Primitive and Mesh Shaders
Around 2017, the AMD Vega microarchitecture added support for a new shader stage called primitive shaders. Primitive shaders are somewhat similar to compute shaders, with access to the data necessary to process geometry. Nvidia introduced the similar mesh and task shaders with its Turing microarchitecture in 2018.
Ray Tracing Shaders
Ray tracing shaders are supported by Microsoft through DirectX Raytracing, by the Khronos Group through Vulkan, GLSL, and SPIR-V, and by Apple through Metal.
Compute Shaders
Compute shaders are not limited to graphics applications; they are general-purpose programs that use the same GPU execution resources. They may be used within the graphics pipeline, for example for additional animation stages or lighting algorithms. Some rendering APIs allow compute shaders to easily share data resources with the graphics pipeline.
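
A minimal Unity compute shader sketch (the kernel and buffer names are made up) looks like this; it runs outside the normal vertex/fragment pipeline and simply doubles every value in a buffer:

#pragma kernel CSMain

RWStructuredBuffer<float> _Values; // read/write buffer shared with the CPU side

[numthreads(64, 1, 1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    _Values[id.x] = _Values[id.x] * 2.0;
}

On the C# side, such a kernel would be bound with ComputeShader.SetBuffer and launched with ComputeShader.Dispatch.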

Types of shaders in Unity
①Fixed function shader: belongs to the fixed rendering pipeline. It is mainly used as a fallback when an older graphics card cannot run more advanced shaders. It is written in the ShaderLab language, whose syntax is similar to Microsoft's FX files or NVIDIA's CgFX.

②Vertex and Fragment Shader: the most powerful shader type, belonging to the programmable rendering pipeline. It uses Cg/HLSL syntax.

③Surface Shader: the shader type recommended by Unity3D, which uses Unity's pre-made lighting models to perform lighting calculations. It also uses Cg/HLSL syntax.

Create a Standard Surface Shader directly in Unity's Project panel. The default generated code is as follows:
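A reproduction of that default code (as generated by recent Unity versions; the exact text may vary slightly between versions) is:

Shader "Custom/NewSurfaceShader"
{
    Properties
    {
        _Color ("Color", Color) = (1,1,1,1)
        _MainTex ("Albedo (RGB)", 2D) = "white" {}
        _Glossiness ("Smoothness", Range(0,1)) = 0.5
        _Metallic ("Metallic", Range(0,1)) = 0.0
    }
    SubShader
    {
        Tags { "RenderType"="Opaque" }
        LOD 200

        CGPROGRAM
        // Physically based Standard lighting model, and enable shadows on all light types
        #pragma surface surf Standard fullforwardshadows

        // Use shader model 3.0 target, to get nicer looking lighting
        #pragma target 3.0

        sampler2D _MainTex;

        struct Input
        {
            float2 uv_MainTex;
        };

        half _Glossiness;
        half _Metallic;
        fixed4 _Color;

        void surf (Input IN, inout SurfaceOutputStandard o)
        {
            // Albedo comes from a texture tinted by color
            fixed4 c = tex2D (_MainTex, IN.uv_MainTex) * _Color;
            o.Albedo = c.rgb;
            // Metallic and smoothness come from slider variables
            o.Metallic = _Metallic;
            o.Smoothness = _Glossiness;
            o.Alpha = c.a;
        }
        ENDCG
    }
    FallBack "Diffuse"
}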

Explanation:
Properties {}
Properties{} defines the shader's properties. The properties defined here are provided as input to all sub-shaders. A property definition has the following format (a full example Properties block is given after the list of types below):

_Name("Display Name", type) = defaultValue[{options}]

_Name is the property name, such as _Color, _MainTex, _Glossiness, or _Metallic.

"Display Name" is the name displayed in the Inspector

type is the property's type, which can be one of the following:

Color - a color, defined by four RGBA components (red, green, blue, and alpha);
2D - a texture whose dimensions are powers of 2 (256, 512, and so on). After sampling, the texture is converted into per-pixel colors based on the model's UVs and displayed;
Rect - a texture whose dimensions are not powers of 2;
Cube - a cube map texture, i.e., a combination of six related 2D textures, mainly used for reflection effects (such as skyboxes and dynamic reflections); it is likewise sampled at the corresponding points;
Range(min, max) - a floating-point number between a minimum and a maximum value, generally used as a parameter to adjust some feature of the shader (for example, an alpha cutoff for transparency rendering can be a value from 0 to 1);
Float - any floating-point number;
Vector - a four-dimensional vector.
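
Putting these together, a Properties block using each type might look like the following (all property names and default values here are illustrative):

Properties
{
    _MainColor ("Main Color", Color) = (1, 1, 1, 1)
    _MainTex ("Base Texture", 2D) = "white" {}
    _DetailTex ("Detail (non-power-of-2)", Rect) = "gray" {}
    _CubeMap ("Reflection Cubemap", Cube) = "_Skybox" {}
    _Cutoff ("Alpha Cutoff", Range(0, 1)) = 0.5
    _Shininess ("Shininess", Float) = 10
    _WaveParams ("Wave Params", Vector) = (1, 0.5, 0, 0)
}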


These are the basic concepts of shaders and an interpretation of the pipeline stages. Some of the material comes from Wikipedia, along with other professional explanations.

Source: blog.csdn.net/Mq110m/article/details/128603601