Unity-URP

1. Basic usage

1. Introduction

  1. SRP: Scriptable Render Pipeline
    culling
    rendering
    post-processing
  2. URP: Universal Render Pipeline

2. Installation

1. New project

When creating a new project, you can directly select the URP (Universal Render Pipeline, formerly LWRP) template.

2. Upgrade existing projects
  1. Find Universal RP (formerly LWRP) in the Package Manager and install it.
  2. Right-click in the Project window -> Create -> Rendering -> URP Asset (with Universal Renderer), and assign the asset to Project Settings -> Graphics -> Scriptable Render Pipeline Settings.
    In addition, the render pipeline needs to be set under Quality as well.
  3. At this point the project's original materials will break (the magenta error shader). Existing materials can be upgraded under Edit -> Render Pipeline, but this only converts materials that use Unity's built-in shaders.
3. URP Notes

If you want additional (secondary) light sources to cast shadows in URP, enable Cast Shadows under Additional Lights in the Inspector of the URP pipeline asset.

2. URP execution flow

1. Create RenderPipelineAsset and RendererData

  1. When using SRP you must create a subclass of RenderPipelineAsset (in URP this is UniversalRenderPipelineAsset) and assign it in the project's Graphics settings
  2. The URP framework uses the ScriptableRendererData type to create Renderers; the available subclasses are UniversalRendererData (which replaces the earlier ForwardRendererData) and Renderer2DData
  3. When a UniversalRenderPipelineAsset is created, a UniversalRendererData or Renderer2DData is automatically created and linked to it. A RenderPipelineAsset must have at least one RendererData, and multiple RenderPipelineAssets can share the same RendererData
  4. The RenderPipelineAsset in use can be changed dynamically at runtime through GraphicsSettings.renderPipelineAsset, as in the sketch below
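A minimal sketch of switching the asset at runtime (PipelineSwitcher and its field are illustrative names, not part of URP):

using UnityEngine;
using UnityEngine.Rendering;

// Illustrative helper: swaps the active RenderPipelineAsset at runtime.
public class PipelineSwitcher : MonoBehaviour
{
    // Assign a UniversalRenderPipelineAsset (or any RenderPipelineAsset) in the Inspector.
    [SerializeField] private RenderPipelineAsset targetAsset;

    public void ApplyPipeline()
    {
        // Assigning a different asset tears down the current pipeline and
        // builds a new one from this asset.
        GraphicsSettings.renderPipelineAsset = targetAsset;
    }
}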

2. Create Pipeline and Renderer

  1. The engine calls CreatePipeline on the asset
  2. UniversalRenderPipelineAsset -> UniversalRenderPipeline
  3. UniversalRendererData -> UniversalRenderer
  4. RenderPipelineAsset and RenderPipeline are engine (C++) classes, while RendererData and Renderer are C# script classes in the URP package. A sketch of the Asset -> Pipeline hook follows.
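As a sketch of that hook (MySRPAsset and MySRPipeline are illustrative names, not URP classes): the engine calls CreatePipeline() on the assigned asset and caches the returned pipeline, just as UniversalRenderPipelineAsset returns a UniversalRenderPipeline.

using UnityEngine;
using UnityEngine.Rendering;

[CreateAssetMenu(menuName = "Rendering/MySRPAsset")]
public class MySRPAsset : RenderPipelineAsset
{
    // Called by the engine whenever it needs a pipeline instance.
    protected override RenderPipeline CreatePipeline() => new MySRPipeline();
}

// Stub pipeline; its Render body is fleshed out in the sketch under "4. Principle" below.
public class MySRPipeline : RenderPipeline
{
    protected override void Render(ScriptableRenderContext context, Camera[] cameras) { }
}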

3. Changes in cameras and lights

1. Camera
  1. After switching the renderer to URP, a UniversalAdditionalCameraData component is automatically added to the Camera, and the camera parameters differ from Built-in RP
  2. A camera can be set to the Overlay type; add it to the Stack of the main (Base) camera and the overlay camera will be drawn on top of it, as in the sketch below
2. Lighting
  1. Lights likewise automatically get a UniversalAdditionalLightData component, and their parameters have changed as well
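A small sketch of driving the camera stack from script, assuming a Base camera and a second camera already exist in the scene (the class and field names are illustrative):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class StackOverlayCamera : MonoBehaviour
{
    [SerializeField] private Camera baseCamera;
    [SerializeField] private Camera overlayCamera;

    private void Start()
    {
        // URP adds UniversalAdditionalCameraData automatically; this extension
        // method retrieves it (adding it if missing).
        overlayCamera.GetUniversalAdditionalCameraData().renderType = CameraRenderType.Overlay;
        // Only Base cameras have a camera stack; the overlay renders on top.
        baseCamera.GetUniversalAdditionalCameraData().cameraStack.Add(overlayCamera);
    }
}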

4. Principle

The engine calls these interfaces every frame so you can do work at the different stages of rendering (a sketch follows the list below):
RenderPipeline.Render
protected override void Render(ScriptableRenderContext renderContext, List<Camera> cameras)

  1. BeginContextRendering
  2. Render the cameras (RenderCameraStack)
  3. EndContextRendering (EndFrameRendering)
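A minimal sketch of this entry point, expanding the MySRPipeline stub from the earlier sketch. It uses the long-standing Camera[] overload together with the BeginFrameRendering/BeginCameraRendering hooks; recent Unity versions add a List<Camera> overload paired with BeginContextRendering/EndContextRendering.

using UnityEngine;
using UnityEngine.Rendering;

public class MySRPipeline : RenderPipeline
{
    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        // Fires the RenderPipelineManager begin-frame callbacks.
        BeginFrameRendering(context, cameras);

        foreach (Camera camera in cameras)
        {
            BeginCameraRendering(context, camera);
            // ...cull, set up the pass queue, execute, submit (see "5." below)...
            EndCameraRendering(context, camera);
        }

        EndFrameRendering(context, cameras);
    }
}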

5. Sequence of operations

1. Scene culling (Cull)
  1. Execute context.Cull() to obtain the CullingResults and initialize RenderingData (the per-frame data container for the Camera)
2. Renderer.Setup
  1. Determine which RenderPasses (m_ActiveRenderPassQueue) run this frame; the forward and deferred pipelines use different passes
  2. A RenderPass is one rendering step: simply put, a single execution unit of the rendering pipeline that renders its output into a set of framebuffer attachments in memory
  3. Decide whether each pass is required based on RenderingData
3. Renderer.Execute
  1. Sort the pass list by RenderPassEvent (render queue)
  2. Execute RenderBlocks in stages (BeforeRendering, MainRenderingOpaque, MainRenderingTransparent, AfterRendering); each RenderBlock contains all RenderPasses of its stage
  3. Between stages, set up shader uniforms (light, camera, time)
  4. Execute each RenderPass's Execute, which internally calls the various low-level interfaces of the Context
  5. After Execute completes, call Context.Submit() to submit the rendering commands
4. Summary
  1. With SRP, the RenderPipeline is created by the RenderPipelineAsset, and RenderPipeline.Render() is executed every frame
  2. Inside URP, the RendererData creates the Renderer, and a Camera can specify which Renderer to use
  3. In URP, RenderPipeline.Render() mainly executes RenderCameraStack for each Camera, and RenderSingleCamera for each Camera in the stack
  4. RenderSingleCamera builds a RenderPass queue and executes the passes one by one
  5. Each RenderPass internally calls the low-level interfaces of ScriptableRenderContext to perform its rendering work, as in the simplified sketch below
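As a rough illustration of steps 1-4, here is a heavily simplified analogue of RenderSingleCamera built directly on ScriptableRenderContext calls (the real URP implementation manages a full pass queue, render blocks, and much more):

using UnityEngine;
using UnityEngine.Rendering;

public static class SingleCameraSketch
{
    public static void RenderSingleCamera(ScriptableRenderContext context, Camera camera)
    {
        // 1. Culling: produces the CullingResults stored in RenderingData.
        if (!camera.TryGetCullingParameters(out ScriptableCullingParameters cullingParams))
            return;
        CullingResults cullResults = context.Cull(ref cullingParams);

        // 2. Setup: bind view/projection matrices and per-camera built-ins.
        context.SetupCameraProperties(camera);

        // 3. Execute: one opaque "pass" standing in for the sorted pass queue.
        var sortingSettings = new SortingSettings(camera) { criteria = SortingCriteria.CommonOpaque };
        var drawingSettings = new DrawingSettings(new ShaderTagId("UniversalForward"), sortingSettings);
        var filteringSettings = new FilteringSettings(RenderQueueRange.opaque);
        context.DrawRenderers(cullResults, ref drawingSettings, ref filteringSettings);

        // 4. Submit the recorded commands to the graphics API.
        context.Submit();
    }
}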

3. Decal effect

1. Implementation method

  1. Add the Decal Renderer Feature to the URP RendererData currently in use (otherwise the Decal Projector cannot be used)
  2. Add a Decal Projector component to an empty GameObject
  3. Assign a Decal Material to the Decal Projector (the same setup can be driven from script, as sketched below)
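A sketch of doing this from script, assuming a URP version (12+) that ships the Decal Renderer Feature and DecalProjector component, and a material that uses a URP Decal shader (the class and field names are illustrative):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class DecalSpawner : MonoBehaviour
{
    // A material whose shader is a URP Decal shader.
    [SerializeField] private Material decalMaterial;

    private void Start()
    {
        var go = new GameObject("RuntimeDecal");
        var projector = go.AddComponent<DecalProjector>();
        projector.material = decalMaterial;
        // The projection box of the decal, in meters.
        projector.size = new Vector3(1f, 1f, 1f);
    }
}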

2. Decal Technique setting (rendering technique)

1. DBuffer
  1. Requires MRT support for the decal buffers. The decal information (base color, normal, and MAOS: metallic, AO, smoothness) is first written into three buffers; when the scene is rendered, the DBuffer contents are factored in and drawn onto the objects (the decals are computed first, then the objects are drawn)
  2. DBuffer adds three passes:
    • CopyDepthPass
    • DBufferRenderPass
    • ForwardEmissivePass
  3. DBufferRenderPass contains a DecalDrawSystem; each Decal Projector corresponds to one entity, drawn one at a time (in effect, each draw is just a cube)
2. Screen Space
  1. Does not require MRT: first draw the objects to obtain screen-space depth, then draw the decals. Because some information is missing, the result may look worse than DBuffer
3. GBuffer
  1. Used with the deferred pipeline

3. Implementation principle

The depth-decal shader below (written against the Built-in pipeline's CG includes to illustrate the principle) reconstructs each pixel's position from the depth buffer and projects the decal texture onto it:

Shader "Decal/DepthDecal"
{
    
    
	Properties
	{
    
    
		_MainTex ("Texture", 2D) = "white" {
    
    }
	}
	
	SubShader
	{
    
    
		Tags {
    
    "Queue"="Transparent+100"}
		Pass
		{
    
    
			Blend SrcAlpha OneMinusSrcAlpha
			CGPROGRAM
			#pragma vertex vert
			#pragma fragment frag
			
			#include "UnityCG.cginc"
 
			struct appdata
			{
    
    
				float4 vertex : POSITION;
			};
 
			struct v2f
			{
    
    
				float4 vertex : SV_POSITION;
				float4 screenPos : TEXCOORD1;
				float3 ray : TEXCOORD2;
			};
 
			sampler2D _MainTex;
			sampler2D_float _CameraDepthTexture;
			
			v2f vert (appdata v)
			{
    
    
				v2f o;
				o.vertex = UnityObjectToClipPos(v.vertex);
				// 屏幕坐标
				o.screenPos = ComputeScreenPos(o.vertex);
				// 从屏幕发出的射线
				o.ray =UnityObjectToViewPos(v.vertex) * float3(-1,-1,1);
				return o;
			}
			
			fixed4 frag (v2f i) : SV_Target
			{
    
    
				//深度重建视空间坐标
				float2 screenuv = i.screenPos.xy / i.screenPos.w;
				float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, screenuv);
				// 深度
				float viewDepth = Linear01Depth(depth) * _ProjectionParams.z;
				// 射线方向✖️深度得到表面坐标
				float3 viewPos = i.ray * viewDepth / i.ray.z;
				//转化到世界空间坐标
				float4 worldPos = mul(unity_CameraToWorld, float4(viewPos, 1.0));
				//转化为物体空间坐标
				float3 objectPos = mul(unity_WorldToObject, worldPos);
				//剔除掉在立方体外面的内容
				clip(float3(0.5, 0.5, 0.5) - abs(objectPos));
				//使用物体空间坐标的xz坐标作为采样uv
				float2 uv = objectPos.xz + 0.5;
				fixed4 col = tex2D(_MainTex, uv);
				return col;
			}
			ENDCG
		}
		
	}
}

4. URP passes

1. Basic pass


2. Inserting extra passes

  1. URP ships with several built-in features (RenderObjects, Decal, ScreenSpaceAmbientOcclusion, ScreenSpaceShadows); you can also subclass ScriptableRendererFeature yourself and insert passes into the Renderer, as in the sketch below
  2. During each frame's Setup, the RendererFeature's AddRenderPasses is called to add its passes
  3. All passes are then sorted and executed one by one
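A minimal sketch of such a feature: Create() builds the pass once, and AddRenderPasses() enqueues it every frame during Setup (MyFeature/MyPass are illustrative names):

using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class MyFeature : ScriptableRendererFeature
{
    class MyPass : ScriptableRenderPass
    {
        public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
        {
            // Record rendering commands here, e.g. via CommandBufferPool.Get().
        }
    }

    private MyPass m_Pass;

    public override void Create()
    {
        m_Pass = new MyPass
        {
            // Determines where this pass lands when all passes are sorted.
            renderPassEvent = RenderPassEvent.AfterRenderingOpaques
        };
    }

    public override void AddRenderPasses(ScriptableRenderer renderer, ref RenderingData renderingData)
    {
        renderer.EnqueuePass(m_Pass);
    }
}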

5. Renderer Feature

1. Usage

Added in the same place as the Decal feature above; used to render objects again after a specific rendering event.

2. Parameters

01 Name: the name of the Feature.
02 Event: the point in the URP rendering pipeline at which Unity executes this Renderer Feature.
03 Filters: configures which objects this Renderer Feature renders. There are two parameters: Queue and Layer Mask.
  Queue: whether the Feature renders transparent or opaque objects.
  Layer Mask: which layers the Feature renders objects from.
04 Shader Passes Name: if a pass in a shader has a LightMode tag, this Renderer Feature only processes passes whose LightMode tag equals this Shader Passes Name.
05 Overrides: settings that let certain properties be overridden when rendering with this Renderer Feature.
  Material: when rendering an object, Unity replaces its assigned material with this one.
  Depth: specifies how this Renderer Feature affects or uses the depth buffer. It includes the following options:
    Write Depth: whether this Renderer Feature updates the depth buffer when rendering objects.
    Depth Test: the depth test that decides whether a fragment of the object is rendered.
  Stencil: when this checkbox is checked, the Renderer processes the stencil buffer values.
  Camera: allows the following camera properties to be overridden:
    Field of View: the Renderer Feature renders objects with this field of view instead of the value specified on the camera.
    Position Offset: the Renderer Feature moves rendered objects by this offset.
    Restore: when checked, the Renderer Feature restores the original camera matrices after its rendering completes.

6. URP shader

1. Declaring a URP shader

This must be indicated in the SubShader tags:

Tags { "RenderPipeline"="UniversalPipeline" }

2. Defining the different passes

  1. The default rendering pass of URP; it handles multiple light sources, emission, ambient light, fog, etc.
    Tags { "LightMode"="UniversalForward" }
  2. Used to render the object's shadow casting.
    Tags { "LightMode"="ShadowCaster" }
  3. Renders the depth map; if you want to do anything in screen space, this pass needs to be rendered.
    Tags { "LightMode"="DepthOnly" }
  4. Renders a texture containing depth and normals.
    Tags { "LightMode"="DepthNormals" }
  5. Used when baking lightmaps.
    Tags { "LightMode"="Meta" }
  6. Used for 2D rendering; since Unity is generally used for 3D here, this pass can be ignored.
    Tags { "LightMode"="Universal2D" }

3. HLSL

  1. ShaderLab is the declarative block that contains the shading-language code. In the Built-in pipeline we usually write shaders in CG, but in URP HLSL is generally used. CG and HLSL are both C-style languages, and there is little difference in how they are written.
  2. Because the language differs, the CGPROGRAM and ENDCG that wrap CG code must be changed to HLSLPROGRAM and ENDHLSL wrapping HLSL.
HLSLPROGRAM
    ……
ENDHLSL

4. Header files

The underlying code has changed, so the default include must change from #include "UnityCG.cginc" to #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl".

Besides the core library, other headers to be included, such as lighting and shadows, also need to be changed.

With different headers, some of the utility functions naturally differ too. For example, instead of transforming space in the vertex shader with UnityObjectToClipPos(v.vertex), in URP you call GetVertexPositionInputs(IN.positionOS.xyz) to obtain a struct holding the position in each space, and then read the clip-space position from it:

VertexPositionInputs positionInputs = GetVertexPositionInputs(IN.positionOS.xyz);
OUT.positionCS = positionInputs.positionCS;

5. CBUFFER

To support the SRP Batcher, all exposed parameters (except textures) should be placed between CBUFFER_START(UnityPerMaterial) and CBUFFER_END in the shader. To make sure every subsequent Pass sees the same CBUFFER, this code should be written at SubShader scope, before the Passes.

CBUFFER_START(UnityPerMaterial)
    float4 _BaseMap_ST;
    float _Color;
CBUFFER_END

6. Texture sampling

In CG we generally declare sampler2D _Texture and sample it with tex2D(tex, uv). In HLSL the texture and its sampler are declared separately (TEXTURE2D and SAMPLER), and sampling changes from tex2D(Texture tex, float2 uv) to SAMPLE_TEXTURE2D(Texture tex, Sampler sampler, float2 uv):

TEXTURE2D(_BaseMap);
SAMPLER(sampler_BaseMap);
……
float4 frag(Varyings IN) : SV_Target
{
    ……
    half4 baseMap = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, IN.uv);
    ……
}

7. Examples

Outline effect

Shader "Unlit/outline"
{
    Properties
    {
        _BaseMap ("Base Texture", 2D) = "white" {}
        _BaseColor ("Color", Color) = (0, 0, 0, 0)
        _Outline ("Outline", Range(0, 1)) = 0.1
    }
    SubShader
    {
        Tags
        {
            // Marks this as a URP shader
            "RenderPipeline"="UniversalPipeline"
        }
        // Render back faces only, so the shell expanded along the normals reads as an outline
        Cull Front

        HLSLINCLUDE
        // Include the core library, analogous to "UnityCG.cginc" in CG
        #include "Packages/com.unity.render-pipelines.universal/ShaderLibrary/Core.hlsl"

        // In CG this would be written as: sampler2D _MainTex;
        // Textures are declared outside the CBUFFER
        TEXTURE2D(_BaseMap);
        SAMPLER(sampler_BaseMap);

        // Apart from textures, every variable exposed in the Inspector goes into the CBUFFER
        CBUFFER_START(UnityPerMaterial)
        float4 _BaseMap_ST;
        float4 _BaseColor;
        float _Outline;
        CBUFFER_END

        ENDHLSL

        Pass
        {
            // URP's default rendering pass: handles multiple lights, emission, ambient light, fog, etc.
            Tags { "LightMode"="UniversalForward" }

            HLSLPROGRAM
            #pragma vertex vert
            #pragma fragment frag

            struct Attributes
            {
                float4 positionOS : POSITION;
                float2 uv : TEXCOORD0;
                float3 normal : NORMAL;
            };
            struct Varyings
            {
                float4 positionCS : SV_POSITION;
                float2 uv : TEXCOORD0;
            };

            Varyings vert(Attributes IN)
            {
                Varyings OUT;
                // Push the vertices outward along their normals
                IN.positionOS.xyz += IN.normal * _Outline;
                // In CG we would transform with: o.vertex = UnityObjectToClipPos(v.vertex);
                VertexPositionInputs positionInputs = GetVertexPositionInputs(IN.positionOS.xyz);
                OUT.positionCS = positionInputs.positionCS;

                OUT.uv = TRANSFORM_TEX(IN.uv, _BaseMap);
                return OUT;
            }

            float4 frag(Varyings IN) : SV_Target
            {
                // In CG we would sample with: fixed4 col = tex2D(_MainTex, i.uv);
                half4 baseMap = SAMPLE_TEXTURE2D(_BaseMap, sampler_BaseMap, IN.uv);
                return baseMap * _BaseColor;
            }
            ENDHLSL
        }
    }
}
