[Unity Basics] Entry-level study notes on the Universal Render Pipeline

1. Main learning content

        This is an introductory article written as personal study notes. If there are any mistakes, please point them out; thank you. URP itself is a fairly large topic and touches on many areas, so the following focuses on the creation and simple use of Unity URP, the related concepts and principles, source-code analysis, and a look at the Shader scripts that ship with URP (mainly the Lit shader). Going further would require covering all aspects of rendering, the various rendering effects, post-processing, and so on, which is beyond the scope here.

A brief list of the learning content:
1. The concept, creation, and use of URP
2. URP operating logic and source-code analysis
3. A compilation of the rendering knowledge involved in URP's built-in Shader scripts, mainly the Lit shader

2. The concept, creation and use of URP

1. The concept of URP

I. What is URP

URP stands for Universal Render Pipeline and is an implementation of SRP (Scriptable Render Pipeline). Its predecessor is LWRP (Lightweight Render Pipeline). In Unity 2019.3, LWRP was officially renamed URP and the LWRP name was retired. URP is mainly targeted at mobile platforms.

II. Advantages of using URP rendering pipeline

Scalability
SRP was introduced in Unity to make the rendering pipeline editable and to give users room to customize the pipeline to
fit different projects. URP is an implementation of SRP and is itself modular, so we can add different functional modules to meet the corresponding needs.

Performance comparison
The biggest performance difference is that URP is a single-pass forward rendering pipeline, while the built-in pipeline is a multi-pass forward rendering pipeline. Forward rendering means that when an object is lit by point lights, the contribution of each light to the object is calculated separately, and the final color is obtained by adding up the results of all lights. The built-in pipeline renders lighting with multiple passes: the first pass renders only the main light, and every additional light is rendered with a separate pass. This is why we rarely use point lights when making mobile games: in the built-in pipeline, every additional light means another pass, and therefore more draw calls, for every object it illuminates, a performance overhead that is basically unacceptable.
URP instead calculates all the lights affecting an object inside a for loop in a single pass. The advantages are:
the lighting of an object can be calculated in one draw call;
the context-switching and rasterization cost of multiple passes is eliminated.
The disadvantages are also obvious:
a single object supports at most one directional (main) light and at most 4 additional lights such as point or spot lights;
a single camera supports at most 16 additional lights.
Therefore, with URP, as long as we control the range of the point/spot lights, we can also use several of them in mobile games, for example a fireball that lights up the surrounding objects. In the built-in pipeline this is basically a feature we have to give up.

Support for the SRP Batcher
The SRP Batcher can be used in all SRP pipelines. For objects that use the same Shader but have not been statically batched and cannot be rendered through GPU instancing, it stores each object's material properties in a CBuffer and completes the drawing without an extra SetPassCall per object. The effect is that, for the same draw calls, the cost of each individual draw call drops sharply: with this feature turned on you may find that your scene has 500 draw calls but fewer than 100 SetPassCalls, and in the same situation the rendering performance is much higher than the built-in pipeline. However, a Shader still has to meet some conditions to support the SRP Batcher; please refer to the SRP Batcher documentation for details.
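The SRP Batcher can also be toggled from a script, which is convenient for A/B comparisons in the Frame Debugger. Below is a minimal sketch, assuming a URP project; the SrpBatcherToggle class name is my own, and GraphicsSettings.useScriptableRenderPipelineBatching is the engine-level switch behind the asset's SRP Batcher checkbox.

using UnityEngine;
using UnityEngine.Rendering;

public class SrpBatcherToggle : MonoBehaviour
{
    public bool enableSrpBatcher = true;

    void OnEnable()
    {
        // Compare SetPassCalls in the Frame Debugger with this on and off.
        GraphicsSettings.useScriptableRenderPipelineBatching = enableSrpBatcher;
    }
}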

Easy to update the rendering pipeline
For example, when we use the built-in render pipeline and want an updated version of it, we have to update the Unity version itself. When we use URP, we only need to update the URP package.

2. Record of the Universal RP installation and creation process

Step 1. In the Unity menu bar, click Window and open the Package Manager.
(screenshot)
Step 2. In the Package Manager panel, search for Universal RP, install it, and wait for the installation to complete.
(screenshot)
Step 3. In the Unity menu bar, click Assets -> Create -> Rendering -> URP Asset (with Universal Renderer). This creates an Asset file and a Data file; the relationship between the Asset file and the Data file is covered below.
(screenshot)
Step 4. Open Project Settings from the Unity menu bar, click Graphics in the Project Settings panel, and assign the URP Asset created in the previous step to Scriptable Render Pipeline Settings as in the screenshot below. Unity will automatically generate the Universal RP Global Settings file, and the Shaders that ship with the built-in pipeline will no longer work; this indicates success.
(screenshot)
Open the Frame Debugger (Window -> Analysis -> Frame Debugger) to see the corresponding draw calls and confirm the custom pipeline is being used.
(screenshot)

3. Universal RP Asset and Universal RP Data

Universal RP Asset is the resource file dragged into Scriptable Render Pipeline Settings under the Graphics option of the Project Settings panel. It is the URP pipeline asset: this is where the setting data is stored and where the various settings are made. Based on the explanation in Unity's official documentation, we can think of the Universal RP Asset as a scriptable object that acts as a handle. For example, one handle (Asset) turns shadows on and another handle turns shadows off; we choose different handles (Assets) to meet different needs. For example, between mobile and PC: mobile hardware is weaker and real-time shadows are expensive there, so for mobile we select the handle (Asset) with shadows turned off and place it in Scriptable Render Pipeline Settings.
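Following the handle idea above, a common pattern is to prepare two URP Assets, for example one with shadows off for mobile and one with shadows on for PC, and pick one from a script. Below is a minimal sketch, assuming the two asset references are assigned in the Inspector (the field names are my own); GraphicsSettings.renderPipelineAsset is the same slot as Scriptable Render Pipeline Settings in Project Settings > Graphics.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public class PipelineAssetSwitcher : MonoBehaviour
{
    // Two "handles": assign these URP Assets in the Inspector.
    public UniversalRenderPipelineAsset mobileAsset;  // e.g. shadows disabled
    public UniversalRenderPipelineAsset desktopAsset; // e.g. shadows enabled

    void Start()
    {
        // Pick the asset that matches the platform's performance budget.
        GraphicsSettings.renderPipelineAsset =
            Application.isMobilePlatform ? mobileAsset : desktopAsset;
    }
}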

Here we mainly go through the settings of the URP Asset file. Of course, all of these can also be looked up in Unity's URP documentation.

I. Rendering item

This item mainly contains the core settings that control how a frame is rendered. Below is a screenshot of the settings.
(screenshot)
Depth Texture Enable this option so that URP creates a _CameraDepthTexture. URP then uses this depth texture by default for all cameras in the scene. This can be overridden for an individual camera in the Camera's Inspector.

Opaque Texture Enable this option to create a _CameraOpaqueTexture as the default for all cameras in the scene. This setting functions much like GrabPass in the built-in rendering pipeline. Opaque Texture provides a snapshot of the scene immediately before URP renders any transparent meshes. You can use it in a transparency shader to create effects like frosted glass, water refraction, or heat waves.
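Both the Depth Texture and the Opaque Texture switches can also be driven from a script. Below is a minimal sketch, assuming a URP project and a GameObject with a Camera attached; the asset-level flags are the supportsCameraDepthTexture / supportsCameraOpaqueTexture properties of UniversalRenderPipelineAsset, and the per-camera override mentioned above lives on the camera's UniversalAdditionalCameraData.

using UnityEngine;
using UnityEngine.Rendering.Universal;

[RequireComponent(typeof(Camera))]
public class CameraTextureSettings : MonoBehaviour
{
    void Start()
    {
        // Asset-level defaults for all cameras in the scene.
        UniversalRenderPipelineAsset urp = UniversalRenderPipeline.asset;
        urp.supportsCameraDepthTexture = true;   // URP creates _CameraDepthTexture
        urp.supportsCameraOpaqueTexture = true;  // URP creates _CameraOpaqueTexture

        // Per-camera override, same as the options in the Camera's Inspector.
        var cameraData = GetComponent<Camera>().GetUniversalAdditionalCameraData();
        cameraData.requiresDepthTexture = true;
        cameraData.requiresColorTexture = true;
    }
}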

Opaque Downsampling sets the sampling mode of the opaque texture to one of the following options:
None: Produces a copy of the opaque pass at the same resolution as the camera.
2x Bilinear: Uses bilinear filtering to produce a half-resolution copy.
4x Box: Uses box filtering to produce a quarter-resolution copy. This produces a softly blurred copy.
4x Bilinear: Uses bilinear filtering to produce a quarter-resolution copy.

Terrain Holes If you disable this option, URP will remove all terrain hole shader variants when you build for Unity Player, thus reducing build times.

II. Quality item

(screenshot)
HDR Enable this option to perform rendering in high dynamic range (HDR) by default for every camera in the scene. When using HDR, the brightest parts of the image can be greater than 1. This provides a wider range of light intensities, making lighting look more realistic. With it, you can still see details and get less saturation, even in bright light. This is useful if you need multiple lights or use a bloom effect. If the target hardware is low-end, this property can be disabled to skip HDR calculations, resulting in better performance.

MSAA Uses Multisample Anti-aliasing (MSAA) by default for every camera in the scene while rendering. This softens the edges of the geometry so they do not appear jagged or flickering. In the drop-down menu, select the number of samples to use per pixel: 2x, 4x, or 8x. The more samples you select, the smoother the object edges will be. Select Disabled if you want to skip MSAA calculations, or if you do not need such calculations in a 2D game. Note: On mobile platforms that do not support the StoreAndResolve store action, if Opaque Texture is enabled in the URP asset, Unity ignores the Anti Aliasing (MSAA) property at runtime (as if Anti Aliasing (MSAA) was set to Disabled).

Render Scale This slider is used to scale the render target resolution (not the resolution of the current device). Use this property if you are rendering at a smaller resolution for performance reasons or if you need to upscale the rendering to improve quality. This only scales game rendering. UI rendering remains at the device's native resolution.
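Render Scale, MSAA, and HDR are all exposed on the active pipeline asset, so a simple quality menu can adjust them at runtime. Below is a minimal sketch, assuming a URP project; UniversalRenderPipeline.asset returns the currently active UniversalRenderPipelineAsset, and the concrete numbers are only placeholders.

using UnityEngine.Rendering.Universal;

public static class QualityTweaks
{
    // Example "low" preset: trade resolution and anti-aliasing for performance.
    public static void ApplyLowQuality()
    {
        UniversalRenderPipelineAsset urp = UniversalRenderPipeline.asset;
        urp.renderScale = 0.75f;   // render at 75% resolution; UI stays at native resolution
        urp.msaaSampleCount = 1;   // 1 sample per pixel = MSAA disabled
        urp.supportsHDR = false;   // skip HDR calculations on low-end hardware
    }
}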

Lighting
(screenshot)
These settings affect the light sources in the scene.
If some of these settings are disabled, the related keywords are stripped from the shader variants. If you are sure you will not use certain settings in your game or application, you can disable them to improve performance and reduce build times.

Main Light These settings affect the main directional light in the scene. To select which light is the main light, specify it as the Sun Source in the Lighting window. If you do not specify a Sun Source, URP treats the brightest directional light in the scene as the main light. You can choose between the Pixel Lighting and None options. If None is selected, URP does not render the main light even if a sun source is set.
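The Sun Source can also be assigned from a script if needed. Below is a minimal sketch, assuming a reference to the directional light is assigned in the Inspector; RenderSettings.sun is the same field as Sun Source in the Lighting window.

using UnityEngine;

public class SunSourceSetter : MonoBehaviour
{
    public Light mainDirectionalLight; // assign a directional light in the Inspector

    void Start()
    {
        // URP then treats this light as the Main Light.
        RenderSettings.sun = mainDirectionalLight;
    }
}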

Cast Shadows Select this checkbox to make the main light cast shadows in the scene.

Shadow Resolution This property controls the size of the shadow map texture of the main light source. High resolution delivers sharper, more detailed shadows. If memory or rendering time is limited, try lowering the resolution.

Additional Lights Here you can select how additional lights, which supplement the main light, are rendered. Options include Per Vertex, Per Pixel, and Disabled.
I will write a separate learning article about shadow troubleshooting.

Per Object Limit This slider can set the limit on the number of additional lights that affect each game object.
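This limit can be changed from a script as well. A minimal sketch, assuming the maxAdditionalLightsCount property exposed by UniversalRenderPipelineAsset; the value is clamped internally to the pipeline's supported per-object maximum.

using UnityEngine.Rendering.Universal;

public static class LightLimitExample
{
    public static void LimitPerObjectLights()
    {
        // At most 2 additional lights may affect any single object.
        UniversalRenderPipeline.asset.maxAdditionalLightsCount = 2;
    }
}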

III. Shadows item

These settings let you configure how your shadows look and behave, and find a good balance between visual quality and performance.
(screenshot)
Max Distance The maximum distance from the camera at which Unity renders shadows. Unity does not render shadows beyond this distance.
Note: This property is in metric units, regardless of the value in the Working Unit property.

Working Unit The unit in which Unity measures the shadow cascade distances.

Depth Bias Use this setting to reduce shadow acne.

Normal Bias Use this setting to reduce shadow acne.

Cascade Count The number of shadow cascades. Using shadow cascades avoids crude shadows close to the camera while keeping the shadow resolution reasonably low. For more information, see the Shadow Cascades page. Increasing the number of cascades reduces performance. The cascade settings only affect the main light.

Soft Shadows Check this checkbox to enable additional processing of shadow maps to make them look smoother.
When enabled, Unity uses the following shadow map filtering methods:
desktop: 5x5 tent filter, mobile: 4-tap filter.
Performance impact: High.
When this option is disabled, Unity samples the shadow map once using the default hardware filtering method.
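The shadow distance and the two bias values are also exposed on the pipeline asset, which makes it easy to tune them per quality level. Below is a minimal sketch, assuming the shadowDistance, shadowDepthBias, and shadowNormalBias properties of UniversalRenderPipelineAsset; the concrete numbers are only placeholders.

using UnityEngine.Rendering.Universal;

public static class ShadowTweaks
{
    public static void ApplyMobileShadowSettings()
    {
        UniversalRenderPipelineAsset urp = UniversalRenderPipeline.asset;
        urp.shadowDistance = 30f;    // Max Distance: no shadows rendered beyond 30 units
        urp.shadowDepthBias = 1f;    // Depth Bias: reduce shadow acne
        urp.shadowNormalBias = 1f;   // Normal Bias: reduce shadow acne
    }
}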

IV. Post-processing item

This section is used to fine-tune global post-processing settings.
Post Processing This checkbox turns on (checkbox checked) or off (checkbox cleared) post-processing for the current URP resource.
If this checkbox is cleared, Unity excludes post-processing shaders and textures from the build unless one of the following conditions is true:
Other resources in the build refer to post-processing-related resources.
Another URP resource has the Post Processing attribute enabled.

Post Process Data This resource references the shaders and textures used by the renderer for post-processing.
Note: Only advanced customization use cases require changing this property.

Grading Mode Select the color grading mode to use for your project.
• High Dynamic Range: This mode is best suited for high-precision grading similar to film production workflows. Unity applies color grading before tone mapping.
• Low Dynamic Range: This mode follows a more classic workflow. Unity applies a limited range of color grading after tone mapping.

LUT Size sets the size of the internal and external lookup textures (LUTs) used by the Universal Render Pipeline for color grading. Larger sizes provide greater accuracy, but at potential performance and memory usage costs. LUT sizes cannot be mixed and matched, so decide on a size before starting the color grading process.
The default value is 32, which ensures a good balance between speed and quality.
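Both settings can also be set through the pipeline asset. A minimal sketch, assuming the colorGradingMode and colorGradingLutSize properties of UniversalRenderPipelineAsset; as noted above, decide on the LUT size before starting to grade.

using UnityEngine.Rendering.Universal;

public static class ColorGradingSetup
{
    public static void UseHdrGrading()
    {
        UniversalRenderPipelineAsset urp = UniversalRenderPipeline.asset;
        urp.colorGradingMode = ColorGradingMode.HighDynamicRange; // grade before tonemapping
        urp.colorGradingLutSize = 32; // default; larger is more precise but costs memory
    }
}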

3. URP operating logic, principles and source code analysis records

1. About SRP’s custom pipeline

Before reading the code of URP, you need to be familiar with SRP.
First, we can create an SRP pipeline asset with the following code:

using UnityEngine;
using UnityEngine.Rendering;

[CreateAssetMenu(menuName = "Rendering/Custom Render Pipeline")]
public class CustomRenderPipelineAsset : RenderPipelineAsset
{
    // The asset is only a handle that stores settings; the actual pipeline is created here.
    protected override RenderPipeline CreatePipeline()
    {
        return new CustomRenderPipeline();
    }
}

public class CustomRenderPipeline : RenderPipeline
{
    // Unity calls this every frame; rendering commands are recorded and submitted here.
    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
    }
}

After creating the above two scripts, you can create an Asset file for the CustomRenderPipeline directly: click Assets/Create/Rendering/Custom Render Pipeline in the Unity menu bar to create the Asset file. After creating it, assign the created Asset file to Scriptable Render Pipeline Settings in the Graphics option of the Project Settings panel. As shown below:
(screenshot)
After switching Scriptable Render Pipeline Settings, nothing will appear in Unity's Scene panel or Game panel; that means the switch succeeded, because our pipeline does not submit any rendering commands yet. We now need to fill in the Render method of the CustomRenderPipeline script to submit rendering commands and other operations. Looking at the Render method overridden by CustomRenderPipeline, its parameters are a ScriptableRenderContext and a Camera array. I understand the ScriptableRenderContext here as the interface to the renderer's configuration and command submission: all cameras in the Camera array are rendered through this ScriptableRenderContext object. After changing the Render method of CustomRenderPipeline, the code is as follows:

public class CustomRenderPipeline : RenderPipeline
{
    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        // Push the camera's properties (view matrix, projection matrix, etc.) to the global shader properties.
        context.SetupCameraProperties(cameras[0]);

        // With the camera properties configured, draw the skybox using them.
        context.DrawSkybox(cameras[0]);

        // Submit the commands recorded so far.
        context.Submit();
    }
}

In this way we can draw the skybox.
(screenshot)

2. About CommandBuffer

A CommandBuffer (command buffer) object is used to collect the rendering commands the CPU issues for the GPU and to send them to the GPU at a suitable point in time. CommandBuffer objects are generally obtained from the CommandBufferPool. Here the Render method of CustomRenderPipeline is slightly modified; the code is as follows:

public class CustomRenderPipeline : RenderPipeline
{
    private string m_commandBufferName = "Test CommandBuffer"; // name shown in the Frame Debugger

    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        CommandBuffer cmd = CommandBufferPool.Get(m_commandBufferName);
        // Push the camera's properties (view matrix, projection matrix, etc.) to the global shader properties.
        context.SetupCameraProperties(cameras[0]);
        // Clear the render target's depth and color buffers.
        cmd.ClearRenderTarget(true, true, Color.clear);
        // Schedule the buffered commands for execution.
        context.ExecuteCommandBuffer(cmd);
        // Clear the buffer so it can be reused.
        cmd.Clear();
        // With the camera properties configured, draw the skybox using them.
        context.DrawSkybox(cameras[0]);
        // Submit the commands recorded so far.
        context.Submit();
        // Return the buffer to the pool.
        CommandBufferPool.Release(cmd);
    }
}

The above only uses the CommandBuffer object to clear the render target's color and depth buffers.

3. Draw the object currently observed by the camera

Next we render the objects in the scene. First we have to cull whatever the camera cannot see. Although the GPU would also cull it, culling on the CPU side reduces the bandwidth pressure of CPU-to-GPU communication, so we write the following code:

private bool Cull(ScriptableRenderContext context, Camera camera)
{
    // Ask the camera for its culling parameters; fail if they are unavailable.
    if (camera.TryGetCullingParameters(out ScriptableCullingParameters p))
    {
        // Let the context perform culling and cache the results.
        m_cullingResults = context.Cull(ref p);
        return true;
    }

    return false;
}

The overall code of the modified CustomRenderPipeline class is:

public class CustomRenderPipeline : RenderPipeline
{
    private string m_commandBufferName = "Test CommandBuffer"; // name shown in the Frame Debugger
    private static ShaderTagId m_unlitShaderTagId = new ShaderTagId("SRPDefaultUnlit");
    private CullingResults m_cullingResults;

    protected override void Render(ScriptableRenderContext context, Camera[] cameras)
    {
        if (!Cull(context, cameras[0]))
        {
            return;
        }
        Setup(context, cameras[0]);
        DrawVisibleGeometry(context, cameras[0]);
        DrawSkyBox(context, cameras[0]);
        context.Submit();
    }

    private bool Cull(ScriptableRenderContext context, Camera camera)
    {
        if (camera.TryGetCullingParameters(out ScriptableCullingParameters p))
        {
            m_cullingResults = context.Cull(ref p);
            return true;
        }

        return false;
    }

    private void Setup(ScriptableRenderContext context, Camera camera)
    {
        context.SetupCameraProperties(camera);
        CommandBuffer cmd = CommandBufferPool.Get(m_commandBufferName);
        cmd.ClearRenderTarget(true, true, Color.clear);
        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();
        CommandBufferPool.Release(cmd);
    }

    // Draw the visible geometry.
    private void DrawVisibleGeometry(ScriptableRenderContext context, Camera camera)
    {
        var sortingSetting = new SortingSettings(camera);
        var drawingSetting = new DrawingSettings(m_unlitShaderTagId, sortingSetting);
        var filteringSetting = new FilteringSettings(RenderQueueRange.all);
        context.DrawRenderers(m_cullingResults, ref drawingSetting, ref filteringSetting);
    }

    // Draw the skybox.
    private void DrawSkyBox(ScriptableRenderContext context, Camera camera)
    {
        context.DrawSkybox(camera);
    }
}

Running the above code with the Frame Debugger open, I found that the object was indeed rendered.
(screenshot)
But here the object's material/Shader does not display correctly.
(screenshot)
This is because the code above uses the SRPDefaultUnlit pass for shading, while the default material in the current scene uses Lit (since the URP package is installed), and the Lit shader's pass is named ForwardLit. Since we are only drawing the SRPDefaultUnlit pass, we need to create a new material that uses an unlit shader, for example the URP Unlit shader with a plain color. As shown in the following two pictures:
(screenshot)
(screenshot)
At this point we have finished a simple pass at writing and learning an SRP script. For more detail you can refer to Catlike Coding, whose tutorials are more thorough and explain multi-camera handling, culling in more depth, transparent versus opaque rendering, and more.

4. Analyzing the logic and principles of URP operation step by step

This section mainly records some personal experiences when reading the URP source code.
Let's first take a look at the package Unity downloaded. It contains Editor files, Runtime files, other shader files, shader libraries, and so on. Our main goal here is to examine the code in the Runtime folder; the Editor folder is the code that customizes and extends the Unity editor.
(screenshot)
After entering Play mode and opening the Frame Debugger, capture a frame and you can see what was executed.
(screenshot)
We can see that there is only one Main Camera here, so URP executes the rendering commands related to the Main Camera. If another camera is added, URP will also execute that camera's rendering commands, so there must be a traversal over all cameras. In section 2 we briefly learned how to write an SRP extension, so we first look for the class in URP that inherits RenderPipeline: a custom render pipeline basically uses a RenderPipelineAsset subclass as the entry point to create an instance of a RenderPipeline subclass, and then overrides the Render method to submit rendering commands. So we first find the UniversalRenderPipelineAsset class and look at its CreatePipeline() method:

protected override RenderPipeline CreatePipeline()
        {
    
    
            if (m_RendererDataList == null)
                m_RendererDataList = new ScriptableRendererData[1];

            // If no default data we can't create pipeline instance
            if (m_RendererDataList[m_DefaultRendererIndex] == null)
            {
    
    
                // If previous version and current version are miss-matched then we are waiting for the upgrader to kick in
                if (k_AssetPreviousVersion != k_AssetVersion)
                    return null;

                if (m_RendererDataList[m_DefaultRendererIndex].GetType().ToString()
                    .Contains("Universal.ForwardRendererData"))
                    return null;

                Debug.LogError(
                    $"Default Renderer is missing, make sure there is a Renderer assigned as the default on the current Universal RP asset:{UniversalRenderPipeline.asset.name}",
                    this);
                return null;
            }

            DestroyRenderers();
            var pipeline = new UniversalRenderPipeline(this);
            CreateRenderers();

            // Blitter can only be initialized after renderers have been created and ResourceReloader has been
            // called on potentially empty shader resources
            foreach (var data in m_RendererDataList)
            {
    
    
                if (data is UniversalRendererData universalData)
                {
    
    
                    Blitter.Initialize(universalData.shaders.coreBlitPS, universalData.shaders.coreBlitColorAndDepthPS);
                    break;
                }
            }

            return pipeline;
        }

You can see some operations that fetch the Universal RP renderer data. Next, let's look at the UniversalRenderPipeline that gets created, mainly at what its Render method does.

#if UNITY_2021_1_OR_NEWER
        /// <inheritdoc/>
        protected override void Render(ScriptableRenderContext renderContext, List<Camera> cameras)
#else
        /// <inheritdoc/>
        protected override void Render(ScriptableRenderContext renderContext, Camera[] cameras)
#endif
        {
    
    
#if RENDER_GRAPH_ENABLED
            useRenderGraph = asset.enableRenderGraph;
#else
            useRenderGraph = false;
#endif

            SetHDRState(cameras);

            // When HDR is active we render UI overlay per camera as we want all UI to be calibrated to white paper inside a single pass
            // for performance reasons otherwise we render UI overlay after all camera
            SupportedRenderingFeatures.active.rendersUIOverlay = HDROutputIsActive();

            // TODO: Would be better to add Profiling name hooks into RenderPipelineManager.
            // C#8 feature, only in >= 2020.2
            using var profScope = new ProfilingScope(null, ProfilingSampler.Get(URPProfileId.UniversalRenderTotal));

#if UNITY_2021_1_OR_NEWER
            using (new ProfilingScope(null, Profiling.Pipeline.beginContextRendering))
            {
    
    
                BeginContextRendering(renderContext, cameras);
            }
#else
            using (new ProfilingScope(null, Profiling.Pipeline.beginFrameRendering))
            {
    
    
                BeginFrameRendering(renderContext, cameras);
            }
#endif

            GraphicsSettings.lightsUseLinearIntensity = (QualitySettings.activeColorSpace == ColorSpace.Linear);
            GraphicsSettings.lightsUseColorTemperature = true;
            GraphicsSettings.defaultRenderingLayerMask = k_DefaultRenderingLayerMask;
            SetupPerFrameShaderConstants();
            XRSystem.SetDisplayMSAASamples((MSAASamples)asset.msaaSampleCount);

#if UNITY_EDITOR
            // We do not want to start rendering if URP global settings are not ready (m_globalSettings is null)
            // or been deleted/moved (m_globalSettings is not necessarily null)
            if (m_GlobalSettings == null || UniversalRenderPipelineGlobalSettings.instance == null)
            {
    
    
                m_GlobalSettings = UniversalRenderPipelineGlobalSettings.Ensure();
                if(m_GlobalSettings == null) return;
            }
#endif

#if DEVELOPMENT_BUILD || UNITY_EDITOR
            if (DebugManager.instance.isAnyDebugUIActive)
                UniversalRenderPipelineDebugDisplaySettings.Instance.UpdateFrameTiming();
#endif

            SortCameras(cameras);
#if UNITY_2021_1_OR_NEWER
            for (int i = 0; i < cameras.Count; ++i)
#else
            for (int i = 0; i < cameras.Length; ++i)
#endif
            {
    
    
                var camera = cameras[i];
                if (IsGameCamera(camera))
                {
    
    
                    RenderCameraStack(renderContext, camera);
                }
                else
                {
    
    
                    using (new ProfilingScope(null, Profiling.Pipeline.beginCameraRendering))
                    {
    
    
                        BeginCameraRendering(renderContext, camera);
                    }
#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
                    //It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
                    VFX.VFXManager.PrepareCamera(camera);
#endif
                    UpdateVolumeFramework(camera, null);

                    RenderSingleCameraInternal(renderContext, camera);

                    using (new ProfilingScope(null, Profiling.Pipeline.endCameraRendering))
                    {
    
    
                        EndCameraRendering(renderContext, camera);
                    }
                }
            }

            s_RenderGraph.EndFrame();

#if UNITY_2021_1_OR_NEWER
            using (new ProfilingScope(null, Profiling.Pipeline.endContextRendering))
            {
    
    
                EndContextRendering(renderContext, cameras);
            }
#else
            using (new ProfilingScope(null, Profiling.Pipeline.endFrameRendering))
            {
    
    
                EndFrameRendering(renderContext, cameras);
            }
#endif

#if ENABLE_SHADER_DEBUG_PRINT
            ShaderDebugPrintManager.instance.EndFrame();
#endif
        }

According to the code above, it first sets the HDR state and some other per-frame state. The core of Render is essentially BeginRendering (start rendering) -> traverse the camera array -> EndRendering (end rendering). The main purpose of
traversing the cameras is to handle game cameras, which may carry a camera stack (the official manual describes camera stacking very clearly), separately from other cameras. A non-game camera simply goes through the normal flow: BeginCamera -> render the camera -> EndCamera. For game cameras this process takes place in the RenderCameraStack method, which is the camera loop we mainly study here. As the code above shows, any game camera goes into the RenderCameraStack method; its code is listed below. Its main job is to find the last active camera in the stack, make that camera the one that finally resolves to the screen, and have the other cameras render into an intermediate render target. There is no doubt that the bottom layer still uses CommandBuffers to collect commands and submits them through the CommandBuffer and the context; an RT holds the color output of the previous cameras, and the last camera outputs the final image.

/// <summary>
        /// Renders a camera stack. This method calls RenderSingleCamera for each valid camera in the stack.
        /// The last camera resolves the final target to screen.
        /// </summary>
        /// <param name="context">Render context used to record commands during execution.</param>
        /// <param name="camera">Camera to render.</param>
        static void RenderCameraStack(ScriptableRenderContext context, Camera baseCamera)
        {
    
    
            using var profScope = new ProfilingScope(null, ProfilingSampler.Get(URPProfileId.RenderCameraStack));

            baseCamera.TryGetComponent<UniversalAdditionalCameraData>(out var baseCameraAdditionalData);

            // Overlay cameras will be rendered stacked while rendering base cameras
            if (baseCameraAdditionalData != null && baseCameraAdditionalData.renderType == CameraRenderType.Overlay)
                return;

            // Renderer contains a stack if it has additional data and the renderer supports stacking
            // The renderer is checked if it supports Base camera. Since Base is the only relevant type at this moment.
            var renderer = baseCameraAdditionalData?.scriptableRenderer;
            bool supportsCameraStacking = renderer != null && renderer.SupportsCameraStackingType(CameraRenderType.Base);
            List<Camera> cameraStack = (supportsCameraStacking) ? baseCameraAdditionalData?.cameraStack : null;

            bool anyPostProcessingEnabled = baseCameraAdditionalData != null && baseCameraAdditionalData.renderPostProcessing;
            int rendererCount = asset.m_RendererDataList.Length;

            // We need to know the last active camera in the stack to be able to resolve
            // rendering to screen when rendering it. The last camera in the stack is not
            // necessarily the last active one as it users might disable it.
            int lastActiveOverlayCameraIndex = -1;
            if (cameraStack != null)
            {
    
    
                var baseCameraRendererType = baseCameraAdditionalData?.scriptableRenderer.GetType();
                bool shouldUpdateCameraStack = false;

                cameraStackRequiresDepthForPostprocessing = false;

                for (int i = 0; i < cameraStack.Count; ++i)
                {
    
    
                    Camera currCamera = cameraStack[i];
                    if (currCamera == null)
                    {
    
    
                        shouldUpdateCameraStack = true;
                        continue;
                    }

                    if (currCamera.isActiveAndEnabled)
                    {
    
    
                        currCamera.TryGetComponent<UniversalAdditionalCameraData>(out var data);

                        // Checking if the base and the overlay camera is of the same renderer type.
                        var currCameraRendererType = data?.scriptableRenderer.GetType();
                        if (currCameraRendererType != baseCameraRendererType)
                        {
    
    
                            Debug.LogWarning("Only cameras with compatible renderer types can be stacked. " +
                                             $"The camera: {currCamera.name} are using the renderer {currCameraRendererType.Name}, " +
                                             $"but the base camera: {baseCamera.name} are using {baseCameraRendererType.Name}. Will skip rendering");
                            continue;
                        }

                        var overlayRenderer = data.scriptableRenderer;
                        // Checking if they are the same renderer type but just not supporting Overlay
                        if ((overlayRenderer.SupportedCameraStackingTypes() & 1 << (int)CameraRenderType.Overlay) == 0)
                        {
    
    
                            Debug.LogWarning($"The camera: {currCamera.name} is using a renderer of type {renderer.GetType().Name} which does not support Overlay cameras in it's current state.");
                            continue;
                        }

                        if (data == null || data.renderType != CameraRenderType.Overlay)
                        {
    
    
                            Debug.LogWarning($"Stack can only contain Overlay cameras. The camera: {currCamera.name} " +
                                             $"has a type {data.renderType} that is not supported. Will skip rendering.");
                            continue;
                        }

                        cameraStackRequiresDepthForPostprocessing |= CheckPostProcessForDepth();

                        anyPostProcessingEnabled |= data.renderPostProcessing;
                        lastActiveOverlayCameraIndex = i;
                    }
                }
                if (shouldUpdateCameraStack)
                {
    
    
                    baseCameraAdditionalData.UpdateCameraStack();
                }
            }

            // Post-processing not supported in GLES2.
            anyPostProcessingEnabled &= SystemInfo.graphicsDeviceType != GraphicsDeviceType.OpenGLES2;

            bool isStackedRendering = lastActiveOverlayCameraIndex != -1;

            // Prepare XR rendering
            var xrActive = false;
            var xrRendering = baseCameraAdditionalData?.allowXRRendering ?? true;
            var xrLayout = XRSystem.NewLayout();
            xrLayout.AddCamera(baseCamera, xrRendering);

            // With XR multi-pass enabled, each camera can be rendered multiple times with different parameters
            foreach ((Camera _, XRPass xrPass) in xrLayout.GetActivePasses())
            {
    
    
                if (xrPass.enabled)
                {
    
    
                    xrActive = true;
                    UpdateCameraStereoMatrices(baseCamera, xrPass);
                }


                using (new ProfilingScope(null, Profiling.Pipeline.beginCameraRendering))
                {
    
    
                    BeginCameraRendering(context, baseCamera);
                }
                // Update volumeframework before initializing additional camera data
                UpdateVolumeFramework(baseCamera, baseCameraAdditionalData);
                InitializeCameraData(baseCamera, baseCameraAdditionalData, !isStackedRendering, out var baseCameraData);
                RenderTextureDescriptor originalTargetDesc = baseCameraData.cameraTargetDescriptor;

#if ENABLE_VR && ENABLE_XR_MODULE
                if (xrPass.enabled)
                {
    
    
                    baseCameraData.xr = xrPass;

                    // Helper function for updating cameraData with xrPass Data
                    // Need to update XRSystem using baseCameraData to handle the case where camera position is modified in BeginCameraRendering
                    UpdateCameraData(ref baseCameraData, baseCameraData.xr);

                    // Handle the case where camera position is modified in BeginCameraRendering
                    xrLayout.ReconfigurePass(baseCameraData.xr, baseCamera);
                    XRSystemUniversal.BeginLateLatching(baseCamera, baseCameraData.xrUniversal);
                }
#endif
                // InitializeAdditionalCameraData needs to be initialized after the cameraTargetDescriptor is set because it needs to know the
                // msaa level of cameraTargetDescriptor and XR modifications.
                InitializeAdditionalCameraData(baseCamera, baseCameraAdditionalData, !isStackedRendering, ref baseCameraData);

#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
                //It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
                VFX.VFXManager.PrepareCamera(baseCamera);
#endif
#if ADAPTIVE_PERFORMANCE_2_0_0_OR_NEWER
                if (asset.useAdaptivePerformance)
                    ApplyAdaptivePerformance(ref baseCameraData);
#endif
                // update the base camera flag so that the scene depth is stored if needed by overlay cameras later in the frame
                baseCameraData.postProcessingRequiresDepthTexture |= cameraStackRequiresDepthForPostprocessing;

                RenderSingleCamera(context, ref baseCameraData, anyPostProcessingEnabled);
                using (new ProfilingScope(null, Profiling.Pipeline.endCameraRendering))
                {
    
    
                    EndCameraRendering(context, baseCamera);
                }

                // Late latching is not supported after this point
                if (baseCameraData.xr.enabled)
                    XRSystemUniversal.EndLateLatching(baseCamera, baseCameraData.xrUniversal);

                if (isStackedRendering)
                {
    
    
                    for (int i = 0; i < cameraStack.Count; ++i)
                    {
    
    
                        var currCamera = cameraStack[i];
                        if (!currCamera.isActiveAndEnabled)
                            continue;

                        currCamera.TryGetComponent<UniversalAdditionalCameraData>(out var currAdditionalCameraData);
                        // Camera is overlay and enabled
                        if (currAdditionalCameraData != null)
                        {
    
    
                            // Copy base settings from base camera data and initialize initialize remaining specific settings for this camera type.
                            CameraData overlayCameraData = baseCameraData;
                            overlayCameraData.camera = currCamera;
                            overlayCameraData.baseCamera = baseCamera;

                            UpdateCameraStereoMatrices(currAdditionalCameraData.camera, xrPass);

                            using (new ProfilingScope(null, Profiling.Pipeline.beginCameraRendering))
                            {
    
    
                                BeginCameraRendering(context, currCamera);
                            }
#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
                            //It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
                            VFX.VFXManager.PrepareCamera(currCamera);
#endif
                            UpdateVolumeFramework(currCamera, currAdditionalCameraData);

                            bool lastCamera = i == lastActiveOverlayCameraIndex;
                            InitializeAdditionalCameraData(currCamera, currAdditionalCameraData, lastCamera, ref overlayCameraData);

                            xrLayout.ReconfigurePass(overlayCameraData.xr, currCamera);

                            RenderSingleCamera(context, ref overlayCameraData, anyPostProcessingEnabled);

                            using (new ProfilingScope(null, Profiling.Pipeline.endCameraRendering))
                            {
    
    
                                EndCameraRendering(context, currCamera);
                            }
                        }
                    }
                }

                if (baseCameraData.xr.enabled)
                    baseCameraData.cameraTargetDescriptor = originalTargetDesc;
            }

            if (xrActive)
            {
    
    
                CommandBuffer cmd = CommandBufferPool.Get();
                XRSystem.RenderMirrorView(cmd, baseCamera);
                context.ExecuteCommandBuffer(cmd);
                context.Submit();
                CommandBufferPool.Release(cmd);
            }

            XRSystem.EndLayout();
        }

4. Compilation of rendering knowledge involved in URP's built-in Shader scripts and the Lit shader

To be organized later.

5. Related reading links

Unity official Universal RP documentation
Unity Universal RP Manual
Catlike Coding's Scriptable RP tutorial
Entering the world of LWRP (Universal RP)

Source: blog.csdn.net/qq_41094072/article/details/131884750