An Introduction to Unity's CommandBuffer

  Hello everyone, I am Azhao.
  I have previously written about using PostProcessing for screen post-processing effects, but PostProcessing is not the only way to get them.
  PostProcessing is certainly powerful. Different layers can drive different screen effects, and a non-global PostProcessVolume can blend a post-processing effect in and out within a certain range. But if we don't need any of that and simply want to apply a screen effect to a specific camera when we need it, there are actually plenty of alternatives: Unity itself provides the Graphics interface and the CommandBuffer-related methods, either of which can produce screen effects directly.
  Judging from how PostProcessing is implemented, it actually uses CommandBuffer under the hood and simply wraps it up.
  So let's take a look at CommandBuffer.

1. Graphics and CommandBuffer

  Before PostProcessing appeared, we could already create screen post-processing effects. The usual approach was:
  in the OnRenderImage lifecycle method, call Graphics.Blit(source, destination, material) to run the screen image passed in by the camera through the shader of the specified material, producing a new screen image, which is then displayed on screen.
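
  A minimal sketch of that classic pattern; the class name and material field are placeholders, assuming the material's shader does the actual screen processing:

    using UnityEngine;

    // Hypothetical example of the OnRenderImage + Graphics.Blit pattern.
    [RequireComponent(typeof(Camera))]
    public class ScreenEffectSketch : MonoBehaviour
    {
        public Material effectMat; // material whose shader processes the image

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if (effectMat != null)
            {
                // Run the camera's rendered image through the shader.
                Graphics.Blit(source, destination, effectMat);
            }
            else
            {
                // No material assigned: pass the image through unchanged.
                Graphics.Blit(source, destination);
            }
        }
    }
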
  This is where Graphics comes in.
  Graphics is a graphics drawing interface provided by Unity. Besides the Blit method, it offers many other drawing methods, such as CopyTexture, DrawMesh, and DrawTexture.
  Now look at CommandBuffer: it also provides a CommandBuffer.Blit method, as well as methods such as DrawMesh. This is the point where many people start to get confused. What is the difference between Graphics and CommandBuffer? Why do they both provide similar methods? When should Graphics be used, and when CommandBuffer?
  Although the two look similar, what they represent is different.

1、Graphics

  When we call a Graphics method, it takes effect either immediately or during the frame's regular rendering. For example, Graphics.Blit immediately runs the source image through the shader and outputs a new image, while Graphics.DrawMesh queues the mesh to be drawn during the frame's regular rendering rather than drawing it on the spot. If you use DrawMeshNow instead, the mesh is rendered immediately, at the call site. In every case, a Graphics call is a command that actually gets executed.
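
  A small sketch of the contrast, assuming the script sits on a camera (so OnPostRender fires) with a mesh and material assigned; the names are illustrative:

    using UnityEngine;

    public class DrawMeshSketch : MonoBehaviour
    {
        public Mesh mesh;
        public Material material;

        void Update()
        {
            // Queued: drawn later during the frame's regular rendering,
            // not at this call site.
            Graphics.DrawMesh(mesh, Matrix4x4.identity, material, 0);
        }

        void OnPostRender()
        {
            // Drawn immediately, in the current frame; the material pass
            // must be applied first.
            material.SetPass(0);
            Graphics.DrawMeshNow(mesh, Matrix4x4.identity);
        }
    }
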

2、CommandBuffer

  A CommandBuffer is an object: essentially a list of commands. We can create a CommandBuffer object and add the things we want it to do to its command list. When the list executes is up to us. For example, we can attach it to a particular stage of a camera's rendering, or run it in the middle of a particular stage of a light's rendering. We can also call the Graphics.ExecuteCommandBuffer method to execute the recorded commands immediately.
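
  A minimal sketch of recording a command list and then choosing when it runs; the class and field names are illustrative:

    using UnityEngine;
    using UnityEngine.Rendering;

    public class CommandBufferRunSketch : MonoBehaviour
    {
        public Mesh mesh;
        public Material material;

        void Start()
        {
            CommandBuffer cmd = new CommandBuffer();
            cmd.name = "RecordedCommands";
            // Recorded into the list; nothing is drawn at this point.
            cmd.DrawMesh(mesh, Matrix4x4.identity, material);

            // Option 1: execute the recorded commands right now.
            Graphics.ExecuteCommandBuffer(cmd);

            // Option 2 (instead): hand the list to a camera so it runs
            // at a chosen stage of that camera's rendering.
            // GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd);
        }
    }
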

2. The execution process of CommandBuffer

1. Normal rendering process

  If we open Unity's built-in Frame Debugger, we can see the entire rendering process.
For example, with the Deferred rendering path, the process is as follows:
[Screenshot: Frame Debugger showing the Deferred rendering steps]

If the camera uses the Forward rendering path, the process is as shown below:
[Screenshots: Frame Debugger showing the Forward rendering steps]

  If you select one of the steps, you can see exactly what is rendered at that step. Walking through the process step by step like this is very helpful for understanding how rendering works.

2. Inserting a CommandBuffer into the process

  By adding a CommandBuffer, we can insert the rendering work we want into one of the steps of the rendering process above.
  For example, here is a snippet of code:

        cmd1 = new CommandBuffer();
        cmd1.name = "AzhaoDrawLightObj";

        // Render into rt1, clearing its color and depth first.
        cmd1.SetRenderTarget(rt1);
        cmd1.ClearRenderTarget(true, true, Color.clear);

        // Draw every active renderer into rt1: objects on layer 8 (the
        // glowing objects) in white, everything else in black.
        for (int i = 0; i < renders.Length; i++)
        {
            Renderer item = renders[i];
            if (item.gameObject.activeInHierarchy == false || item.enabled == false)
            {
                continue;
            }
            if (item.gameObject.layer == 8)
            {
                cmd1.DrawRenderer(item, whiteMat);
            }
            else
            {
                cmd1.DrawRenderer(item, blackMat);
            }
        }

        // Blur the mask by ping-ponging between rt2 and rt3 with the two
        // passes of the blur material.
        cmd1.Blit(rt1, rt2);
        for (int i = 0; i < iterations; i++)
        {
            blurMat.SetFloat("_BlurSize", 1.0f + i * blurSpread);
            cmd1.Blit(rt2, rt3, blurMat, 0);
            cmd1.Blit(rt3, rt2, blurMat, 1);
        }
        cmd1.Blit(rt2, rt1);

        // Feed the blurred mask to the composite material, then attach the
        // buffer so it runs just before image effects.
        comMat.SetTexture("_AddTex", rt1);
        cam.AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd1);

  Here I created a CommandBuffer command list named AzhaoDrawLightObj. I wanted a cube sitting between the two spheres from earlier to glow, so I added quite a few commands to this AzhaoDrawLightObj CommandBuffer. There is no need to dig into what each command does right now; I will explain them in a separate article later. For reference, the fields the snippet uses might be declared as in the sketch below.
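
  These declarations are assumptions for illustration, not the author's actual code:

    // Hypothetical context for the snippet above:
    Camera cam;                      // camera the buffer is attached to
    CommandBuffer cmd1;              // the command list being built
    RenderTexture rt1, rt2, rt3;     // mask target and ping-pong blur targets
    Material whiteMat, blackMat;     // unlit white/black mask materials
    Material blurMat, comMat;        // blur passes and final composite
    Renderer[] renders;              // scene renderers drawn into the mask
    int iterations = 3;              // blur iteration count (assumed default)
    float blurSpread = 0.6f;         // blur step growth (assumed default)
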
Only two parts need attention here.
The first part:

cmd1 = new CommandBuffer();
cmd1.name = "AzhaoDrawLightObj";
cmd1.SetRenderTarget(rt1);
cmd1.ClearRenderTarget(true, true, Color.clear);

  This part creates the CommandBuffer, gives it a name, sets the RenderTexture it will render into, and clears that RenderTexture.
The second part:

cam.AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd1);

  In this line, cam is a Camera. Through the AddCommandBuffer method, I attach the CommandBuffer I just created to the camera, and CameraEvent.BeforeImageEffects specifies at which point in the rendering process the CommandBuffer runs.
Look back at the rendering process in the Frame Debugger:
[Screenshot: Frame Debugger with the BeforeImageEffects stage inserted]

  You will find that a BeforeImageEffects stage has appeared in the rendering process, and inside it you can see the AzhaoDrawLightObj pass I just added.
[Screenshot: the AzhaoDrawLightObj pass inside BeforeImageEffects]

  Now there is a lone glowing cube between the two spheres. CommandBuffer is relatively flexible: you can control the rendering of a single object, and you can control exactly what processing happens at a specific point in the frame. These degrees of freedom are hard to achieve with the fixed pipeline of PostProcessing.
[Screenshot: the glowing cube rendered between the two spheres]

  At this point you can also see which CommandBuffers have been added to the camera.
  It is worth noting that a CommandBuffer only needs to be added to the camera once, in the OnEnable lifecycle method, and it will keep taking effect every frame. Do not add it in Update, or you will end up with a pile of duplicate additions.
This is the consequence of calling AddCommandBuffer in Update:
[Screenshot: the same CommandBuffer added many times over]
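
  A minimal sketch of the add-once pattern, assuming cmd1 is built as in the snippet earlier:

    void OnEnable()
    {
        cam = GetComponent<Camera>();
        // Build cmd1 and record its commands here (as shown above), then
        // attach it exactly once; the camera replays it every frame.
        cam.AddCommandBuffer(CameraEvent.BeforeImageEffects, cmd1);
    }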

Since it is added in OnEnable, remember to remove and release it in OnDisable:

void OnDisable()
{
    if (cmd1 != null)
    {
        // Detach the buffer from the camera first, then release it.
        cam.RemoveCommandBuffer(CameraEvent.BeforeImageEffects, cmd1);
        cmd1.Dispose();
        cmd1 = null;
    }
}

3. The events used when adding a CommandBuffer

  What needs explaining here are the CameraEvent values. Different rendering paths support different CameraEvent values, so be careful not to use an event that does not apply to your camera's path. Besides CameraEvent there is also LightEvent, which hooks into light and shadow rendering.
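
  A small sketch of guarding against the wrong event by checking the camera's actual rendering path; the event choices here are only illustrative:

    // Pick an insertion point that exists in the path this camera uses,
    // so a deferred-only event is not used on a forward camera.
    CameraEvent evt = cam.actualRenderingPath == RenderingPath.DeferredShading
        ? CameraEvent.AfterGBuffer        // deferred-only event
        : CameraEvent.AfterForwardOpaque; // forward counterpart
    cam.AddCommandBuffer(evt, cmd1);

  With that caveat in mind, the following is the execution order of CameraEvent and LightEvent as described in Unity's official documentation.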

1、Deferred rendering path

CameraEvent.BeforeGBuffer
  Unity renders opaque geometry.
CameraEvent.AfterGBuffer
  Unity resolves depth.
CameraEvent.BeforeReflections
  Unity renders default reflections, and Reflection Probe reflections.
CameraEvent.AfterReflections
  Unity copies reflections to the Emissive channel of the G-buffer.
CameraEvent.BeforeLighting
  Unity renders shadows. See LightEvent order of execution.
CameraEvent.AfterLighting
CameraEvent.BeforeFinalPass
  Unity processes the final pass.
CameraEvent.AfterFinalPass
CameraEvent.BeforeForwardOpaque (only called if there is opaque geometry that cannot be rendered using deferred)
  Unity renders opaque geometry that cannot be rendered with deferred rendering.
CameraEvent.AfterForwardOpaque (only called if there is opaque geometry that cannot be rendered using deferred)
CameraEvent.BeforeSkybox
  Unity renders the skybox.
CameraEvent.AfterSkybox
  Unity renders halos.
CameraEvent.BeforeImageEffectsOpaque
  Unity applies opaque-only post-processing effects.
CameraEvent.AfterImageEffectsOpaque
CameraEvent.BeforeForwardAlpha
  Unity renders transparent geometry, and UI Canvases with a Rendering Mode of Screen Space - Camera.
CameraEvent.AfterForwardAlpha
CameraEvent.BeforeHaloAndLensFlares
  Unity renders lens flares.
CameraEvent.AfterHaloAndLensFlares
CameraEvent.BeforeImageEffects
  Unity applies post-processing effects.
CameraEvent.AfterImageEffects
CameraEvent.AfterEverything
  Unity renders UI Canvases with a Rendering Mode that is not Screen Space - Camera.

2、Forward rendering path

CameraEvent.BeforeDepthTexture
  Unity renders depth for opaque geometry.
CameraEvent.AfterDepthTexture
CameraEvent.BeforeDepthNormalsTexture
  Unity renders depth normals for opaque geometry.
CameraEvent.AfterDepthNormalsTexture
  Unity renders shadows. See LightEvent order of execution.
CameraEvent.BeforeForwardOpaque
  Unity renders opaque geometry.
CameraEvent.AfterForwardOpaque
CameraEvent.BeforeSkybox
  Unity renders the skybox.
CameraEvent.AfterSkybox
  Unity renders halos.
CameraEvent.BeforeImageEffectsOpaque
  Unity applies opaque-only post-processing effects.
CameraEvent.AfterImageEffectsOpaque
CameraEvent.BeforeForwardAlpha
  Unity renders transparent geometry, and UI Canvases with a Rendering Mode of Screen Space - Camera.
CameraEvent.AfterForwardAlpha
CameraEvent.BeforeHaloAndLensFlares
  Unity renders lens flares.
CameraEvent.AfterHaloAndLensFlares
CameraEvent.BeforeImageEffects
  Unity applies post-processing effects.
CameraEvent.AfterImageEffects
CameraEvent.AfterEverything
  Unity renders UI Canvases with a Rendering Mode that is not Screen Space - Camera.

3、LightEvent

LightEvent.BeforeShadowMap
LightEvent.BeforeShadowMapPass
  Unity renders all shadow casters for the current Pass.
LightEvent.AfterShadowMapPass
  Unity repeats the last three steps, for each Pass.
LightEvent.AfterShadowMap
LightEvent.BeforeScreenSpaceMask
  Unity gathers the shadow map into a screen space buffer and performs filtering.
LightEvent.AfterScreenSpaceMask
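
  Light buffers are attached the same way as camera buffers, just via Light.AddCommandBuffer with a LightEvent. A minimal sketch, with illustrative names:

    // Attach a buffer to a light so it runs right after the light's
    // shadow map has been rendered.
    Light lt = GetComponent<Light>();
    CommandBuffer lightCmd = new CommandBuffer();
    lightCmd.name = "AfterShadowMapSketch";
    // ... record commands here, e.g. work that reads the shadow map ...
    lt.AddCommandBuffer(LightEvent.AfterShadowMap, lightCmd);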

Source: blog.csdn.net/liweizhao/article/details/131692403