SRP Batching Problem

1) SRP batching problem
2) Multiple Base cameras rendering to the same render target causes a garbled screen on mobile platforms
3) Particle system support for GPU Instancing
4) How to separate scene and UI resolution under URP (without changing the color space)


This is the 327th UWA technical knowledge sharing push. It selects hot topics from the UWA community, covering UWA Q&A, community posts, and other technical knowledge points, to help everyone learn more comprehensively.

Rendering

Q: In our project, the scene objects are all batched by the SRP Batcher. The requirements for SRP batching are the same Shader and the same Keywords. However, in practice we found that other factors, such as the Cull state (Off or Back), also affect batching.

So there are two questions:
1. Besides the Cull state and Keywords mentioned above, are there any other situations where objects cannot be batched?
2. For batches broken by different Cull states, is there any practical experience on how to batch as much as possible without hurting performance (such as changing Cull to Off)?

For the second point, we try to set Cull to Back wherever possible so that back faces are culled, but some objects have to use Cull Off, so this cannot always be solved. Is there any other solution?

A1: Interleaved positions between objects using different materials will also break batching.

For example, suppose there are three Shaders in the scene, A, B, and C, each used by three objects, and the nine objects are placed randomly in the scene so that their positions are interleaved. In theory they could be drawn in three SRP batches, but they end up split into 4 to 6 SRP batches, because different interleaving orders result in different numbers of batches.

At the same time, I would like to ask: is there any way to draw them in only 3 SRP batches even when the positions are interleaved?

Thanks to Fantic-Xush@UWA Q&A community for providing answers

A2: Batching submits one set of render state for a group of objects whose materials share the same attributes, which requires that there be no state changes during rendering. The items in your screenshot each represent an attribute; as long as one of them changes, the batch is interrupted. Those attributes are hard requirements; the rest depends on the rendering order the developer sets.

As for batching failures caused by object positions: if the objects are transparent, the rendering order is determined by distance so that transparency blends correctly; if the objects are opaque, they may also be reordered (roughly front to back) for rendering efficiency.
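As a follow-up on the sorting point: URP sorts opaque objects with SortingCriteria.CommonOpaque, which applies a quantized front-to-back sort before optimizing for state changes, so interleaved materials can still split batches. Below is a minimal sketch of one possible experiment, assuming a custom ScriptableRenderPass (the class name and pass setup here are hypothetical, not URP's own code): drop the front-to-back criterion so the sorter groups draws by state first. The trade-off is more overdraw, which can hurt on mobile GPUs.

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical pass: draws opaques sorted primarily to minimize state changes.
class StateSortedOpaquePass : ScriptableRenderPass
{
    static readonly ShaderTagId s_ShaderTag = new ShaderTagId("UniversalForward");

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        var sortingSettings = new SortingSettings(renderingData.cameraData.camera)
        {
            // CommonOpaque minus QuantizedFrontToBack: prefer batching over depth order.
            criteria = SortingCriteria.CommonOpaque & ~SortingCriteria.QuantizedFrontToBack
        };
        var drawSettings = new DrawingSettings(s_ShaderTag, sortingSettings);
        var filterSettings = new FilteringSettings(RenderQueueRange.opaque);
        context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref filterSettings);
    }
}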

Thanks to Li Wei@UWA Q&A Community for providing answers


Rendering

Q: A scene camera (Base) and a UI camera (Overlay) are resident in the scene. Sometimes a dynamically loaded Prefab (such as a certain model) comes with its own rendering camera (called the dynamic camera below), and that camera's Render Type is Base.

Because I want to composite over the previous rendering result, the Background Type is set to Uninitialized, which causes the Load Action of the render target on mobile platforms to be DontCare, so the parts of the screen not covered by the dynamic camera appear garbled.

But I figured that since an Overlay camera can composite correctly, a Base camera should be able to as well. So I read the source code, added some logs, and found that when the dynamically loaded Base camera calls SetRenderTarget, the Load Action of the colorBuffer is indeed Load. So I am confused: why is it still DontCare on the mobile platform?

By the way, is the correct way to meet this requirement to change the dynamic camera to Overlay and use code to add it to the CameraStack of the resident scene camera?
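For reference, stacking the camera in code looks roughly like the sketch below. It uses URP's public camera-stacking API; the method and variable names, and how the cameras are obtained, are illustrative.

using UnityEngine;
using UnityEngine.Rendering.Universal;

public static class CameraStackUtil
{
    // Attach a dynamically loaded camera to a resident Base camera's stack.
    public static void StackOnto(Camera baseCamera, Camera dynamicCamera)
    {
        // Switch the dynamic camera's Render Type from Base to Overlay.
        var dynamicData = dynamicCamera.GetUniversalAdditionalCameraData();
        dynamicData.renderType = CameraRenderType.Overlay;

        // Append it to the Base camera's stack; remember to remove it again
        // before the Prefab is unloaded.
        var baseData = baseCamera.GetUniversalAdditionalCameraData();
        if (!baseData.cameraStack.Contains(dynamicCamera))
            baseData.cameraStack.Add(dynamicCamera);
    }
}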

To understand this further, I referred to the approach of the FinalBlitPass that comes with URP.

But the captured result in Xcode still does not match up.

I found that when blitting to an RT that already has content, the RT's Load Action defaults to Load. Under the built-in pipeline, RenderTexture.DiscardContents can be used to avoid this. Is there a similar method under URP?
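One commonly used workaround, sketched below under the assumption that you control the blit in a custom ScriptableRenderPass (the class and field names are illustrative), is to bind the target yourself with an explicit DontCare load action and draw a fullscreen mesh, instead of relying on cmd.Blit's implicit Load:

using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Hypothetical pass: copy 'source' into 'destination' without loading
// destination's previous contents back onto tile-based mobile GPUs.
class DiscardingBlitPass : ScriptableRenderPass
{
    Material m_BlitMaterial;   // a simple fullscreen copy material sampling _SourceTex
    RenderTargetIdentifier m_Source, m_Destination;

    public DiscardingBlitPass(Material blitMaterial)
    {
        m_BlitMaterial = blitMaterial;
        renderPassEvent = RenderPassEvent.AfterRendering;
    }

    public void Setup(RenderTargetIdentifier source, RenderTargetIdentifier destination)
    {
        m_Source = source;
        m_Destination = destination;
    }

    public override void Execute(ScriptableRenderContext context, ref RenderingData renderingData)
    {
        CommandBuffer cmd = CommandBufferPool.Get("DiscardingBlit");

        // Explicit load action: DontCare avoids restoring the old tile contents.
        cmd.SetRenderTarget(m_Destination,
            RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store);

        // Draw a fullscreen mesh instead of cmd.Blit, which would re-bind
        // the target with a Load action.
        cmd.SetGlobalTexture("_SourceTex", m_Source);
        cmd.DrawMesh(RenderingUtils.fullscreenMesh, Matrix4x4.identity, m_BlitMaterial);

        context.ExecuteCommandBuffer(cmd);
        CommandBufferPool.Release(cmd);
    }
}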

Experienced developers are welcome to go to the community to discuss and share their thoughts on the problems above.


Rendering

Q: Can the particle system support GPU Instancing? I tried some examples but could not see GPU Instancing taking effect.

A1: Unity 2018 already supports Particle System GPU Instancing, but the renderer's Render Mode must be Mesh. For details, please refer to this document:
Unity - Manual: Particle System GPU Instancing
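As a quick setup checklist in code (a sketch of the required renderer settings; the component name is illustrative):

using UnityEngine;

// Minimal setup check for Particle System GPU Instancing (Unity 2018+).
// A mesh must be assigned on the renderer, and the material's shader must
// support procedural instancing (e.g. the standard particle shaders with
// "Enable GPU Instancing" checked, or a custom shader using
// #pragma instancing_options procedural:vertInstancingSetup).
[RequireComponent(typeof(ParticleSystem))]
public class EnableParticleInstancing : MonoBehaviour
{
    void Awake()
    {
        var psRenderer = GetComponent<ParticleSystemRenderer>();
        psRenderer.renderMode = ParticleSystemRenderMode.Mesh; // required: Mesh mode only
        psRenderer.enableGPUInstancing = true;
    }
}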

This answer was provided by UWA

A2: Is it really necessary to implement the particle system with GPU Instancing? A particle system's implementation is similar to a GUI's: there is not much difference between putting the per-particle data in a VBO or a UBO, so instancing cannot greatly improve efficiency, and the scenarios where it pays off are limited.

Thanks to Li Wei@UWA Q&A Community for providing answers


Rendering

Q: How to separate scene and UI resolution under URP (without changing the color space)?

For now I am not using the scene-in-Linear, UI-in-Gamma approach. I simply want to lower the scene resolution without changing the UI resolution, and I don't want to give the UI a separate Buffer.

Looking at the URP source code, the Overlay UI camera directly uses the Base camera's Buffer.

I saw a solution before that draws the UI directly to the screen. I imitated FinalBlitPass: in DrawObjectsPass I check whether the current camera is the UI camera and, if so, reset the render target with SetRenderTarget. But it had no effect and the UI was not drawn. Is this plan feasible?

DrawObjectsPass.cs (modified inside Execute()):

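// Modification inside DrawObjectsPass.Execute(): cameras tagged "UICamera"
// are redirected to the backbuffer (CameraTarget) instead of the camera's
// intermediate color target.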
if (!renderingData.cameraData.camera.CompareTag("UICamera"))
{
    context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref filterSettings, ref m_RenderStateBlock);
}
else
{
    cmd.SetRenderTarget(BuiltinRenderTextureType.CameraTarget,
        RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, // color
        RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
    context.ExecuteCommandBuffer(cmd);
    cmd.Clear();
    context.DrawRenderers(renderingData.cullResults, ref drawSettings, ref filterSettings, ref m_RenderStateBlock);
}

A1: You can render the 3D scene into an RT, and then draw that RT in the UI as the texture of a RawImage. That way you control the scene's rendering resolution by controlling the resolution of the RT.
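A minimal sketch of that approach (names are illustrative; the RawImage is assumed to cover the screen under the UI canvas):

using UnityEngine;
using UnityEngine.UI;

// Render the scene camera into a reduced-resolution RT and show it
// full-screen through a RawImage, leaving the UI at native resolution.
public class ScaledSceneView : MonoBehaviour
{
    public Camera sceneCamera;        // Base camera rendering the 3D scene
    public RawImage sceneImage;       // full-screen RawImage under the UI canvas
    [Range(0.25f, 1f)] public float sceneScale = 0.7f;

    RenderTexture m_SceneRT;

    void OnEnable()
    {
        int w = Mathf.RoundToInt(Screen.width * sceneScale);
        int h = Mathf.RoundToInt(Screen.height * sceneScale);
        m_SceneRT = new RenderTexture(w, h, 24, RenderTextureFormat.Default);
        sceneCamera.targetTexture = m_SceneRT;
        sceneImage.texture = m_SceneRT;
    }

    void OnDisable()
    {
        sceneCamera.targetTexture = null;
        sceneImage.texture = null;
        if (m_SceneRT != null) { m_SceneRT.Release(); Destroy(m_SceneRT); }
    }
}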

For more answers, please refer to this Q&A:
SRP Batching Questions -- UWA Q&A | Interactive Q&A Community for Game Developers

Thanks to han@UWA Q&A community for providing answers

A2: Let a SceneCamera and a UICamera be responsible for rendering the scene and the UI respectively, then modify the URP source code so that a Component mounted on each Camera can override that camera's RenderScale; the UI camera's RenderScale can be kept at 1 or higher.
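A sketch of what that could look like. The component below is hypothetical, and the part that reads it must be hand-patched into the URP source (for example where URP fills in cameraData.renderScale), which is exactly the source modification this answer refers to:

using UnityEngine;

// Hypothetical per-camera override, read by a hand-patched URP.
[RequireComponent(typeof(Camera))]
public class PerCameraRenderScale : MonoBehaviour
{
    [Range(0.25f, 2f)] public float renderScale = 1f;
}

// In the modified URP source (illustrative, where camera data is built):
//
//   var overrideScale = camera.GetComponent<PerCameraRenderScale>();
//   cameraData.renderScale = overrideScale != null
//       ? overrideScale.renderScale
//       : settings.renderScale;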

For the implementation of RenderScale in URP, please refer to this article: Render Scale

Unity has also mentioned that "reducing resolution does not include the UI"; see the relevant documentation.

Zhihu user @剂牛的星星 also covered "separating scene and UI resolution" in an article worth referring to.

Thanks to "Coder who will lose the pot"@UWA Q&A community for providing answers

The cover image comes from the Internet


That's all for today's sharing. Of course, life is finite while knowledge is infinite. In a long development cycle, the problems you see here may be just the tip of the iceberg. We have prepared more technical topics on the UWA Q&A website, waiting for you to explore and share together. Everyone who loves progress is welcome to join; perhaps your method can solve someone else's urgent problem, and the "stones" from other hills may also polish your own "jade".

Official website: www.uwa4d.com
Official Q&A community: answer.uwa4d.com

Source: blog.csdn.net/UWA4D/article/details/129398069