Unity Optimization Notes: Miscellaneous (2)

1. Unity VSync

Reference article: https://blog.csdn.net/liuyizhou95/article/details/81976186 (recorded here for reference only; see the original author's post for the full reasoning)

There are two kinds of VSync signals in the Android system: the hardware VSync generated by the screen, and the software VSync signal synthesized by SurfaceFlinger, which is delivered to Choreographer via Binder.

With VSync turned off, a single-buffer workflow is used, as shown in the figure:


As the figure above shows, the CPU/GPU produce an image in the buffer, and the screen fetches the image from the buffer, refreshes, and displays it. This is a typical producer-consumer model.
The ideal case is that the frame rate equals the refresh rate: every time a frame is drawn, the screen displays that frame. In reality there is no fixed relationship between the two, and without any lock to synchronize them, problems easily occur. For example, when the frame rate is higher than the refresh rate, the GPU may already be generating frame n while the screen has not yet finished refreshing frame n-1, overwriting frame n-1's data from top to bottom. When the screen then refreshes, the upper half of the buffer holds frame n's data and the lower half holds frame n-1's data, so the displayed image shows an obvious mismatch between its upper and lower halves. We call this "tearing", as shown below:

Turn on VSync: use double buffering

Note that the "double buffering" here is not the "level-2 cache" from computer organization; they are two different things, and the same goes for triple buffering.
To solve the "tearing" problem of the single buffer, double buffering and VSync were introduced. The double-buffer model is shown below:

The two buffers are the Back Buffer and the Frame Buffer. The GPU writes data into the Back Buffer, and the screen reads data from the Frame Buffer. The VSync signal schedules the "copy" from the Back Buffer to the Frame Buffer, which can be treated as instantaneous. In practice no data is actually copied: double buffering is implemented by swapping the roles of the Back Buffer and the Frame Buffer, or more precisely, swapping their memory addresses (think of the classic written-test question: "Given two integers, how do you swap their values in the best way?" — it can be done with a few XOR operations), so the operation can be considered to complete instantly.
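A minimal sketch of this idea in plain C#: the names below (frontBuffer, backBuffer, OnVSync, XorSwap) are illustrative only and not part of Unity or any real graphics API.

```csharp
// Conceptual sketch only: double buffering as a reference swap.
class DoubleBufferSketch
{
    int[] frontBuffer = new int[1920 * 1080]; // what the screen scans out (Frame Buffer)
    int[] backBuffer  = new int[1920 * 1080]; // what the GPU draws into (Back Buffer)

    // On VSync the "copy" is really just an exchange of references (addresses),
    // which is why it can be treated as instantaneous.
    void OnVSync()
    {
        (frontBuffer, backBuffer) = (backBuffer, frontBuffer);
    }

    // The classic interview trick mentioned above: swap two ints with XOR,
    // no temporary variable required.
    static void XorSwap(ref int a, ref int b)
    {
        a ^= b;
        b ^= a;
        a ^= b;
    }
}
```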

Under the double-buffering model, the workflow is like this: 
At some point in time a screen refresh cycle completes and a short blanking interval begins. At that moment the VSync signal is generated: the copy operation is performed first, and then the CPU/GPU is notified to draw the next frame. Once the copy is done, the screen starts its next refresh cycle, displaying the data that was just copied into the Frame Buffer.
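Expressed as a rough loop (a sketch only; WaitForVSync, SwapBuffers and StartDrawing are placeholder names, not a real API), the ordering looks like this:

```csharp
// Sketch of the double-buffered, VSync-driven workflow described above.
class VSyncLoopSketch
{
    bool running = true;

    void RunDisplayLoop()
    {
        while (running)
        {
            WaitForVSync();  // a refresh cycle ends and the VSync signal fires
            SwapBuffers();   // Back Buffer <-> Frame Buffer (the "copy" step)
            StartDrawing();  // CPU/GPU begin producing the next frame into the new Back Buffer
            // the screen now scans out the freshly swapped Frame Buffer
        }
    }

    void WaitForVSync() { /* block until the display's VSync signal */ }
    void SwapBuffers()  { /* exchange buffer references, as in the sketch above */ }
    void StartDrawing() { /* kick off CPU/GPU work for the next frame */ }
}
```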

In this model the CPU/GPU only starts drawing when the VSync signal arrives. So when the frame rate would exceed the refresh rate, it is forced to stay in sync with the refresh rate, which avoids the "tearing" phenomenon.

Note: if the CPU/GPU is still producing frame data when the VSync signal fires, the copy operation does not happen. When the screen enters the next refresh cycle, what it reads from the Frame Buffer is the "old" data rather than the frame still being generated; in other words, two consecutive refresh cycles display the same frame. This is what we call a dropped frame, skipped frame, or jank.

Summary: the note above (underlined in the original post) explains the author's observation that turning on vertical sync causes slight fluctuations in the frame rate.

Solution: turn off vertical sync and control the frame rate manually instead.
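In Unity this is typically done with QualitySettings.vSyncCount and Application.targetFrameRate. Below is a minimal sketch; the target of 60 FPS is just an example value.

```csharp
using UnityEngine;

// Minimal sketch of the workaround: disable VSync and cap the frame rate manually.
public class FrameRateSetup : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.vSyncCount = 0;    // 0 = do not wait for vertical sync
        Application.targetFrameRate = 60;  // cap the frame rate ourselves (example value)
    }
}
```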

2. Number of Cameras (use as few as possible, or disable them when not in use)

Every Camera goes through Camera.Render; even if nothing is rendered, it still performs culling (view-frustum/viewport culling).

In addition, reducing the number of objects in the scene effectively reduces the cost of scene culling. A sketch of the "disable it when not in use" advice follows below.
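A disabled Camera skips Camera.Render and its culling work entirely. The field name uiCamera and the method SetUiVisible below are illustrative; wire up the reference however fits your project.

```csharp
using UnityEngine;

// Sketch: turn a secondary Camera off while it is not needed so it stops
// paying the per-frame Camera.Render / culling cost.
public class CameraToggle : MonoBehaviour
{
    [SerializeField] private Camera uiCamera;  // assigned in the Inspector (assumed setup)

    public void SetUiVisible(bool visible)
    {
        if (uiCamera != null)
            uiCamera.enabled = visible;  // a disabled Camera renders and culls nothing
    }
}
```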

      


Origin: https://blog.csdn.net/LM514104/article/details/112340695