Off-Screen Rendering & FBO

What is a FrameBuffer?

A color buffer stores color values, a depth buffer stores depth information, and a stencil buffer allows us to discard specific fragments based on certain conditions. The combination of these buffers is called the frame buffer (FrameBuffer), and it lives in memory.
ColorBuffer + DepthBuffer + StencilBuffer = FrameBuffer

What is an FBO?

A framebuffer object (commonly called an FBO) is a collection of attachment points for the color, depth, and stencil buffers. The state that describes those attachments, such as the size and format of the color, depth, and stencil buffers, is also tracked by the FBO. What is the FBO used for? Off-screen rendering and deferred rendering are both built on FBOs.
Depth attachment point + Color attachment point + Stencil attachment point = FBO

Why have both a FrameBuffer and an FBO?

  1. A FrameBuffer (frame buffer) is just a region of memory. By itself it does not identify which piece of memory holds the color, which holds the depth, and which holds the stencil data.
  2. The attachment points provided by the FBO map the ColorBuffer, DepthBuffer, and StencilBuffer onto the FrameBuffer's memory (that is, they tell the FrameBuffer where each of these buffers lives).
  3. One FBO corresponds to one piece of memory (one FrameBuffer).

The FBO is like a warehouse manager in charge of the FrameBuffer warehouse (the memory). Goods named ColorBuffer, DepthBuffer, and StencilBuffer arrive one by one; the FBO provides each of them with a corresponding unloading point (attachment point), and once the goods have been received and registered, the FBO is able to manage them.
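
As a concrete illustration, here is a minimal sketch of that wiring, assuming an OpenGL ES 3.0 context on iOS; the 512x512 size and all variable names are placeholders, not taken from the original article:

```swift
import OpenGLES

// Minimal sketch: create an FBO and hook each buffer onto its attachment point.
var fbo: GLuint = 0
glGenFramebuffers(1, &fbo)
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)

// Color buffer -> color attachment point.
var colorRB: GLuint = 0
glGenRenderbuffers(1, &colorRB)
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), colorRB)
glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_RGBA8), 512, 512)
glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                          GLenum(GL_RENDERBUFFER), colorRB)

// Depth + stencil buffer -> depth and stencil attachment points
// (one packed renderbuffer serves both).
var depthStencilRB: GLuint = 0
glGenRenderbuffers(1, &depthStencilRB)
glBindRenderbuffer(GLenum(GL_RENDERBUFFER), depthStencilRB)
glRenderbufferStorage(GLenum(GL_RENDERBUFFER), GLenum(GL_DEPTH24_STENCIL8), 512, 512)
glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_DEPTH_ATTACHMENT),
                          GLenum(GL_RENDERBUFFER), depthStencilRB)
glFramebufferRenderbuffer(GLenum(GL_FRAMEBUFFER), GLenum(GL_STENCIL_ATTACHMENT),
                          GLenum(GL_RENDERBUFFER), depthStencilRB)

// The FBO is only usable once all attachments are consistent.
assert(glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) == GLenum(GL_FRAMEBUFFER_COMPLETE))
```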

In the OpenGL rendering pipeline, geometric data and textures go through multiple transformations and tests and are finally displayed on the screen as two-dimensional pixels. The final rendering destination of the OpenGL pipeline is called the framebuffer. A framebuffer is a collection of two-dimensional arrays and storage areas used by OpenGL: the color buffer, depth buffer, stencil buffer, and accumulation buffer.

There are two types of framebuffer-attachable images: texture images and renderbuffer images. If a texture object's image data is attached to the framebuffer, OpenGL performs a "render to texture" operation. If a renderbuffer's image data is attached to the framebuffer, OpenGL performs off-screen rendering.
[Figure: a framebuffer object with texture objects and renderbuffer objects connected through its attachment points]

The figure above shows the relationship between framebuffer objects, texture objects, and renderbuffer objects. Multiple texture objects or renderbuffer objects can be attached to a framebuffer object through its attachment points.
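
Continuing the sketch above, the "render to texture" case attaches a texture object instead of a renderbuffer to the color attachment point (again assuming an ES 3.0 context and the `fbo` created earlier; sizes are placeholders):

```swift
import OpenGLES

// Attach a texture as the color attachment, so whatever is drawn into the
// FBO can later be sampled like any other texture.
var colorTex: GLuint = 0
glGenTextures(1, &colorTex)
glBindTexture(GLenum(GL_TEXTURE_2D), colorTex)
glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, 512, 512, 0,
             GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0),
                       GLenum(GL_TEXTURE_2D), colorTex, 0)
// Draw calls issued now land in colorTex; bind it in a later pass to use
// the off-screen result on screen.
```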

GPU rendering mechanism:

The CPU calculates the display content and submits it to the GPU. After the GPU finishes rendering, it puts the result into the frame buffer. The video controller then reads the frame buffer's data line by line according to the VSync signal and, after a possible digital-to-analog conversion, passes it to the display.

There are two ways the GPU can render the screen:
1) On-Screen Rendering
Current-screen rendering: the GPU performs its rendering operations in the screen buffer that is currently used for display.
2) Off-Screen Rendering

Off-screen rendering: the GPU opens a new buffer outside the current screen buffer and performs its rendering operations there.

A special case of off-screen rendering: if any rendering that does not happen in the GPU's current screen buffer counts as off-screen rendering, then there is another special kind of "off-screen rendering": CPU rendering. If we override the drawRect method and use any Core Graphics API to draw, CPU rendering is involved. The whole rendering process is completed synchronously by the CPU inside the app, and the resulting bitmap is finally handed to the GPU for display.
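
A minimal sketch of that CPU path (the CircleView class is hypothetical; any Core Graphics drawing inside an overridden draw(_:)/drawRect: behaves the same way):

```swift
import UIKit

// Overriding draw(_:) gives the view a backing bitmap; the Core Graphics
// calls below run on the CPU, and the finished bitmap is then handed to
// the GPU for compositing.
class CircleView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setFillColor(UIColor.systemBlue.cgColor)
        ctx.fillEllipse(in: rect.insetBy(dx: 4, dy: 4))
    }
}
```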

How to trigger off-screen rendering
When the following properties are set, off-screen rendering will be triggered (a short example follows the list):

  • shouldRasterize
  • masks
  • shadows
  • edge antialiasing
  • group opacity
  • Rounded corners on complex shapes, etc.
  • gradient
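
A hedged Swift illustration of two of the triggers listed above (the "avatar" asset name is a placeholder; exact behaviour differs between iOS versions, so it is best verified with the "Color Offscreen-Rendered Yellow" debug option):

```swift
import UIKit

let avatar = UIImageView(image: UIImage(named: "avatar")) // placeholder asset
avatar.frame = CGRect(x: 0, y: 0, width: 80, height: 80)

// Rounded corners combined with masksToBounds on a layer that has contents
// force the mask to be applied in an off-screen buffer.
avatar.layer.cornerRadius = 40
avatar.layer.masksToBounds = true

// A shadow without an explicit shadowPath also requires the layer's shape
// to be rendered off-screen first.
avatar.layer.shadowColor = UIColor.black.cgColor
avatar.layer.shadowOpacity = 0.4
avatar.layer.shadowOffset = CGSize(width: 0, height: 2)
// Supplying a shadowPath up front is the usual way to avoid that cost:
// avatar.layer.shadowPath = UIBezierPath(ovalIn: avatar.bounds).cgPath
```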

Among these, shouldRasterize (rasterization) is a special case. Rasterization means converting a graphic into an image made up of a grid of pixels; after rasterization, each element corresponds to a pixel in the frame buffer.
Setting shouldRasterize = YES caches the rasterized content, whereas the other properties above simply trigger off-screen rendering. If the corresponding layer and its sublayers have not changed, the cache can be reused directly in the next frame. shouldRasterize = YES implicitly creates a bitmap; effects such as shadows and masks are rendered into that bitmap and cached as well, which reduces how often the layer has to be re-rendered (the cache is a bitmap, not vector graphics).
In effect, rasterization trades repeated rendering work for a bitmap cache that is generated once and then read and reused directly.
When you use rasterization, you can turn on "Color Hits Green and Misses Red" to check whether rasterization is a good choice for the scene. Green indicates the cache is being reused; red indicates the cache is being recreated.
If a rasterized layer turns red too often, rasterization is probably not helping: the bitmap cache is being deleted from memory and recreated too frequently. In that case, try rasterizing a smaller, deeper part of the layer tree to reduce rendering time.
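
A minimal sketch of turning rasterization on for a layer whose content rarely changes (the badge view is purely illustrative):

```swift
import UIKit

// Cache the rasterized bitmap of a layer whose contents rarely change,
// so the shadow work is paid once and the result reused on later frames.
let badge = UIView(frame: CGRect(x: 0, y: 0, width: 60, height: 60))
badge.layer.shadowColor = UIColor.black.cgColor
badge.layer.shadowOpacity = 0.3

badge.layer.shouldRasterize = true
// Always set rasterizationScale; otherwise the cached bitmap is created at
// a scale of 1.0 and looks blurry on Retina screens.
badge.layer.rasterizationScale = UIScreen.main.scale
```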

Multiple Render Targets (MRT)

MRT (multiple render targets: several color buffers alongside the depth and stencil buffers) is a new feature of OpenGL ES 3.0 that allows an application to render to several buffers in a single pass. The fragment shader can write multiple outputs, which can be used to store RGBA color, normals, depth information, or texture coordinates, with each output connected to its own color attachment. MRT is widely used in graphics and image algorithms, mainly to capture an algorithm's intermediate results, base maps, or masks, and it is also the foundation of several advanced rendering techniques such as deferred shading and fast ambient-occlusion estimation.
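
A hedged sketch of what MRT looks like in practice, assuming an ES 3.0 context and an `fbo` that already has textures bound to color attachments 0 to 2 (for example as a deferred-shading G-buffer); the output names are illustrative:

```swift
import OpenGLES

// Tell OpenGL which color attachments the fragment shader writes to.
glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo)
let drawBuffers: [GLenum] = [GLenum(GL_COLOR_ATTACHMENT0),
                             GLenum(GL_COLOR_ATTACHMENT1),
                             GLenum(GL_COLOR_ATTACHMENT2)]
glDrawBuffers(GLsizei(drawBuffers.count), drawBuffers)

// Matching ES 3.0 fragment shader: one `out` variable per attachment.
let fragmentSource = """
#version 300 es
precision mediump float;
layout(location = 0) out vec4 outAlbedo;
layout(location = 1) out vec4 outNormal;
layout(location = 2) out vec4 outParams;
void main() {
    outAlbedo = vec4(1.0, 0.0, 0.0, 1.0);   // base color
    outNormal = vec4(0.5, 0.5, 1.0, 1.0);   // encoded normal
    outParams = vec4(0.0);                  // e.g. depth or material data
}
"""
```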

Origin blog.csdn.net/chenweiyu11962/article/details/125512979