Android Qcom Display Learning (12)

Links to the general catalog of this series and the introductions to each part: Android Qcom Display Learning (Zero)
This chapter focuses on dumping GraphicBuffer data on the Qualcomm platform, covering buffers used for GPU rendering, GPU (client) composition, and HWC (device) composition.

It started when I saw a video on Bilibili that could dump the data of each Layer to illustrate display-system principles and graphics-system debugging. The Android framework used to ship a Layer::dump method, but it no longer exists on Android S. Digging through Google's history, I found that Layer.cpp has changed a lot across Android versions:
https://android.googlesource.com/platform/frameworks/native/+/e64a79c/services/surfaceflinger/Layer.cpp
https://android.googlesource.com/platform/frameworks/native/+/e64a79c/services/surfaceflinger/SurfaceFlinger.cpp

So I wrote a dump method myself. At first, the data obtained through getBuffer() in Layer.cpp was always empty, so I tried to obtain the GraphicBuffer through layerSettings.source.buffer.buffer instead, and finally hooked into SkiaGLRenderEngine::drawLayers in frameworks/native/libs/renderengine/skia/SkiaGLRenderEngine.cpp:

static void dumpBuffer(void* addr, uint32_t w, uint32_t h, uint32_t s, PixelFormat f) {
    if (!addr) {
        ALOGE("Addr is NULL");
        return;
    }

    static int count = 0;
    char filename[100];
    memset(filename, 0, sizeof(filename));
    snprintf(filename, sizeof(filename),
             "/data/dump/layer_gpu_%d_frame_%d_%d_%d.rgb", count, w, h, f);
    ALOGD("dump GraphicBuffer to RGB file:%s", filename);

    count++;
    int fd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0664);
    if (fd < 0) return;

    void* base = addr;
    //uint32_t c= dataSpaceToInt(d);
    //write(fd, &w, 4);
    //write(fd, &h, 4);
    //write(fd, &f, 4);
    //write(fd, &c, 4);
    // The stride (s) can be larger than the visible width (w), so write
    // only w*Bpp bytes per row but advance the pointer by s*Bpp each line.
    size_t Bpp = bytesPerPixel(f);
    for (size_t y = 0; y < h; y++) {
        write(fd, base, w * Bpp);
        base = (void*)((char*)base + s * Bpp);
    }
    close(fd);
}

static void dumpLayers(const sp<GraphicBuffer>& target) {
    void* addr = NULL;

    uint32_t w, s, h, f;
    w = target->getWidth();
    h = target->getHeight();
    s = target->getStride();
    f = target->getPixelFormat();

    int result = target->lock(GraphicBuffer::USAGE_SW_READ_OFTEN, &addr);
    if (result < 0) {
        ALOGE("lock buffer failed: %d", result);
        return;            // do not touch addr if the lock failed
    }
    dumpBuffer(addr, w, h, s, f);
    target->unlock();      // always pair lock() with unlock()
}

Opening the dumped GraphicBuffer in 7yuv and YUView showed only scattered "star points", not a normal picture. Dumping the GraphicBuffer of the GPU-composited layers in RenderSurface::queueBuffer gave the same star-point result. That was expected: if the per-layer data is already wrong, the composited output must be wrong as well.
I then read this expert's blog post: Android Image Display System - Introduction to the Method of Exporting Layer Data (dumping GraphicBuffer raw data).

static void dumpGraphicRawData2file(const native_handle_t* bufferHandle,
                                    uint32_t width, uint32_t height,
                                    uint32_t stride, int32_t format)
{
    ALOGE("%s [%d]", __FUNCTION__, __LINE__);

    static int sDumpCount = 0;

    if (bufferHandle != nullptr) {
        // data[0] of a gralloc handle is the shared-memory fd of the buffer.
        int shareFd = bufferHandle->data[0];
        uint32_t buffer_size = stride * height * bytesPerPixel(format);
        unsigned char* srcAddr = (unsigned char*)mmap(NULL, buffer_size,
                                                      PROT_READ, MAP_SHARED, shareFd, 0);
        if (srcAddr == MAP_FAILED) {   // mmap returns MAP_FAILED, not NULL, on error
            ALOGE("mmap failed: %s", strerror(errno));
            return;
        }

        char filename[100];
        memset(filename, 0, sizeof(filename));
        snprintf(filename, sizeof(filename),
                 "/data/dump/layers_gpu_composer_%d_frame_%d_%d_%d.raw",
                 sDumpCount++, width, height, format);

        int dumpFd = open(filename, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (dumpFd >= 0) {
            ALOGE("dump to %s", filename);
            write(dumpFd, srcAddr, buffer_size);
            close(dumpFd);
        }
        munmap((void*)srcAddr, buffer_size);
    }
}

After a lot of searching I understood that there are two composition paths, GPU (client) composition and HWC (device) composition. You can check which path each layer takes with adb shell dumpsys SurfaceFlinger > SurfaceFlinger.txt:

6 Layers
  - Output Layer 0xb400007758f7fa30(Wallpaper BBQ wrapper#0)
        hwc: layer=0x088 composition=DEVICE (2) 
  - Output Layer 0xb400007758f97b00(com.android.launcher3/com.android.searchlauncher.SearchLauncher#0)
        hwc: layer=0x089 composition=DEVICE (2) 
  - Output Layer 0xb400007758f8a620(StatusBar#0)
        hwc: layer=0x087 composition=CLIENT (1) 
  - Output Layer 0xb400007758f87500(NavigationBar0#0)
        hwc: layer=0x08a composition=CLIENT (1) 
  - Output Layer 0xb400007758f9a3f0(ScreenDecorOverlay#0)
        hwc: layer=0x085 composition=CLIENT (1) 
  - Output Layer 0xb400007758f72550(ScreenDecorOverlayBottom#0)
        hwc: layer=0x084 composition=CLIENT (1) 

I then added a dump of the HWC-bound GraphicBuffer in BufferStateLayer::setBuffer, and the result was still abnormal. Later I found in the Qualcomm documentation that Qualcomm already provides its own method for dumping layers, so I used it to check whether my dumps were at fault.

adb root
adb shell setprop vendor.gralloc.disable_ubwc 1
adb shell setprop vendor.gralloc.enable_fb_ubwc 0
adb shell stop vendor.gralloc-2-0
adb shell start vendor.gralloc-2-0
adb shell stop vendor.qti.hardware.display.allocator
adb shell start vendor.qti.hardware.display.allocator
stop;start   # restart the Android framework so the change takes effect
adb shell dumpsys SurfaceFlinger 
adb shell setenforce 0

GPU dump:
adb root
adb shell "vndservice call display.qservice 21 i32 2 i32 1 i32 3"
adb pull /data/vendor/display/frame_dump_disp_id_00_builtin/ ./

This time the images were normal. We then dumped our own GraphicBuffer data again, and it too came out normal. On the right are the layers composited by the GPU (StatusBar, NavigationBar0, etc.), and on the left is the layer image composited outside the GPU (Launcher). So the root cause was UBWC: UBWC is said to be a proprietary Qualcomm buffer format, which is why the raw dumps could not be viewed as normal images until it was disabled.
Finally, a summary of the frame-presentation flow, noting where the dumps above were added. The picture comes from Android 12(S) image display system - GraphicBuffer synchronization mechanism - Fence.

VSYNC -> onMessageRefresh -> CompositionEngine::present -> Output::present

(1) writeCompositionState -> writeStateToHWC -> writeOutputIndependentPerFrameStateToHWC -> writeBufferStateToHWC ->
         Layer::setBuffer -> Composer::setLayerBuffer
    Layers whose composition type is Composition::CLIENT are ignored here; DEVICE-composited layers are sent straight to the HWC.

(2) setColorTransform
    Sets a color matrix that applies to all layers; Night Light (the built-in Android eye-protection mode) uses it.

(3) prepareFrame -> chooseCompositionStrategy
    Sends each layer's composition type (Client or Device) to FramebufferSurface; the default is Client (GPU) composition.

(4) finishFrame -> composeSurfaces -> SkiaGLRenderEngine::drawLayers -> RenderSurface::queueBuffer ->
        advanceFrame -> RenderSurface::nextBuffer -> setClientTarget
    drawLayers checks hasClientComposition, which indicates GPU composition. Once the readyFence signals that the buffer is ready, queueBuffer puts it into the BufferQueue; FramebufferSurface::nextBuffer takes it out of the BufferQueue and sends it to the HWC via setClientTarget.

Android 12(S) image display system - GraphicBuffer synchronization mechanism - Fence
Android images are first prepared by the CPU (Bitmaps and Drawables are packed together into a unified texture) and then handed to the GPU. The CPU and GPU are asynchronous: when the CPU issues OpenGL commands, the actual drawing is performed by the GPU, and the CPU does not know when drawing finishes unless it waits for those commands to complete. This is why a Fence synchronization mechanism is needed.
    The purpose of a Fence is to control the state of a GraphicBuffer and decide whether concurrent access is allowed. The BufferState associated with a GraphicBuffer describes its ownership to some extent, but only at the CPU level, while the real user of the GraphicBuffer is usually the GPU. For example, when the producer puts a GraphicBuffer into the BufferQueue, ownership is transferred only at the CPU level; the GPU may still be using it, and while it is, the consumer cannot use it for composition. At that moment the relationship between the GraphicBuffer and the producer/consumer is ambiguous: the consumer owns the GraphicBuffer but has no right to use it, and must wait for a signal saying the GPU is done before it can actually use it.
    The point of VSync is to coordinate the timing between hardware and software. The screen refresh rate is fixed, so the next frame must be ready within each refresh interval. If it is not ready in time, the picture stutters and one frame is displayed multiple times; if buffers are swapped without synchronization, the screen tears and shows parts of multiple frames at once. The VSync signal paces the front/back buffer swap (the PageFlip mechanism in DRM).

Origin blog.csdn.net/qq_40405527/article/details/128697728