Android video output

  There are many ways to output video frames on Android, and each has its own characteristics. For example, sending the video data back to the Java layer and drawing it with lockCanvas is slow.

    That approach is not worth recommending at all: passing the data from the native layer up through JNI to Java takes a lot of time.

    When developing an ffmpeg-based player you can use ffmpeg's various software decoders, or you can use Android's OMXCodec decoder, which is a wrapper around OMX. The ffmpeg decoders mainly output YUV420P frames, while the output format of the OMXCodec decoder varies depending on the platform.

    Video decoded by the OMXCodec decoder can be rendered by the decoder itself, as long as you pass it an ANativeWindow when the decoder is created:

    sp<MediaSource> OMXCodec::Create( 
            const sp<IOMX> &omx, 
            const sp<MetaData> &meta, bool createEncoder, 
            const sp<MediaSource> &source, 
            const char *matchComponentName, 
            uint32_t flags, 
            const sp<ANativeWindow> &nativeWindow); 
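
    For context, a minimal sketch of creating the decoder with a native window attached might look like the following; the videoTrack/surface variables are assumptions and error handling is omitted:

    // Assumed inputs: "videoTrack" is a MediaSource obtained from a MediaExtractor and
    // "surface" is the Surface received from the Java layer (a Surface is an ANativeWindow).
    sp<ANativeWindow> nativeWindow = surface;

    OMXClient client;
    client.connect();

    // Passing the native window lets the decoder render into it directly.
    sp<MediaSource> decoder = OMXCodec::Create(
            client.interface(),          // IOMX
            videoTrack->getFormat(),     // input format MetaData
            false,                       // createEncoder = false -> decoder
            videoTrack,
            NULL,                        // no specific component name
            0,                           // flags
            nativeWindow);
    decoder->start();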

    The decoder's read() returns a MediaBuffer that is actually backed by a GraphicBuffer. You don't need to care about its pixel format: just queueBuffer it, mark it as rendered, and release it:



    g_ANativeWindow->queueBuffer(g_ANativeWindow.get(), mbuf->graphicBuffer()->getNativeBuffer()); 
    ... 
    mbuf->meta_data()->setInt32(kKeyRendered, 1); 
    ... 
    mbuf->release(); // tell OMXCodec the buffer has been rendered and can be reused
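
    Putting these calls together, a rough read/render loop might look like the sketch below (no A/V sync, no format-change handling; "decoder" and g_ANativeWindow are the objects assumed above):

    MediaBuffer *mbuf = NULL;
    for (;;) {
        status_t err = decoder->read(&mbuf);
        if (err != OK) break;                      // end of stream or error

        if (mbuf->range_length() > 0) {
            // The buffer is backed by a GraphicBuffer: hand it to the window.
            g_ANativeWindow->queueBuffer(g_ANativeWindow.get(),
                    mbuf->graphicBuffer()->getNativeBuffer());
            // Mark it as rendered so OMXCodec knows the display consumed it.
            mbuf->meta_data()->setInt32(kKeyRendered, 1);
        }
        mbuf->release();                           // return the buffer to the decoder
        mbuf = NULL;
    }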


    It is worth noting that when creating the OMXCodec you can also pass in kKeyColorFormat to request a desired output format, but unfortunately the hardware does not always support what you want. Some phones, for example, do not support HAL_PIXEL_FORMAT_YV12 and use an internal format instead. Rather than wrestling with that, it is better to let the decoder render the frames itself.
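
    A small check like the following (a sketch, not from the original post; names assume the "decoder" created above) shows how to inspect what the decoder actually produces before deciding how to render:

    // Query the decoder's output format after start().
    sp<MetaData> outFormat = decoder->getFormat();

    int32_t colorFormat = 0, width = 0, height = 0;
    outFormat->findInt32(kKeyColorFormat, &colorFormat);
    outFormat->findInt32(kKeyWidth, &width);
    outFormat->findInt32(kKeyHeight, &height);

    if (colorFormat == HAL_PIXEL_FORMAT_YV12) {
        // A format we understand: we could copy the planes out and render them ourselves.
    } else {
        // Vendor-specific format: safest to let OMXCodec render to the ANativeWindow directly.
    }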

    When using an ffmpeg decoder, as far as I know there are currently two ways to output YUV420P: one is to lock the ANativeWindow and write the YUV data directly, the other is to convert YUV to RGB with a formula in an OpenGL ES fragment shader and draw the result.

    If you look at AwesomePlayer.cpp you will find a SoftwareRenderer class that can output HAL_PIXEL_FORMAT_YV12. The YUV420P->YV12 conversion is actually very simple: only the storage order of the U and V planes differs, so they can be swapped while copying. This is the lock-ANativeWindow method. There is also an AwesomeNativeWindowRenderer in the same file, which is the approach mentioned above of letting OMXCodec render the video itself.

    I found in practice that the SoftwareRenderer on some phones is not well implemented, especially on third-party ROMs, where the picture comes out corrupted. In that case you need to extract the code from SoftwareRenderer.cpp yourself, which boils down to calling a few ANativeWindow functions:

    native_window_api_connect(gDirectRendererContext.anativeWindow.get() ,NATIVE_WINDOW_API_CPU); 
    native_window_set_usage(gDirectRendererContext.anativeWindow.get(),GraphicBuffer::USAGE_SW_WRITE_OFTEN|GraphicBuffer::USAGE_HW_TEXTURE); 
    native_window_set_buffers_geometry(gDirectRendererContext.anativeWindow.get(),w,h,android_fmt); 
    native_window_set_crop(gDirectRendererContext.anativeWindow.get(),&android_crop); 
    native_window_set_scaling_mode(gDirectRendererContext.anativeWindow.get(),NATIVE_WINDOW_SCALING_MODE_SCALE_TO_WINDOW); 
    ANativeWindow_lock(gDirectRendererContext.anativeWindow.get(), &buffer, NULL); 
    //write data 
    ANativeWindow_unlockAndPost(gDirectRendererContext.anativeWindow.get()); 

    The code above omits a lot; it is just the basic flow. There is also a gl2_yuvtex.cpp in the Android source code that demonstrates another way of outputting YUV: initialize EGL and OpenGL ES, create a GraphicBuffer and write the data into it, create an EGLImageKHR from the GraphicBuffer, and finally display it through the OpenGL ES GL_OES_EGL_image_external extension. I count this method as a variant of the lock-ANativeWindow method, because it is the underlying implementation of the approach above.
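
    A condensed sketch of that gl2_yuvtex.cpp path, assuming EGL is already initialized and texId/width/height are set up elsewhere, might look like this:

    // Create a CPU-writable, GPU-readable YV12 buffer.
    sp<GraphicBuffer> gbuf = new GraphicBuffer(
            width, height, HAL_PIXEL_FORMAT_YV12,
            GraphicBuffer::USAGE_SW_WRITE_OFTEN | GraphicBuffer::USAGE_HW_TEXTURE);

    void *dst = NULL;
    gbuf->lock(GraphicBuffer::USAGE_SW_WRITE_OFTEN, &dst);
    // ... copy the Y, V, U planes into dst here ...
    gbuf->unlock();

    // Wrap the GraphicBuffer in an EGLImage and bind it to an external texture.
    EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    EGLImageKHR img = eglCreateImageKHR(
            eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID,
            (EGLClientBuffer)gbuf->getNativeBuffer(), attrs);

    glBindTexture(GL_TEXTURE_EXTERNAL_OES, texId);
    glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)img);
    // Draw a quad sampling the external texture, then eglSwapBuffers(...).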
    The advantage of the lock-ANativeWindow method is that the low level has a platform-specific implementation, so it is very fast. The disadvantage is that after locking the buffer we get the stride (bytes per row) of the Y plane, but there is no way to query the stride of the U/V planes. It should be y_stride/2 aligned up to 16 bytes, but some manufacturers don't follow convention and the value can be different, and the result is a corrupted picture or a memory abort, so you have to adapt per device. Annoying!
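
    The following is a rough sketch of the "//write data" step between ANativeWindow_lock and ANativeWindow_unlockAndPost shown earlier, for YV12 output; the UV stride formula is exactly the assumption discussed above (half of the Y stride, 16-aligned) and may need per-device adjustment. src_* come from the ffmpeg AVFrame (YUV420P), w/h are the frame dimensions:

    uint8_t *dst_y = (uint8_t *)buffer.bits;
    int y_stride  = buffer.stride;                    // row stride of the Y plane (pixels == bytes for 8-bit Y)
    int uv_stride = ((y_stride / 2) + 15) & ~15;      // assumption: y_stride/2, aligned to 16

    uint8_t *dst_v = dst_y + y_stride * buffer.height;        // YV12: the V plane comes first
    uint8_t *dst_u = dst_v + uv_stride * (buffer.height / 2);

    for (int i = 0; i < h; i++)                       // copy Y
        memcpy(dst_y + i * y_stride, src_y + i * src_stride_y, w);
    for (int i = 0; i < h / 2; i++) {                 // copy chroma, swapping U and V
        memcpy(dst_v + i * uv_stride, src_v + i * src_stride_v, w / 2);
        memcpy(dst_u + i * uv_stride, src_u + i * src_stride_u, w / 2);
    }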

    Now for the method of converting YUV to RGB with OpenGL ES. This approach has been around for a long time. Originally the YUV->RGB conversion was done in software, which was far too slow, so people moved the conversion into the programmable fragment shader, where the hardware handles it very quickly and efficiency improves a lot. The steps are: initialize EGL (refer to gl2_yuvtex.cpp) -> create, compile and link the shader program -> create and upload the textures -> draw the quad. Here are the vertices, texture coordinates and shader code:

    static float squareV[]={ 
        -1.0f,-1.0f, 
        1.0f,-1.0f, 
        1.0f,1.0f, 
        -1.0f,1.0f 
    }; 
    static float coordV[]={ 
        0.0f,1.0f, 
        1.0f,1.0f, 
        1.0f,0.0f, 
        0.0f,0.0f 
    }; 
    static char fragmentShaderCode_yuv420[]= 
                "precision mediump float;" 
                "uniform sampler2D tex_y;" 
                "uniform sampler2D tex_u;" 
                "uniform sampler2D tex_v;" 
                "varying vec2 tc;" 
                "void main(){" 
                " mediump vec3 yuv;" 
                " mediump vec3 rgb;" 
                " yuv.x=texture2D(tex_y,tc).r-0.0625;" 
                " yuv.y=texture2D(tex_u,tc).r-0.5;" 
                " yuv.z=texture2D(tex_v,tc).r-0.5;" 
                " rgb.r=1.164*yuv.x + 1.596*yuv.z;" 
                " rgb.g=1.164*yuv.x - 0.813*yuv.z - 0.392*yuv.y;" 
                " rgb.b=1.164*yuv.x + 2.017*yuv.y;"   
                " gl_FragColor=vec4(rgb,1 );" 
                "}"; 
    static char vertexShaderCode[]= 
                "attribute vec4 vertexPosition;" 
                "attribute vec2 vertexCoordinate;" 
                "varying vec2 tc;" 
                "void main(){" 
                " gl_Position=vertexPosition;" 
                " tc=vertexCoordinate;" 
                "}"; 

    It should be noted that an EGLContext is bound to a thread and is the OpenGL ES state machine. Also be aware that on some old devices, updating textures with glTexImage2D or glTexSubImage2D is very slow: a 720x576 frame can take up to 80 ms. In that case don't use this method to render, unless you are willing to give up sharpness and upload a smaller frame so the copy finishes within a few milliseconds.

    There is another way to render with OpenGL ES, namely the OES extension; the gl2_yuvtex.cpp example in the Android source code can be used as a reference. Uploading textures with glEGLImageTargetTexture2DOES is generally very fast, but compatibility is poor, for the same reasons as rendering directly with the ANativeWindow.

From: http://blog.csdn.net/alien75/article/details/41078963
