FFmpeg pixel format conversion (YUV to RGB) and display with SurfaceView

Disclaimer: this is an original article by the blogger and may not be reproduced without the blogger's permission. https://blog.csdn.net/myvest/article/details/90717333

1. FFmpeg pixel format conversion

FFmpeg pixel format conversion is generally done with libswscale.

Interface Description

1. Obtaining the context SwsContext
We generally use one of the following two functions. sws_getCachedContext differs slightly from sws_getContext: if the input and output width, height and format are unchanged, the previously created context is returned instead of a new one being allocated.
Parameters:
The first three parameters are the source width, height and pixel format (e.g. RGBA8888, YUV420, etc.); the next three are the destination width, height and format; the flags parameter selects the conversion (scaling) algorithm; the last three parameters can be passed as NULL.

struct SwsContext *sws_getContext(int srcW, int srcH, enum AVPixelFormat srcFormat,
                                  int dstW, int dstH, enum AVPixelFormat dstFormat,
                                  int flags, SwsFilter *srcFilter,
                                  SwsFilter *dstFilter, const double *param);
struct SwsContext *sws_getCachedContext(struct SwsContext *context,
                                        int srcW, int srcH, enum AVPixelFormat srcFormat,
                                        int dstW, int dstH, enum AVPixelFormat dstFormat,
                                        int flags, SwsFilter *srcFilter,
                                        SwsFilter *dstFilter, const double *param);
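
As a rough illustration (not from the original article), a small helper that keeps reusing one context across frames might look like the sketch below; the helper name getRgbaContext and the fixed 720x576 RGBA output are just placeholders:

extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Reuse one conversion context across frames. sws_getCachedContext returns g_ctx
// unchanged while the frame geometry/format stays the same, and only allocates a
// new context when something changes.
static SwsContext* g_ctx = NULL;

static SwsContext* getRgbaContext(AVFrame* frame) {
    g_ctx = sws_getCachedContext(g_ctx,
                                 frame->width, frame->height, (AVPixelFormat)frame->format,
                                 720, 576, AV_PIX_FMT_RGBA,
                                 SWS_FAST_BILINEAR, NULL, NULL, NULL);
    return g_ctx;
}

With sws_getContext, by contrast, the context would be created once up front and the caller would have to recreate it whenever the frame geometry or format changes.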

2. The conversion function sws_scale
Parameters:
Parameter struct SwsContext *c: the context.
Parameter const uint8_t *const srcSlice[]: the input data, one data pointer per plane/channel. Normally the decoded frame->data is passed here.
Parameter const int srcStride[]: the number of bytes per row for each plane/channel. Normally the decoded frame->linesize is passed here.

stride defines the offset (in bytes) from the start of one row to the start of the next. stride is not necessarily equal to width, because:
1) due to alignment requirements, the data stored in the frame may include padding bytes at the end of each row, so that stride = width + N;
2) in packed color spaces, the data of several channels is interleaved per pixel; for RGB24, for example, each pixel occupies 3 bytes in a row, so reaching the next row means skipping 3 * width bytes.
The dimensions of srcSlice and srcStride depend on the srcFormat value, as summarized below.

csp      planes   width          stride         height
YUV420   3        w, w/2, w/2    s, s/2, s/2    h, h/2, h/2
YUYV     1        w, w/2, w/2    2s, 0, 0       h, h, h
NV12     2        w, w/2, w/2    s, s, 0        h, h/2
RGB24    1        w, w, w        3s, 0, 0       h, 0, 0
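
To see point 1) in practice, the standalone sketch below (not part of the original article) allocates a YUV420P frame with FFmpeg's default alignment and prints its strides; depending on the width and the platform alignment, linesize[0] can come out larger than the width:

extern "C" {
#include <libavutil/frame.h>
}
#include <cstdio>

int main() {
    AVFrame* frame = av_frame_alloc();
    frame->width  = 720;
    frame->height = 576;
    frame->format = AV_PIX_FMT_YUV420P;

    if (av_frame_get_buffer(frame, 0) == 0) {   // align = 0: let FFmpeg pick its default alignment
        // linesize[0] may be >= width because of row padding
        printf("Y stride=%d  U stride=%d  V stride=%d  (width=%d)\n",
               frame->linesize[0], frame->linesize[1], frame->linesize[2], frame->width);
    }
    av_frame_free(&frame);
    return 0;
}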

Parameters int srcSliceY, int srcSliceH: define the region of the input image to process; srcSliceY is the starting row and srcSliceH is the number of rows to process. With srcSliceY = 0 and srcSliceH = height, the whole image is processed in a single call.
This arrangement allows multi-threaded parallel processing: for example, create two threads, have the first thread process rows [0, h/2-1] and the second process rows [h/2, h-1], so the conversion is sped up by running in parallel.
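
As a rough sketch of these two parameters (not from the article), the function below converts one frame in two consecutive slices fed top-to-bottom to the same context; it assumes a YUV420P source, and per the libswscale documentation the source pointers refer to the slice itself while dst always points at the full output image:

extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// ctx, frame, dst_data and dst_linesize are assumed to be set up as elsewhere in this article.
static void convertInTwoSlices(SwsContext* ctx, AVFrame* frame,
                               uint8_t* const dst_data[], const int dst_linesize[])
{
    int half = frame->height / 2;

    // First slice: rows [0, h/2 - 1].
    sws_scale(ctx, (const uint8_t* const*)frame->data, frame->linesize,
              0, half, dst_data, dst_linesize);

    // Second slice: rows [h/2, h - 1]; point each plane at the first row of this slice
    // (for YUV420P the chroma planes have half the number of rows).
    const uint8_t* src2[4] = {
        frame->data[0] + half       * frame->linesize[0],   // Y
        frame->data[1] + (half / 2) * frame->linesize[1],   // U
        frame->data[2] + (half / 2) * frame->linesize[2],   // V
        NULL
    };
    sws_scale(ctx, src2, frame->linesize, half, frame->height - half, dst_data, dst_linesize);
}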

Parameters uint8_t *const dst[], const int dstStride[]: define the output image (one data pointer per plane/channel and the number of bytes per row for each plane/channel).
These two arrays can be obtained with av_image_alloc, based on whatever output pixel format you define (see the sketch after the prototype below).

int sws_scale(struct SwsContext *c, const uint8_t *const srcSlice[],
              const int srcStride[], int srcSliceY, int srcSliceH,
              uint8_t *const dst[], const int dstStride[]);
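
For reference, a sketch (not taken from the article) of allocating the output arrays with av_image_alloc; allocRgbaImage is a hypothetical helper and the 720x576 RGBA geometry simply matches the example further down:

extern "C" {
#include <libavutil/imgutils.h>
#include <libavutil/mem.h>
}

// Allocate a 720x576 RGBA destination image. dst_data and dst_linesize can then be
// passed directly to sws_scale as its last two arguments.
static int allocRgbaImage(uint8_t* dst_data[4], int dst_linesize[4])
{
    int ret = av_image_alloc(dst_data, dst_linesize, 720, 576, AV_PIX_FMT_RGBA, 1);
    // When the image is no longer needed, release it with av_freep(&dst_data[0]);
    // av_image_alloc places all planes in one contiguous buffer.
    return ret;   // negative AVERROR code on failure, total buffer size on success
}

With an alignment of 1, dst_linesize[0] comes out exactly 720 * 4; a larger alignment (e.g. 16) may pad each row.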

3. The release function

void sws_freeContext(struct SwsContext *swsContext);

Example

int FFDecoder::videoConvert(AVFrame* pFrame, uint8_t* out){
    if(pFrame == NULL || out == NULL)
        return 0;

    //AVPicture pict;
    //av_image_alloc(pict.data, pict.linesize, pVCodecCxt->width, pVCodecCxt->height, AV_PIX_FMT_RGBA, 16);
    uint8_t* dst_data[4] = {0};
    char* swsData = new char[720 * 576 * 4];      // temporary RGBA buffer: 720x576, 4 bytes per pixel
    dst_data[0] = (uint8_t*)swsData;

    int dst_linesize[4] = {0};
    dst_linesize[0] = 720 * 4;                    // bytes per row of the RGBA output
    swsContext = sws_getCachedContext(swsContext, pFrame->width, pFrame->height, (AVPixelFormat)pFrame->format,
                                      720, 576, AV_PIX_FMT_RGBA, SWS_FAST_BILINEAR, 0, 0, 0);
    if(swsContext){
        sws_scale(swsContext, (const uint8_t**)pFrame->data, pFrame->linesize, 0, pFrame->height, (uint8_t* const*)dst_data, dst_linesize);
        size_t size = dst_linesize[0] * 576;
        memcpy(out, dst_data[0], size);           // copy the converted RGBA image to the caller's buffer
        delete[] swsData;                         // release the temporary buffer
        //av_free(&pict.data[0]);
        return size;
    }

    delete[] swsData;
    //av_free(&pict.data[0]);
    return 0;
}

2. Display using SurfaceView

The converted video data is displayed with a SurfaceView. The drawing can be handled either in the Java layer or in the C++ layer; here we handle it in the C++ layer, which under the hood uses Android's ANativeWindow.
So we create the SurfaceView in the Java layer, obtain its Surface and pass it down to the JNI layer, then obtain an ANativeWindow from that Surface in the JNI layer and draw through it.

Creating the SurfaceView in the Java layer and passing the Surface down:
To make sure the Surface has already been created, pass it down inside the surfaceChanged or surfaceCreated callback.
Creating the SurfaceView itself needs no further explanation; just add the corresponding control in the layout file.
For example:

    private void initView() {
        mSurfaceView = (SurfaceView) findViewById(R.id.surfaceView);
        mSurfaceHolder = mSurfaceView.getHolder();
        mSurfaceHolder.addCallback(new Callback() {

            @Override
            public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
                if (mFFdec != null) {
                    mFFdec.initSurface(mSurfaceHolder.getSurface());
                }
            }

            @Override
            public void surfaceCreated(SurfaceHolder arg0) {}

            @Override
            public void surfaceDestroyed(SurfaceHolder arg0) {}
        });
    }
    

mFFdec.initSurface(mSurfaceHolder.getSurface()) is a native interface that passes the Surface down to the JNI layer. It is declared and implemented as follows:

public  native void initSurface(Surface surface);

...... only the key code is listed
static ANativeWindow* g_nwin = NULL;
static void jni_initSurface(JNIEnv* env, jobject obj, jobject surface){
	if(surface){
		// Obtain an ANativeWindow from the Surface passed down from the Java layer
		g_nwin = ANativeWindow_fromSurface(env, surface);
		if(g_nwin){
			// Set the window geometry: 720x576, RGBA8888, matching the converted frames
			ANativeWindow_setBuffersGeometry(g_nwin, 720, 576, WINDOW_FORMAT_RGBA_8888);
			LOGE("jni_initSurface g_nwin[%p]\n", g_nwin);
		}
	}
    return;
}

static JNINativeMethod gMethods[] = {
    {"decodeInit", "()V", (void*)jni_decodeInit},
    {"decodeDeInit", "()V", (void*)jni_decodeDeInit},
    {"decodeFrame", "([B)I", (void*)jni_decodeFrame},
    {"openInput", "(Ljava/lang/String;)I", (void*)jni_openInput},
    {"getMediaSampleRate", "()I", (void*)jni_getMediaSampleRate},
    {"getMediaType", "()I", (void*)jni_getMediaType},
    {"initSurface", "(Landroid/view/Surface;)V", (void*)jni_initSurface},
};
... omitted

ANativeWindow usage flow

1. Initialization:
call ANativeWindow_fromSurface to obtain an ANativeWindow from the Surface passed in from the Java layer;
call ANativeWindow_setBuffersGeometry to set the ANativeWindow parameters, i.e. the width / height / pixel format (note that these must match the data we pass in).
2. Lock:
ANativeWindow uses a double-buffering mechanism. First call ANativeWindow_lock to lock the back buffer and obtain the address of the surface buffer.
3. Draw:
fill the buffer with data, then call ANativeWindow_unlockAndPost to unlock the buffer and post it for display.

Example

static int renderVframe(uint8_t* rgb, int size){
    if(g_nwin == NULL)
        return -1;

    LOGE("renderVframe g_nwin[%p]\n", g_nwin);
    ANativeWindow_Buffer outBuffer;

    ANativeWindow_lock(g_nwin, &outBuffer, 0);        // lock the back buffer and get its address
    uint8_t* dst = (uint8_t*)outBuffer.bits;
    int srcStride = 720 * 4;                          // bytes per row of the incoming RGBA image
    int rows = size / srcStride;
    for (int i = 0; i < rows; i++) {
        // copy row by row: the window stride (in pixels) may be larger than the image width
        memcpy(dst + i * outBuffer.stride * 4, rgb + i * srcStride, srcStride);
    }
    ANativeWindow_unlockAndPost(g_nwin);              // unlock and post the buffer for display

    return 0;
}
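
The article stops at drawing; the window reference obtained from ANativeWindow_fromSurface should eventually be released, e.g. when the Java layer reports that the Surface has been destroyed. A minimal sketch, assuming a hypothetical jni_releaseSurface registered alongside the other native methods:

// Hypothetical counterpart to jni_initSurface: drop the native window reference
// when the Java layer reports that the Surface has been destroyed.
static void jni_releaseSurface(JNIEnv* env, jobject obj){
    if(g_nwin){
        ANativeWindow_release(g_nwin);  // balances the reference taken by ANativeWindow_fromSurface
        g_nwin = NULL;
    }
}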
