ijkplayer Analysis (5): The Video Decoding Thread

1. Introduction:
The previous post analyzed audio decoding and output together, and that article turned out long and complicated. Since video rendering and synchronization deserve their own analysis, this post covers only the content relevant to video decoding. Because ijkplayer and FFmpeg share a great deal of code between the audio and video paths, and the previous post already explained it in detail, the analysis below goes straight to the key code.

2. Analysis of MediaCodec decoding path:
Let's first look at the path related to video decoding. ijkplayer has an option called "async-init-decoder" that the upper-layer APK can pass down to the native layer. Its exact purpose is not entirely clear to me, and it is normally left unset, so when ijkplayer reads it, ffp->async_init_decoder is 0.
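For reference, this option is declared in the same table as the other player options. A minimal sketch of the matching entry in ff_ffplay_options.h, assuming the OPTION_* macros used throughout that file (the description string here is my paraphrase):

    { "async-init-decoder",                     "async initialize decoder",
        OPTION_OFFSET(async_init_decoder),      OPTION_INT(0, 0, 1) },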
Let's take a look at where ijkplayer creates vdec:

stream_component_open@ijkmedia\ijkplayer\ff_ffplay.c:

    case AVMEDIA_TYPE_VIDEO:
        is->video_stream = stream_index;
        is->video_st = ic->streams[stream_index];
        /* usually 0 by default, so the else branch below is taken */
        if (ffp->async_init_decoder) {
            ...
        } else {
            /* initialization */
            decoder_init(&is->viddec, avctx, &is->videoq, is->continue_read_thread);
            /* open the vdec */
            ffp->node_vdec = ffpipeline_open_video_decoder(ffp->pipeline, ffp);
            if (!ffp->node_vdec)
                goto fail;
        }
        /* start decoding */
        if ((ret = decoder_start(&is->viddec, video_thread, ffp, "ff_video_dec")) < 0)
            goto out;

        is->queue_attachments_req = 1;

        /* frame-rate related checks here, skipped */
        ...

        break;

If the option is not set from the upper layer, the code takes the else branch. decoder_init runs first; its most important job is to bind the packet queue to the decoder.
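decoder_init is inherited almost verbatim from ffplay. A minimal sketch of what it does, based on the ffplay original that ijkplayer extends (the ijkplayer version adds a few statistics fields):

static void decoder_init(Decoder *d, AVCodecContext *avctx, PacketQueue *queue, SDL_cond *empty_queue_cond)
{
    memset(d, 0, sizeof(Decoder));
    d->avctx = avctx;
    d->queue = queue;                       /* bind the packet queue to the decoder */
    d->empty_queue_cond = empty_queue_cond; /* signaled to wake the read thread when the queue runs dry */
    d->start_pts = AV_NOPTS_VALUE;
}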
Next, look at ffpipeline_open_video_decoder:

IJKFF_Pipenode* ffpipeline_open_video_decoder(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    return pipeline->func_open_video_decoder(pipeline, ffp);
}

The pipeline is created as follows:

ijkmp_android_create@ijkmedia\ijkplayer\android\ijkplayer_android.c:

IjkMediaPlayer *ijkmp_android_create(int(*msg_loop)(void*))
{
    ...
    mp->ffplayer->pipeline = ffpipeline_create_from_android(mp->ffplayer);
    if (!mp->ffplayer->pipeline)
        goto fail;
    ...
}

Find what the function pointer points to. Inside ffpipeline_create_from_android, the assignment is:

    pipeline->func_open_video_decoder   = func_open_video_decoder;

func_open_video_decoder@ijkmedia\ijkplayer\android\pipeline\ffpipeline_android.c:

static IJKFF_Pipenode *func_open_video_decoder(IJKFF_Pipeline *pipeline, FFPlayer *ffp)
{
    IJKFF_Pipeline_Opaque *opaque = pipeline->opaque;
    IJKFF_Pipenode        *node = NULL;
    /* take the MediaCodec hardware-decoding path */
    if (ffp->mediacodec_all_videos || ffp->mediacodec_avc || ffp->mediacodec_hevc || ffp->mediacodec_mpeg2)
        node = ffpipenode_create_video_decoder_from_android_mediacodec(ffp, pipeline, opaque->weak_vout);
    /* fall back to FFmpeg software decoding */
    if (!node) {
        node = ffpipenode_create_video_decoder_from_ffplay(ffp);
    }

    return node;
}

From this code we can see that video decoding can go through MediaCodec or through FFmpeg. Choosing MediaCodec requires the conditions in the if to hold, which the upper layer controls through these options:

@ijkmedia\ijkplayer\ff_ffplay_options.h:

    // Android only options
    { "mediacodec",                             "MediaCodec: enable H264 (deprecated by 'mediacodec-avc')",
        OPTION_OFFSET(mediacodec_avc),          OPTION_INT(0, 0, 1) },
    { "mediacodec-all-videos",                  "MediaCodec: enable all videos",
        OPTION_OFFSET(mediacodec_all_videos),   OPTION_INT(0, 0, 1) },
    { "mediacodec-avc",                         "MediaCodec: enable H264",
        OPTION_OFFSET(mediacodec_avc),          OPTION_INT(0, 0, 1) },
    { "mediacodec-hevc",                        "MediaCodec: enable HEVC",
        OPTION_OFFSET(mediacodec_hevc),         OPTION_INT(0, 0, 1) },
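To actually take the hardware path, the upper layer must turn one of these on before prepare. A sketch from the native side, assuming the ijkmp_set_option_int helper and the IJKMP_OPT_CATEGORY_PLAYER category constant (mp is the IjkMediaPlayer handle; the Java layer's IjkMediaPlayer.setOption reaches the same code through JNI):

    /* assumed helper names; enable MediaCodec for H.264 and HEVC before ijkmp_prepare_async() */
    ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "mediacodec", 1);
    ijkmp_set_option_int(mp, IJKMP_OPT_CATEGORY_PLAYER, "mediacodec-hevc", 1);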

We will study only the hardware-decoding path, ffpipenode_create_video_decoder_from_android_mediacodec:

IJKFF_Pipenode *ffpipenode_create_video_decoder_from_android_mediacodec(FFPlayer *ffp, IJKFF_Pipeline *pipeline, SDL_Vout *vout)
{
    ...
    IJKFF_Pipenode *node = ffpipenode_alloc(sizeof(IJKFF_Pipenode_Opaque));
    if (!node)
        return node;
    ...
    node->func_destroy  = func_destroy;
    /* if the upper layer does not set this option, the else branch is taken */
    if (ffp->mediacodec_sync) {
        node->func_run_sync = func_run_sync_loop;
    } else {
        node->func_run_sync = func_run_sync;
    }
    node->func_flush    = func_flush;
    opaque->pipeline    = pipeline;
    opaque->ffp         = ffp;
    opaque->decoder     = &is->viddec;
    opaque->weak_vout   = vout;
    ...
}

After confirming the important function pointers, we move on to vdec's decoding thread. Recall the decoder_start call in stream_component_open above; its implementation is sketched next, followed by the thread entry it is given.
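decoder_start is short. A sketch based on the ffplay version, with the extra thread-name parameter ijkplayer adds (the exact body may differ slightly between versions):

static int decoder_start(Decoder *d, int (*fn)(void *), void *arg, const char *name)
{
    packet_queue_start(d->queue);
    d->decoder_tid = SDL_CreateThreadEx(&d->_decoder_tid, fn, arg, name);
    if (!d->decoder_tid) {
        av_log(NULL, AV_LOG_ERROR, "SDL_CreateThread(): %s\n", SDL_GetError());
        return AVERROR(ENOMEM);
    }
    return 0;
}

The fn it starts for video is video_thread: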

static int video_thread(void *arg)
{
    FFPlayer *ffp = (FFPlayer *)arg;
    int       ret = 0;

    if (ffp->node_vdec) {
        ret = ffpipenode_run_sync(ffp->node_vdec);
    }
    return ret;
}

The reason I spent so much space analyzing node_vdec is to pin down what ffpipenode_run_sync does:

int ffpipenode_run_sync(IJKFF_Pipenode *node)
{
    return node->func_run_sync(node);
}

node->func_run_sync points to func_run_sync:

static int func_run_sync(IJKFF_Pipenode *node)
{
    ...
    /* once a MediaCodec decoder has been found, this if is not entered */
    if (!opaque->acodec) {
        return ffp_video_thread(ffp);
    }
    ...

    frame = av_frame_alloc();
    if (!frame)
        goto fail;
    /* 1. create the thread that feeds input data */
    opaque->enqueue_thread = SDL_CreateThreadEx(&opaque->_enqueue_thread, enqueue_thread_func, node, "amediacodec_input_thread");
    if (!opaque->enqueue_thread) {
        ALOGE("%s: SDL_CreateThreadEx failed\n", __func__);
        ret = -1;
        goto fail;
    }
    while (!q->abort_request) {
        ...
        got_frame = 0;
        /* 2. fetch an output buffer */
        ret = drain_output_buffer(env, node, timeUs, &dequeue_count, frame, &got_frame);
        ...
        if (got_frame) {
            /* 3. enqueue the output picture */
            ret = ffp_queue_picture(ffp, frame, pts, duration, av_frame_get_pkt_pos(frame), is->viddec.pkt_serial);
            if (ret) {
                if (frame->opaque)
                    SDL_VoutAndroid_releaseBufferProxyP(opaque->weak_vout, (SDL_AMediaCodecBufferProxy **)&frame->opaque, false);
            }
            av_frame_unref(frame);
        }
    }
}

3. Feeding data into MediaCodec:
If you are familiar with how Android MediaCodec is driven, you can see that the function above condenses the whole workflow. First, ijkplayer creates a dedicated thread, enqueue_thread_func, to feed data into MediaCodec:

static int enqueue_thread_func(void *arg)
{
    ....
    while (!q->abort_request && !opaque->abort) {
        ret = feed_input_buffer(env, node, AMC_INPUT_TIMEOUT_US, &dequeue_count);
        if (ret != 0) {
            goto fail;
        }
    }
    ...
}

As long as the packet queue has not been asked to abort, feed_input_buffer is called continuously:

static int feed_input_buffer(JNIEnv *env, IJKFF_Pipenode *node, int64_t timeUs, int *enqueue_count)
{
    ...
    /* dequeue the index of an input buffer from MediaCodec */
    input_buffer_index = SDL_AMediaCodec_dequeueInputBuffer(opaque->acodec, timeUs);
    ...
    /* write the to-be-decoded data from the packet into MediaCodec */
    copy_size = SDL_AMediaCodec_writeInputData(opaque->acodec, input_buffer_index, d->pkt_temp.data, d->pkt_temp.size);
    ...
    /* once the data is written, queue the input buffer for decoding */
    amc_ret = SDL_AMediaCodec_queueInputBuffer(opaque->acodec, input_buffer_index, 0, copy_size, time_stamp, queue_flags);
}

Readers who are interested can trace how this reflects through JNI into the Java layer. The main job of this function is to dequeue an available input buffer from MediaCodec, write the source data from the packet into it, and then queue it back.
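For comparison, here is the same three-step input pattern written directly against the NDK's AMediaCodec API (available since API 21); the SDL_AMediaCodec_* calls above wrap this same sequence, and on older systems ijkplayer reaches the Java MediaCodec through JNI reflection instead. codec, pkt_data, pkt_size and ptsUs are stand-in names:

#include <string.h>
#include <media/NdkMediaCodec.h>

/* sketch: feed one compressed packet into the decoder */
static void feed_one_packet(AMediaCodec *codec, const uint8_t *pkt_data, size_t pkt_size, int64_t ptsUs)
{
    ssize_t idx = AMediaCodec_dequeueInputBuffer(codec, 10000 /* timeoutUs */);
    if (idx >= 0) {
        size_t capacity = 0;
        uint8_t *buf = AMediaCodec_getInputBuffer(codec, (size_t)idx, &capacity);
        size_t n = pkt_size < capacity ? pkt_size : capacity;
        memcpy(buf, pkt_data, n);   /* write the source data */
        AMediaCodec_queueInputBuffer(codec, (size_t)idx, 0, n, ptsUs, 0);
    }
}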

4. Taking data out of MediaCodec:
Next, let's look at how ijkplayer handles the data MediaCodec has decoded.
Back in func_run_sync, first look at drain_output_buffer inside the while loop:

static int drain_output_buffer(JNIEnv *env, IJKFF_Pipenode *node, int64_t timeUs, int *dequeue_count, AVFrame *frame, int *got_frame)
{
    ...
    int ret = drain_output_buffer_l(env, node, timeUs, dequeue_count, frame, got_frame);
    ...
}

static int drain_output_buffer_l(JNIEnv *env, IJKFF_Pipenode *node, int64_t timeUs, int *dequeue_count, AVFrame *frame, int *got_frame)
{
    ...
    /* dequeue the index of an available buffer from MediaCodec's output queue */
    output_buffer_index = SDL_AMediaCodecFake_dequeueOutputBuffer(opaque->acodec, &bufferInfo, timeUs);
    if (output_buffer_index == AMEDIACODEC__INFO_OUTPUT_BUFFERS_CHANGED) {
        ALOGI("AMEDIACODEC__INFO_OUTPUT_BUFFERS_CHANGED\n");
        // continue;
    }
    ...
    else if (output_buffer_index >= 0)
    {
        ...
        if (opaque->n_buf_out)
        {
            ...
        }
        /* take the else branch to copy the data */
        else
        {
            ret = amc_fill_frame(node, frame, got_frame, output_buffer_index, SDL_AMediaCodec_getSerial(opaque->acodec), &bufferInfo);
        }
    }
}

This function is very long. On the surface it merely obtains the index of a usable output buffer by calling into MediaCodec, but the buffer itself still has to be located so the copy can happen. Take a look at the amc_fill_frame function:

static int amc_fill_frame(
    IJKFF_Pipenode            *node,
    AVFrame                   *frame,
    int                       *got_frame,
    int                        output_buffer_index,
    int                        acodec_serial,
    SDL_AMediaCodecBufferInfo *buffer_info)
{
    IJKFF_Pipenode_Opaque *opaque     = node->opaque;
    FFPlayer              *ffp        = opaque->ffp;
    VideoState            *is         = ffp->is;

    /* set up a proxy for the output buffer */
    frame->opaque = SDL_VoutAndroid_obtainBufferProxy(opaque->weak_vout, acodec_serial, output_buffer_index, buffer_info);
    if (!frame->opaque)
        goto fail;

    frame->width  = opaque->frame_width;
    frame->height = opaque->frame_height;
    frame->format = IJK_AV_PIX_FMT__ANDROID_MEDIACODEC;
    frame->sample_aspect_ratio = opaque->codecpar->sample_aspect_ratio;
    frame->pts    = av_rescale_q(buffer_info->presentationTimeUs, AV_TIME_BASE_Q, is->video_st->time_base);
    if (frame->pts < 0)
        frame->pts = AV_NOPTS_VALUE;
    // ALOGE("%s: %f", __func__, (float)frame->pts);

    *got_frame = 1;
    return 0;
fail:
    *got_frame = 0;
    return -1;
}

Note this spot: ijkplayer sets up a proxy through which the data will be handled, but tracing down the code reveals no copy at all; it merely records the buffer index:

proxy->buffer_index  = buffer_index;

That means the actual copy is deferred until later.
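This is the usual MediaCodec pattern: the frame carries only a proxy holding the buffer index, while the real buffer stays inside MediaCodec until someone releases it. With the NDK API the deferred step looks like the sketch below (codec and render_now are stand-ins); in ijkplayer, SDL_VoutAndroid_releaseBufferProxyP ultimately performs the equivalent release:

/* sketch: hand a decoded output buffer back to the codec later */
AMediaCodecBufferInfo info;
ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /* timeoutUs */);
if (idx >= 0) {
    /* render_now == true sends the buffer to the Surface bound at configure time,
     * render_now == false discards it; either way it returns to the codec's pool */
    AMediaCodec_releaseOutputBuffer(codec, (size_t)idx, render_now);
}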
Going back to the outer func_run_sync function, take a look at ffp_queue_picture:

int ffp_queue_picture(FFPlayer *ffp, AVFrame *src_frame, double pts, double duration, int64_t pos, int serial)
{
    return queue_picture(ffp, src_frame, pts, duration, pos, serial);
}

queue_picture lives in ff_ffplay.c. ijkplayer's version of it is also quite involved; only the key points are shown here:

static int queue_picture(FFPlayer *ffp, AVFrame *src_frame, double pts, double duration, int64_t pos, int serial)
{
    ...
    /* dequeue a writable frame slot waiting to be filled */
    if (!(vp = frame_queue_peek_writable(&is->pictq)))
        return -1;
    ...
    /* copy the data: src_frame -> vp->bmp */
    if (SDL_VoutFillFrameYUVOverlay(vp->bmp, src_frame) < 0) {
        av_log(NULL, AV_LOG_FATAL, "Cannot initialize the conversion context\n");
        exit(1);
    }
    ...
    /* push the picture into the queue */
    frame_queue_push(&is->pictq);
    ...
}
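The two frame-queue calls above come straight from ffplay: the producer blocks until the ring buffer has a writable slot, and the push makes the frame visible to the renderer. A sketch based on the ffplay implementation these are taken from:

static Frame *frame_queue_peek_writable(FrameQueue *f)
{
    /* wait until there is room for a new frame */
    SDL_LockMutex(f->mutex);
    while (f->size >= f->max_size && !f->pktq->abort_request) {
        SDL_CondWait(f->cond, f->mutex);
    }
    SDL_UnlockMutex(f->mutex);

    if (f->pktq->abort_request)
        return NULL;
    return &f->queue[f->windex];
}

static void frame_queue_push(FrameQueue *f)
{
    if (++f->windex == f->max_size)
        f->windex = 0;
    SDL_LockMutex(f->mutex);
    f->size++;
    SDL_CondSignal(f->cond);  /* wake the consumer (the video refresh thread) */
    SDL_UnlockMutex(f->mutex);
}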

After a long search, this is finally where the copy happens. Once the video frames decoded by MediaCodec have been enqueued, the next step is synchronization and rendering.

Original article: blog.csdn.net/achina2011jy/article/details/115895624