FFmpeg video playback (demuxing)

A quick tour of FFmpeg

Video codecs are usually divided into software codecs (FFmpeg) and hardware codecs (MediaCodec). Hardware codecs are efficient and fast but have poor device compatibility, so here we choose FFmpeg. FFmpeg can also integrate third-party codec libraries such as x264, faac, lame, fdk-aac, and so on; most video sites on the market likewise use FFmpeg for muxing and software transcoding. The main FFmpeg modules and their roles are introduced below:

libavformat

Generates and parses the various audio/video container formats, including obtaining the information needed for decoding (to build the decoding context) and reading audio/video frames. Its protocol layer provides libavcodec with an independent audio or video stream source.

libavcodec

Encodes and decodes all kinds of audio and image formats. This library is the core of audio/video codec work and implements most of the decoders found in the wild; it is included or used by other major players such as ffdshow and MPlayer.

libavfilter

Audio/video filter development (file I/O, FPS, DrawText, ...), such as watermarks and variable-speed playback.

libavutil

Contains common utility functions, including arithmetic and string operations.

libswresample

Raw audio format conversion (resampling and sample-format conversion).


libswscale

(Raw video format conversion) Used for image scaling and color-space/pixel-format conversion, e.g. between RGB565, RGB888, etc. and YUV420.

libpostproc

Video post-processing, used together with libavcodec.

Video playback process

Java layer

Only a brief overview here; for details see the source code (PlayActivity, Player, and related classes).

DataSource: the playback source.

TinaPlayer: controls player state such as start and stop.
SurfaceView/TextureView: displays the video and provides the Surface to the Player.

Native layer

Initialization

First obtain the video source from the Java layer and hand it to the TinaFFmpeg class for processing.

extern "C"
JNIEXPORT void JNICALL
Java_tina_com_player_TinaPlayer_native_1prepare(JNIEnv *env, jobject instance,
                                                jstring dataSource_) {
    const char *dataSource = env->GetStringUTFChars(dataSource_, 0);
    callHelper = new JavaCallHelper(javaVM, env, instance);
    ffmpeg = new TinaFFmpeg(callHelper, dataSource);
    ffmpeg->setRenderFrameCallback(render);
    ffmpeg->prepare();
    env->ReleaseStringUTFChars(dataSource_, dataSource);
}

// Save the arguments into TinaFFmpeg via the constructor
TinaFFmpeg::TinaFFmpeg(JavaCallHelper *callHelper, const char *dataSource) {
    // A deep copy is required; keeping the raw pointer would leave it dangling
//    this->dataSource = const_cast<char *>(dataSource);
    this->callHelper = callHelper;
    // strlen returns the string length, excluding the trailing '\0'
    this->dataSource = new char[strlen(dataSource) + 1];
    strcpy(this->dataSource, dataSource);
}

TinaFFmpeg is the class that handles the whole pipeline (demuxing, decoding, rendering, and audio/video synchronization) in an object-oriented wrapper:

class TinaFFmpeg {
public:
    TinaFFmpeg(JavaCallHelper *javaCallHelper, const char *dataSource);
    ~TinaFFmpeg();
    void prepare();
    void _prepare();
    void start();
    void _start();
    void setRenderFrameCallback(RenderFrameCallback callback);
    void stop();
public:
    char *dataSource;
    pthread_t pid;
    pthread_t pid_play;
    AVFormatContext *formatContext = 0;
    JavaCallHelper *callHelper;
    AudioChannel *audioChannel = 0; // best to initialize pointers to null
    VideoChannel *videoChannel = 0;
    bool isPlaying;
    RenderFrameCallback callback;
    pthread_t pid_stop;
};

Decoding process

The TinaFFmpeg decoding flow calls into FFmpeg functions; note that av_register_all() no longer needs to be called in newer FFmpeg versions (it was deprecated in FFmpeg 4.0).

Demuxing (prepare)

prepare demuxes the container, separating the video and audio information, and spawns a thread to do the work:

void TinaFFmpeg::prepare() {
    // Create a thread; task_prepare is the thread routine, `this` its argument
    pthread_create(&pid, 0, task_prepare, this);
}

// Thread entry: forwards into the real worker function, _prepare
void *task_prepare(void *args) {
    TinaFFmpeg *ffmpeg = static_cast<TinaFFmpeg *>(args);
    ffmpeg->_prepare();
    return 0;
}

_prepare works with the AVFormatContext *formatContext declared in TinaFFmpeg.h and extracts the video and audio streams from it. The container is opened with avformat_open_input(&formatContext, dataSource, 0, &options); this call is time-consuming and may fail (bad file path, no network), so errors must be reported back to Java, which requires JavaCallHelper to bridge into the Java methods.

void TinaFFmpeg::_prepare() {

    // Initialize networking so FFmpeg can open network streams
    avformat_network_init();

    // 1. Open the media source (file path or live-stream URL)
    //    AVFormatContext holds the media's information (width, height, ...)
    //    Fails on a bad file path or when the phone has no network
    //    3rd parameter: the input format (pass NULL and FFmpeg probes MP4, FLV, ...)
    //    4th parameter: options dictionary
    AVDictionary *options = 0;
    // Set the timeout, in microseconds
    av_dict_set(&options, "timeout", "5000000", 0);
    // Time-consuming call
    int ret = avformat_open_input(&formatContext, dataSource, 0, &options);

    av_dict_free(&options);

    // A non-zero ret means opening the media failed
    if (ret != 0) {
        LOGE("Failed to open media: %s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_OPEN_URL);
        return;
    }

    // 2. Find the streams in the media
    ret = avformat_find_stream_info(formatContext, 0);
    if (ret < 0) {
        LOGE("Failed to find streams: %s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_FIND_STREAMS);
        return;
    }
    // ...
}

JavaCallHelper handles reflective calls from the Native layer into the Java layer; here it only invokes the onError and onPrepare methods of the Java object passed in (the TinaPlayer instance). A JNIEnv * is only valid on the thread that obtained it; other threads must fetch their own through JavaVM *vm. The object passed in must be promoted to a global reference via env->NewGlobalRef(instace).

JavaCallHelper::JavaCallHelper(JavaVM *vm, JNIEnv *env, jobject instace) {
    this->vm = vm;
    // Used when calling back on the main thread
    this->env = env;
    // Once a jobject crosses methods or threads, a global reference is required
    this->instance = env->NewGlobalRef(instace);
    jclass clazz = env->GetObjectClass(instace);
    onErrorId = env->GetMethodID(clazz, "onError", "(I)V");
    onPrepareId = env->GetMethodID(clazz, "onPrepare", "()V");
}

JavaCallHelper::~JavaCallHelper() {
    env->DeleteGlobalRef(instance);
}

void JavaCallHelper::onError(int thread, int error) {
    // Main thread
    if (thread == THREAD_MAIN) {
        env->CallVoidMethod(instance, onErrorId, error);
    } else {
        // Child thread
        JNIEnv *env;
        // Obtain the JNIEnv that belongs to this thread
        vm->AttachCurrentThread(&env, 0);
        env->CallVoidMethod(instance, onErrorId, error);
        vm->DetachCurrentThread();
    }
}
void JavaCallHelper::onPrepare(int thread) {
    // ...
}

Decode audio and video

From AVFormatContext *formatContext, obtain the video and audio streams, look up the matching decoder (AVCodec), and hand each decoder context to the corresponding VideoChannel or AudioChannel, which performs the actual decoding.

void TinaFFmpeg::_prepare() {
    // Time-consuming call
    int ret = avformat_open_input(&formatContext, dataSource, 0, &options);
    av_dict_free(&options);
    // ...
    // 2. Find the streams in the media
    ret = avformat_find_stream_info(formatContext, 0);
    if (ret < 0) {
        LOGE("Failed to find streams: %s", av_err2str(ret));
        callHelper->onError(THREAD_CHILD, FFMPEG_CAN_NOT_FIND_STREAMS);
        return;
    }
    // nb_streams: how many streams (video/audio tracks) the media contains
    for (int i = 0; i < formatContext->nb_streams; ++i) {
        // May represent a video stream or an audio stream
        AVStream *stream = formatContext->streams[i];
        // Holds the parameters needed to decode this stream
        AVCodecParameters *codecpar = stream->codecpar;
        // For both audio and video: look up the decoder
        AVCodec *dec = avcodec_find_decoder(codecpar->codec_id);
        if (dec == NULL) {
            LOGE("Failed to find decoder for codec id %d", codecpar->codec_id);
            callHelper->onError(THREAD_CHILD, FFMPEG_FIND_DECODER_FAIL);
            return;
        }
        // Allocate the decoder context
        AVCodecContext *context = avcodec_alloc_context3(dec);
        if (context == NULL) {
            LOGE("Failed to allocate codec context");
            callHelper->onError(THREAD_CHILD, FFMPEG_ALLOC_CODEC_CONTEXT_FAIL);
            return;
        }
        // 3. Copy the stream parameters into the context
        ret = avcodec_parameters_to_context(context, codecpar);
        if (ret < 0) {
            LOGE("Failed to set codec context parameters: %s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_OPEN_DECODER_FAIL);
            return;
        }
        // 4. Open the decoder
        ret = avcodec_open2(context, dec, 0);
        if (ret != 0) {
            LOGE("Failed to open decoder: %s", av_err2str(ret));
            callHelper->onError(THREAD_CHILD, FFMPEG_OPEN_DECODER_FAIL);
            return;
        }
        // Time base: the unit of this stream's timestamps
        AVRational time_base = stream->time_base;
        // Audio
        if (codecpar->codec_type == AVMEDIA_TYPE_AUDIO) {
            audioChannel = new AudioChannel(i, context, time_base);
        } else if (codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { // video
            // Frame rate: how many pictures to display per unit of time
            AVRational frame_rate = stream->avg_frame_rate;
            int fps = av_q2d(frame_rate);
            videoChannel = new VideoChannel(i, context, time_base, fps);
            videoChannel->setRenderFrameCallback(callback);
        }
    }
    if (!audioChannel && !videoChannel) {
        LOGE("No audio or video stream found");
        callHelper->onError(THREAD_CHILD, FFMPEG_NOMEDIA);
        return;
    }
    LOGE("Native prepare finished");
    // Preparation is done; notify Java that playback can start at any time
    callHelper->onPrepare(THREAD_CHILD);
}

The next part moves on to video and audio decoding, audio/video synchronization, and related steps; the source code address is attached.

Original: FFmpeg video playback (demuxing) - Nuggets

Origin blog.csdn.net/yinshipin007/article/details/130714097