Learning FFmpeg on Android (5): Audio/Video Synchronization + the Simplest Video Player

This series of articles records my process of learning FFmpeg.


First, create a new project and set up the environment by following the earlier article, "Learning FFmpeg on Android (2): Environment Setup".


The rough flow is: get the media file path -> pass the path down to the NDK layer -> open the media file with FFmpeg -> read the media file's information with FFmpeg -> from that information obtain the video stream and the audio stream -> from the streams obtain the information of the required video and audio decoders -> use that information to look up the corresponding video and audio decoders inside FFmpeg -> open both decoders -> start worker thread 1: read the file's video and audio packets into two cache queues (both still undecoded) -> start worker thread 2: take packets from the video queue, decode and draw them -> start worker thread 3: take packets from the audio queue, decode and play them -> synchronize audio and video

For the synchronization part I referred to the article "FFmpeg学习6:视音频同步" (FFmpeg Study 6: Audio/Video Synchronization).

Broadly speaking, there are three synchronization strategies:

1. Sync video to audio: use the audio playback rate as the reference and adjust video to it.
2. Sync audio to video: use the video playback rate as the reference and adjust audio to it.
3. Sync both to an external clock: pick an external clock as the reference and drive both audio and video playback from it.

Synchronization relies on the DTS (decoding time stamp) and the PTS (presentation time stamp).

DTS (decoding time stamp): tells us when the current frame should be decoded (measured in units of time_base, the stream's time base).

PTS (presentation time stamp): tells us when the current frame should be displayed (measured in units of time_base).
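
Both timestamps are needed because, in a stream that contains B-frames, decode order and display order differ: frames displayed as I B B P are decoded as I P B B, so a frame's DTS and PTS generally do not coincide.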

So how do we get the current display time (in seconds)?

Taking the video stream as an example:

1. Get the time base: pFormatCtx->streams[video_stream_index]->time_base (of type AVRational)

/**
 * rational number numerator/denominator
 */
typedef struct AVRational{
    int num; ///< numerator
    int den; ///< denominator
} AVRational;

2. Convert the time base to a double with av_q2d():

double time_base_d = av_q2d(pFormatCtx->streams[video_stream_index]->time_base);

3. Get the PTS of the current video frame:

if(pPacket->pts != AV_NOPTS_VALUE){
   pFrame->pts = pPacket->pts;
} else{
   pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
}
int64_t thisFramePTS = pFrame->pts;

4. Compute the current display time (in seconds):

int64_t time = (int64_t)(thisFramePTS * time_base_d);// PTS * time_base = current display time (in seconds)
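
To make this concrete, here is a small worked example (the numbers are hypothetical, not taken from the demo file): with a 90 kHz time base, which is common in MPEG streams, a frame whose PTS is 270000 should be displayed at the 3-second mark.

AVRational tb = {1, 90000}; // hypothetical time base of 1/90000
int64_t pts = 270000;       // hypothetical PTS of the current frame
double seconds = pts * av_q2d(tb); // 270000 * (1/90000) = 3.0 s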

The producer-consumer pattern:

The producer puts data into a buffer, and the consumers take data out of it.
If the buffer is full, the producer thread blocks;

if the buffer is empty, the consumer threads block.

Worker thread 1 (reads packets) = the producer

Worker thread 2 (takes video packets, decodes and draws them) = consumer 1

Worker thread 3 (takes audio packets, decodes and plays them) = consumer 2

Note: the threading code assumes some basic familiarity with POSIX threads; a minimal sketch of the pattern follows below.
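
Here is a minimal, self-contained sketch of that bounded-buffer pattern, assuming one mutex and one condition variable shared by the producer and both consumers (the same arrangement the player uses below); the names produce/consume and the queue of ints are illustrative stand-ins for the AVPacket queues:

#include <pthread.h>
#include <queue>

static std::queue<int> buffer;         // stands in for the AVPacket cache queues
static const unsigned MAX_CACHE = 60;  // same cap as MAX_READ_CACHE below
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

void produce(int item) {
    pthread_mutex_lock(&mtx);
    while (buffer.size() >= MAX_CACHE)  // buffer full -> producer waits
        pthread_cond_wait(&cond, &mtx);
    buffer.push(item);
    pthread_cond_broadcast(&cond);      // wake any waiting consumer
    pthread_mutex_unlock(&mtx);
}

int consume() {
    pthread_mutex_lock(&mtx);
    while (buffer.empty())              // buffer empty -> consumer waits
        pthread_cond_wait(&cond, &mtx);
    int item = buffer.front();
    buffer.pop();
    pthread_cond_broadcast(&cond);      // wake the producer if it was blocked
    pthread_mutex_unlock(&mtx);
    return item;
}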

Now let's write the code.

Add the permissions in AndroidManifest.xml:

    <uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
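
(Note: on Android 6.0 and above these storage permissions are "dangerous" permissions and must additionally be requested at runtime, otherwise opening the file from external storage will fail; the demo omits that step for brevity.)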

The activity_main.xml layout:

<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.jamingx.ffmpegtest.MainActivity">

    <SurfaceView
        android:id="@+id/surfaceview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"/>

    <Button
        android:id="@+id/btn_play"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Start" />



</android.support.constraint.ConstraintLayout>

Java code:

FFmpegUtil.java:

package com.jamingx.ffmpegtest;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.util.Log;
import android.view.Surface;

/**
 * Created by jamingx on 2018/3/7 9:29
 */

public class FFmpegUtil {
    static {
        System.loadLibrary("ffmpegutil");
    }

    public native void init();
    public native void play(String input,Surface surface);
    public native void destroy();
    public void playing(long time){
        Log.e("TAG","time "+ time);
    }

    /**
     * Create an AudioTrack
     * @param sampleRateInHz sample rate in Hz
     * @param nb_channals number of channels
     * @return AudioTrack
     */
    public AudioTrack createAudioTrack(int sampleRateInHz, int nb_channals) {
        int channelConfig;
        if (nb_channals == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else if (nb_channals == 2) {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        }
        int buffersize = AudioTrack.getMinBufferSize(sampleRateInHz,
                channelConfig, AudioFormat.ENCODING_PCM_16BIT);
        AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRateInHz, channelConfig,
                AudioFormat.ENCODING_PCM_16BIT, buffersize, AudioTrack.MODE_STREAM);
        return audioTrack;
    }
}

MainActivity.java:

package com.jamingx.ffmpegtest;

import android.graphics.PixelFormat;
import android.os.Bundle;
import android.os.Environment;
import android.support.v7.app.AppCompatActivity;
import android.view.Surface;
import android.view.SurfaceHolder;
import android.view.SurfaceView;
import android.view.View;


public class MainActivity extends AppCompatActivity {
    private SurfaceView surfaceview;
    public FFmpegUtil fFmpegUtil;
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        surfaceview = (SurfaceView) findViewById(R.id.surfaceview);
        SurfaceHolder holder = surfaceview.getHolder();
        holder.setFormat(PixelFormat.RGBA_8888);// NOTE: set the SurfaceView's pixel format to RGBA_8888
        fFmpegUtil = new FFmpegUtil();
        fFmpegUtil.init();
        findViewById(R.id.btn_play).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                final String input7 = Environment.getExternalStorageDirectory().getAbsolutePath() + "/input7.mpg";
                final Surface surface = surfaceview.getHolder().getSurface();
                new Thread(){
                    @Override
                    public void run() {
                        fFmpegUtil.play(input7,surface);
                    }
                }.start();
            }
        });

    }

    @Override
    protected void onDestroy() {
        super.onDestroy();
        fFmpegUtil.destroy();
    }
}

The .mk files:

Android.mk:

LOCAL_PATH := $(call my-dir)


# Prebuilt FFmpeg library
include $(CLEAR_VARS)
LOCAL_MODULE := ffmpeg
LOCAL_SRC_FILES := $(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libffmpeg.so
include $(PREBUILT_SHARED_LIBRARY)


# Native sources
include $(CLEAR_VARS)
LOCAL_MODULE := ffmpegutil
LOCAL_SRC_FILES := ffmpegutil.cpp com_jamingx_ffmpegtest_FFmpegUtil.cpp
LOCAL_C_INCLUDES += $(LOCAL_PATH)/include/ffmpeg
LOCAL_LDLIBS := -llog -landroid
LOCAL_SHARED_LIBRARIES := ffmpeg

include $(BUILD_SHARED_LIBRARY)


Application.mk:

APP_STL := gnustl_static
APP_CPPFLAGS := -frtti -fexceptions
APP_PLATFORM := android-15

APP_ABI := armeabi


Header files:

com_jamingx_ffmpegtest_FFmpegUtil.h:

/* DO NOT EDIT THIS FILE - it is machine generated */
#include <jni.h>
/* Header for class com_jamingx_ffmpegtest_FFmpegUtil */

#ifndef _Included_com_jamingx_ffmpegtest_FFmpegUtil
#define _Included_com_jamingx_ffmpegtest_FFmpegUtil
#ifdef __cplusplus
extern "C" {
#endif

JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_init
        (JNIEnv *, jobject);

JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_play
  (JNIEnv *, jobject,jstring,jobject);

JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_destroy
        (JNIEnv *, jobject);

#ifdef __cplusplus
}
#endif
#endif

ffmpegutil.h:

//
// Created by Administrator on 2018/3/17.
//

#ifndef LIVEDEMO_FFMPEGUTIL_H
#define LIVEDEMO_FFMPEGUTIL_H

#include <stdint.h>

void initFFmpeg();                     // register FFmpeg's components
void initFFmpegInfo(const char *path); // initialize the FFmpegInfo struct plus the video and audio decoders
void destroyFFmpegInfo();              // free the FFmpegInfo struct
void read();                           // start the packet-reading thread
void startDecodec();                   // start the video and audio decoding threads
void stopDecodec();                    // stop decoding
void initPOSIXInfo();                  // initialize the POSIX threading primitives
void destroyPOSIXInfo();               // free them



typedef struct {
    void* (*startDecodecVideo)(void*);
    void* (*onDecodecVideo)(uint8_t*,uint32_t,uint32_t,uint32_t,int64_t);
    void* (*endDecodecVideo)(void*);
    void* (*startDecodecAudio)(int32_t,int32_t);
    void* (*onDecodecAudio)(uint8_t*,uint32_t);
    void* (*endDecodecAudio)(void*);
} DecodecListener;// decoding callback interface

void setDecodecListener(DecodecListener* decodecListener);// register the callbacks

#endif //LIVEDEMO_FFMPEGUTIL_H


C++ code:

ffmpegutil.cpp:

//
// Created by Administrator on 2018/3/7.
//

#include <pthread.h>
#include <queue>
#include <stdlib.h>
#include <string.h>
#include "ffmpegutil.h"
#include <android/log.h>

#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"TAG",FORMAT,##__VA_ARGS__);


#define MAX_READ_CACHE 60
#define MAX_AUDIO_FRAME_SIZE 48000*4

extern "C" {
#include "libavcodec/avcodec.h"
#include "libavformat/avformat.h"
#include "libavutil/imgutils.h"
#include "libswscale/swscale.h"
#include "libswresample/swresample.h"
#include "libavutil/time.h"
}

using namespace std;


struct FFmpegInfo{
    AVFormatContext *pFormatCtx;// demuxing (container format) context
    AVCodecContext *pVideoCodecCtx;// video decoder context
    AVCodecContext *pAudioCodecCtx;// audio decoder context
    AVCodec * pVideoCodec;// video decoder
    AVCodec * pAudioCodec;// audio decoder
    int max_stream;// total number of streams
    int video_stream_index;// index of the video stream
    int audio_stream_count;// number of audio streams
    int *audio_stream_indexs;// indexes of the audio streams
    int subtitle_stream_count;// number of subtitle streams
    int *subtitle_stream_indexs;// indexes of the subtitle streams
    int64_t video_clock;// current video time
    int64_t audio_clock;// current audio time
};

struct POSIXInfo{
    pthread_t read_id;// id of the file-reading thread
    pthread_t playVideo_id;// id of the video playback thread
    pthread_t playAudio_id;// id of the audio playback thread
    pthread_mutex_t pthread_mutex;
    pthread_cond_t pthread_cond;
};



FFmpegInfo *ffmpegInfo;
queue<AVPacket*> videoAVPackets;// cache queue of undecoded video packets
queue<AVPacket*> audioAVPackets;// cache queue of undecoded audio packets
POSIXInfo* posixInfo;
DecodecListener* pDecodecListener;// decoding callback listener


bool decodec_flag = false;


/**
 * Register FFmpeg's components
 */
void initFFmpeg(){
    // register all components
    av_register_all();
//    avformat_network_init();
}

/**
 * Initialize the FFmpegInfo struct
 * and the video and audio decoders
 * @param path
 */
void initFFmpegInfo(const char *path) {
    ffmpegInfo = (FFmpegInfo *) malloc(sizeof(FFmpegInfo));
    memset(ffmpegInfo, 0, sizeof(FFmpegInfo));// malloc does not zero memory: null the pointers for the NULL checks in destroyFFmpegInfo()
    ffmpegInfo->video_stream_index = -1;// so the "no video stream" check below works
    // initialize AVFormatContext *pFormatCtx

    ffmpegInfo->pFormatCtx = avformat_alloc_context();
    if (avformat_open_input(&ffmpegInfo->pFormatCtx, path, NULL, NULL) != 0) {
//        LOGE("failed to open the input file");
        return;
    }

    if (avformat_find_stream_info(ffmpegInfo->pFormatCtx, NULL) < 0) {
//        LOGE("failed to read the media file's stream information");
        return;
    }
    // dump the media information
    av_dump_format(ffmpegInfo->pFormatCtx, -1, path, 0);
    ffmpegInfo->max_stream = ffmpegInfo->pFormatCtx->nb_streams > 0 ? ffmpegInfo->pFormatCtx->nb_streams : 0;

    if (ffmpegInfo->max_stream == 0) {
//        LOGE("没有流");
        return;
    }
    ffmpegInfo->audio_stream_count = 0;
    ffmpegInfo->subtitle_stream_count = 0;
    for (int i = 0; i < ffmpegInfo->max_stream; ++i) {
        if (ffmpegInfo->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            ffmpegInfo->video_stream_index = i;// remember the index of the video stream
        } else if (ffmpegInfo->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
            ffmpegInfo->audio_stream_count++;// one more audio stream
        } else if (ffmpegInfo->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_SUBTITLE) {
            ffmpegInfo->subtitle_stream_count++;// one more subtitle stream
        }
    }
    // collect the indexes of the audio streams and subtitle streams
    AVMediaType temp[] = {AVMEDIA_TYPE_AUDIO, AVMEDIA_TYPE_SUBTITLE};
    for (int j = 0; j < 2; ++j) {
        AVMediaType type = temp[j];
        int index = 0;
        if (type == AVMEDIA_TYPE_AUDIO && ffmpegInfo->audio_stream_count > 0) {// collect the audio stream indexes
            ffmpegInfo->audio_stream_indexs = (int *) malloc(sizeof(int) * ffmpegInfo->audio_stream_count);
            for (int i = 0; i < ffmpegInfo->max_stream; ++i) {
                if (ffmpegInfo->pFormatCtx->streams[i]->codec->codec_type == AVMEDIA_TYPE_AUDIO) {
                    ffmpegInfo->audio_stream_indexs[index++] = i;
                }
            }
        } else if (type == AVMEDIA_TYPE_SUBTITLE && ffmpegInfo->subtitle_stream_count > 0) {// collect the subtitle stream indexes
            ffmpegInfo->subtitle_stream_indexs = (int *) malloc(
                    sizeof(int) * ffmpegInfo->subtitle_stream_count);
            for (int i = 0; i < ffmpegInfo->max_stream; ++i) {
                if (ffmpegInfo->pFormatCtx->streams[i]->codec->codec_type ==
                    AVMEDIA_TYPE_SUBTITLE) {
                    ffmpegInfo->subtitle_stream_indexs[index++] = i;
                }
            }
        }
    }

    if (ffmpegInfo -> video_stream_index == -1){
//        LOGE("no video stream");
    } else{
        ffmpegInfo->pVideoCodecCtx = ffmpegInfo->pFormatCtx->streams[ffmpegInfo->video_stream_index]->codec;
        ffmpegInfo->pVideoCodec = avcodec_find_decoder(ffmpegInfo->pVideoCodecCtx->codec_id);
        // open the video decoder
        if(avcodec_open2(ffmpegInfo->pVideoCodecCtx,ffmpegInfo->pVideoCodec,NULL) < 0){
//            LOGE("failed to open the video decoder");
            return;
        }
    }

    if (ffmpegInfo -> audio_stream_count == 0){
//        LOGE("no audio stream");
    } else{
        ffmpegInfo->pAudioCodecCtx = ffmpegInfo->pFormatCtx->streams[ffmpegInfo->audio_stream_indexs[0]]->codec;// decoder context of the first audio stream
        ffmpegInfo->pAudioCodec = avcodec_find_decoder(ffmpegInfo->pAudioCodecCtx->codec_id);
        if(avcodec_open2(ffmpegInfo->pAudioCodecCtx,ffmpegInfo->pAudioCodec,NULL) < 0){
//            LOGE("failed to open the audio decoder");
        }
    }

    if (ffmpegInfo -> subtitle_stream_count == 0){
//        LOGE("no subtitle stream");
    }
}


void destroyFFmpegInfo(){
    // close the decoders
    if (ffmpegInfo->pVideoCodecCtx != NULL){
        avcodec_close(ffmpegInfo->pVideoCodecCtx);
    }
    if (ffmpegInfo->pAudioCodecCtx != NULL){
        avcodec_close(ffmpegInfo->pAudioCodecCtx);
    }
    if (ffmpegInfo->pFormatCtx != NULL){
        // close the input file
        avformat_close_input(&ffmpegInfo->pFormatCtx);
    }
    if (ffmpegInfo->audio_stream_indexs != NULL){
        free(ffmpegInfo->audio_stream_indexs);
        ffmpegInfo->audio_stream_indexs = NULL;
    }
    if (ffmpegInfo->subtitle_stream_indexs != NULL){
        free(ffmpegInfo->subtitle_stream_indexs);
        ffmpegInfo->subtitle_stream_indexs = NULL;
    }
    free(ffmpegInfo);
    ffmpegInfo = NULL;
}


void initPOSIXInfo(){
    posixInfo = (POSIXInfo*)malloc(sizeof(POSIXInfo));
    pthread_mutex_init(&posixInfo->pthread_mutex,NULL);
    pthread_cond_init(&posixInfo->pthread_cond,NULL);
}

void destroyPOSIXInfo(){
    pthread_mutex_destroy(&posixInfo->pthread_mutex);
    pthread_cond_destroy(&posixInfo->pthread_cond);
    free(posixInfo);
    posixInfo = NULL;
}

/**
 * Read packets
 * @param arg
 * @return
 */
void* run_read(void* arg){
    LOGE("read thread running");
    AVPacket *pPacket = av_packet_alloc();
//    av_seek_frame(ffmpegInfo->pFormatCtx,ffmpegInfo->video_stream_index,5000,NULL);
//    av_seek_frame(ffmpegInfo->pFormatCtx,ffmpegInfo->audio_stream_indexs[0],5000,NULL);
    while (av_read_frame(ffmpegInfo->pFormatCtx,pPacket) == 0){
        LOGE("reading...");
        // got a packet: push it into the matching cache queue
        pthread_mutex_lock(&posixInfo->pthread_mutex);
        AVPacket *temp = NULL;
        if (pPacket->stream_index == ffmpegInfo->video_stream_index){
            temp = av_packet_alloc();
            av_copy_packet(temp,pPacket);
            videoAVPackets.push(temp);
            LOGE("video cache size: %d",(int)videoAVPackets.size());
        } else
        if(pPacket->stream_index == ffmpegInfo->audio_stream_indexs[0]){
            temp = av_packet_alloc();
            av_copy_packet(temp,pPacket);
            audioAVPackets.push(temp);
            LOGE("audio cache size: %d",(int)audioAVPackets.size());
        }
        // producer: block while either cache queue is full
        while(videoAVPackets.size() >= MAX_READ_CACHE  || audioAVPackets.size() >= MAX_READ_CACHE){
            pthread_cond_wait(&posixInfo->pthread_cond,&posixInfo->pthread_mutex);
        }
        pthread_cond_broadcast(&posixInfo->pthread_cond);
        pthread_mutex_unlock(&posixInfo->pthread_mutex);

    }
    av_free(pPacket);
    return NULL;
}

/**
 * Start the packet-reading thread
 */
void read(){
    pthread_create(&posixInfo->read_id,NULL,run_read,NULL);
}

/**
 * Take video packets and decode them
 * @return
 */
void* run_video_decodec(void *){
    int got_picture_ptr;// zero if no frame could be decompressed, otherwise nonzero
    int countFrame = 0;
    int64_t start_PTS = 0;
    // AVFrame *pFrame -> holds the decoded frame
    AVFrame *pFrame = av_frame_alloc();
    // AVFrame *pFrameRGBA -> holds the frame after conversion to RGBA
    AVFrame *pFrameRGBA = av_frame_alloc();
    int w = ffmpegInfo->pVideoCodecCtx->width;
    int h = ffmpegInfo->pVideoCodecCtx->height;
    size_t oneFrameSize = (size_t)av_image_get_buffer_size(AV_PIX_FMT_RGBA,w,h,1);
    uint8_t * buff = (uint8_t*)malloc(oneFrameSize);

    // SwsContext for pixel-format conversion: decoded frames are not RGBA, so they must be converted before rendering
    SwsContext *sws_ctx = sws_getContext(w, h, ffmpegInfo->pVideoCodecCtx->pix_fmt,
                                         w, h, AV_PIX_FMT_RGBA,
                                         SWS_BILINEAR, NULL,
                                         NULL, NULL);
    if (pDecodecListener != NULL &&  pDecodecListener->startDecodecVideo != NULL){
        pDecodecListener->startDecodecVideo(NULL);
    }
//    int64_t startTime = av_gettime();// time playback started
    double time_base_d = av_q2d(ffmpegInfo->pFormatCtx->streams[ffmpegInfo->video_stream_index]->time_base);
//    int frameInterval = (int)(1/(time_base_d * av_q2d(ffmpegInfo->pFormatCtx->streams[ffmpegInfo->video_stream_index]->avg_frame_rate)));
//    int frameRate = (int)(av_q2d(ffmpegInfo->pFormatCtx->streams[ffmpegInfo->video_stream_index]->avg_frame_rate));// frame rate
    while (decodec_flag){
        LOGE("decoding video..., %d",(int)videoAVPackets.size());
        pthread_mutex_lock(&posixInfo->pthread_mutex);
        // consumer: block while the video cache queue is empty
        while (videoAVPackets.size() <= 0){
            pthread_cond_wait(&posixInfo->pthread_cond,&posixInfo->pthread_mutex);
        }

        AVPacket *pPacket = videoAVPackets.front();
        videoAVPackets.pop();

        pthread_cond_broadcast(&posixInfo->pthread_cond);
        pthread_mutex_unlock(&posixInfo->pthread_mutex);

        if (avcodec_decode_video2(ffmpegInfo->pVideoCodecCtx,pFrame,&got_picture_ptr,pPacket) < 0){
//            LOGE("decode error");
        }
        if (got_picture_ptr > 0){

            if(pPacket->pts != AV_NOPTS_VALUE){
                pFrame->pts = pPacket->pts;
            } else{
                pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
            }
//            int64_t thisFramePTS = start_PTS + frameInterval * countFrame;// compute this frame's PTS
            int64_t thisFramePTS = pFrame->pts;
            int64_t time = (int64_t)(thisFramePTS * time_base_d);// PTS * time_base = current display time (in seconds)
            ffmpegInfo->video_clock =(int64_t) ((thisFramePTS * time_base_d)* 1000000) ; // PTS * time_base = display time in seconds; *1000000 converts to microseconds
//            LOGE("video PTS:%lld, video DTS:%lld, duration:%lld, pos:%lld",pFrame->pts,pPacket->dts,pPacket->duration,pPacket->pos);
            countFrame++;
            av_image_fill_arrays(pFrameRGBA->data, pFrameRGBA->linesize, (uint8_t*)buff, AV_PIX_FMT_RGBA,
                                     w, h, 1);
            // pixel-format conversion
            sws_scale(sws_ctx,(const uint8_t *const*)pFrame->data,
                      pFrame->linesize,0,h,
                      pFrameRGBA->data,pFrameRGBA->linesize);
            if (pDecodecListener != NULL &&  pDecodecListener->onDecodecVideo != NULL){
                pDecodecListener->onDecodecVideo(buff,oneFrameSize,w,h,time);// invoke the callback to draw the frame
            }

            // sync video to audio

            // video is ahead of audio
            if (ffmpegInfo->video_clock > ffmpegInfo->audio_clock){
                av_usleep((uint)(ffmpegInfo->video_clock - ffmpegInfo->audio_clock));// sleep off the difference
            } else{
                // audio is ahead of video:
                // no delay
            }

//            // compute the reference video time (needed when syncing audio to video)
//            int64_t showTime = (int64_t)(startTime + ffmpegInfo->video_clock);
//            int64_t thisTime = av_gettime();// current time
//            if (thisTime < showTime){ // displaying too fast, slow down
////                LOGE("displaying too fast");
//                av_usleep((uint)(showTime - thisTime));
//            } else if(thisTime > showTime){// displaying too slow
//
//            } else{// exactly on time; rarely reached
//
//            }
        }
        av_packet_unref(pPacket);
        av_free(pPacket);
    }
    free(buff);
    buff = NULL;
    av_free(pFrame);
    av_free(pFrameRGBA);
    sws_freeContext(sws_ctx);

    if (pDecodecListener != NULL &&  pDecodecListener->endDecodecVideo != NULL){
        pDecodecListener->endDecodecVideo(NULL);
    }

    return NULL;
}



/**
 * Take audio packets and decode them
 * @param arg
 * @return
 */
void* run_audio_decodec(void* arg){
    // AVFrame *pFrame -> holds the decoded frame
    AVFrame *pFrame = av_frame_alloc();
    // allocate the SwrContext used for resampling
    SwrContext *swrCtx = swr_alloc();
    // input sample format
    enum AVSampleFormat in_sample_fmt = ffmpegInfo->pAudioCodecCtx->sample_fmt;
    // output sample format: 16-bit PCM
    enum AVSampleFormat out_sample_fmt = AV_SAMPLE_FMT_S16;
    // input sample rate
    int in_sample_rate = ffmpegInfo->pAudioCodecCtx->sample_rate;
    // output sample rate
    int out_sample_rate = 44100;
    // input channel layout
    uint64_t in_ch_layout = ffmpegInfo->pAudioCodecCtx->channel_layout;
    // output channel layout (stereo)
    uint64_t out_ch_layout = AV_CH_LAYOUT_STEREO;
    // apply the parameters to the SwrContext
    swr_alloc_set_opts(swrCtx,
                       out_ch_layout, out_sample_fmt, out_sample_rate,
                       in_ch_layout, in_sample_fmt, in_sample_rate,
                       0, NULL);
    // initialize the SwrContext
    swr_init(swrCtx);
    // number of output channels
    int out_channel_nb = av_get_channel_layout_nb_channels(out_ch_layout);
    // allocate a buffer for the 16-bit 44100 Hz PCM output
    uint8_t *out_buffer = (uint8_t *) av_malloc(MAX_AUDIO_FRAME_SIZE);

    if (pDecodecListener != NULL &&  pDecodecListener->startDecodecAudio != NULL){
        pDecodecListener->startDecodecAudio(out_sample_rate,out_channel_nb);
    }


    int64_t startTime = av_gettime();// time playback started
    double time_base_d = av_q2d(ffmpegInfo->pFormatCtx->streams[ffmpegInfo->audio_stream_indexs[0]]->time_base);

    int got_frame = 0;
    int countFrame = 0;
    while (decodec_flag){
        LOGE("decoding audio..., %d",(int)audioAVPackets.size());
        pthread_mutex_lock(&posixInfo->pthread_mutex);

        // consumer: block while the audio cache queue is empty
        while (audioAVPackets.size() <= 0){
            pthread_cond_wait(&posixInfo->pthread_cond,&posixInfo->pthread_mutex);
        }

        AVPacket *pPacket = audioAVPackets.front();
        audioAVPackets.pop();

        pthread_cond_broadcast(&posixInfo->pthread_cond);
        pthread_mutex_unlock(&posixInfo->pthread_mutex);

        // decode one frame of audio data
        avcodec_decode_audio4(ffmpegInfo->pAudioCodecCtx, pFrame, &got_frame, pPacket);
        if (got_frame > 0) {
            if (pPacket->pts != AV_NOPTS_VALUE){
                pFrame->pts = pPacket->pts;
            } else{
                AVRational tb = (AVRational){1, pFrame->sample_rate};
                if (pFrame->pts != AV_NOPTS_VALUE)
                    pFrame->pts = av_rescale_q(pFrame->pts, ffmpegInfo->pAudioCodecCtx->time_base, tb);
                else if (pFrame->pkt_pts != AV_NOPTS_VALUE)
                    pFrame->pts = av_rescale_q(pFrame->pkt_pts, av_codec_get_pkt_timebase(ffmpegInfo->pAudioCodecCtx), tb);
                else
                    pFrame->pts = pPacket->dts;
            }

            int64_t thisFramePTS = pFrame->pts;
            ffmpegInfo->audio_clock = (int64_t)(thisFramePTS * time_base_d * 1000000);// PTS * time_base = display time in seconds; *1000000 converts to microseconds

            // AudioTrack.write() blocks, so no extra delay is needed here:
            // feeding it at its natural pace already yields the reference audio clock
            bool sleepflag = false;
            if (sleepflag){
                // compute the reference audio time (needed when syncing audio to video)
                int64_t showTime = (int64_t)(startTime + ffmpegInfo->audio_clock);
                int64_t thisTime = av_gettime();// current time
                if (thisTime < showTime){ // playing too fast, slow down
                    av_usleep((uint)(showTime - thisTime));
                } else if(thisTime > showTime){// playing too slow, should speed up

                } else{// exactly on time; rarely reached

                }
            }

            // resample
            swr_convert(swrCtx, &out_buffer, MAX_AUDIO_FRAME_SIZE,(const uint8_t **) pFrame->data , pFrame->nb_samples);
            // get the size of the converted samples (a batch of samples is the audio analogue of a video frame)
            int out_buffer_size = av_samples_get_buffer_size(NULL, out_channel_nb,
                                                             pFrame->nb_samples, out_sample_fmt, 1);
//            LOGE("audio PTS:%lld, audio DTS:%lld, duration:%lld, pos:%lld",pPacket->pts,pPacket->dts,pPacket->duration,pPacket->pos);

            if (pDecodecListener != NULL &&  pDecodecListener->onDecodecAudio != NULL){
                pDecodecListener->onDecodecAudio(out_buffer,out_buffer_size);// invoke the callback to play the audio
            }

        }
        av_packet_unref(pPacket);
        av_free(pPacket);
    }

    av_free(pFrame);
    av_free(out_buffer);
    swr_free(&swrCtx);

    if (pDecodecListener != NULL &&  pDecodecListener->endDecodecAudio != NULL){
        pDecodecListener->endDecodecAudio(NULL);
    }
//    LOGE("音频解码结束");
    return NULL;
}

void startDecodec(){
    LOGE("startDecodec 开始解码")
    decodec_flag = true;
    pthread_create(&posixInfo->playVideo_id, NULL, run_video_decodec, NULL);
    pthread_create(&posixInfo->playAudio_id,NULL,run_audio_decodec,NULL);


    pthread_join(posixInfo->read_id,NULL);
    pthread_join(posixInfo->playVideo_id,NULL);
    pthread_join(posixInfo->playAudio_id,NULL);
}

void stopDecodec(){
    decodec_flag = false;
}


void setDecodecListener(DecodecListener* decodecListener){
    pDecodecListener = decodecListener;
}
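
To see the sync logic above with concrete (hypothetical) numbers: if the frame just drawn puts video_clock at 2,040,000 µs while the audio thread last set audio_clock to 2,000,000 µs, the video is 40,000 µs ahead, so run_video_decodec calls av_usleep(40000) (40 ms, roughly one frame at 25 fps) before taking the next packet; if the video clock is behind instead, it does not sleep at all and simply catches up by decoding at full speed.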


com_jamingx_ffmpegtest_FFmpegUtil.cpp:

#include "com_jamingx_ffmpegtest_FFmpegUtil.h"
#include <android/native_window_jni.h>
#include <android/native_window.h>
#include <android/log.h>
#include "ffmpegutil.h"
#include <stdlib.h>
#include <string.h>
#define LOGE(FORMAT,...) __android_log_print(ANDROID_LOG_ERROR,"TAG",FORMAT,##__VA_ARGS__);

typedef struct AudioTrack{
    jobject audiotrack;
    jclass audio_track_class;
    jmethodID play_id;
    jmethodID stop_id;
    jmethodID write_id;
} AudioTrack;

typedef struct JavaCallBack{
    JavaVM *javaVM;
    jobject thiz;
    jclass thizClass;
    jmethodID playing_id;
    jmethodID createAudioTrack_id;
} JavaCallBack;


JavaCallBack *javaCallBack;
ANativeWindow* nativeWindow;
AudioTrack* audioTrack;
DecodecListener* listener;



void java_callback_init(JNIEnv *env, jobject thiz){
    env->GetJavaVM(&javaCallBack ->javaVM);
    jclass clas = env -> GetObjectClass(thiz);

//    javaCallBack -> thiz = thiz;
    javaCallBack -> thiz = env->NewGlobalRef(thiz);
    javaCallBack -> thizClass = clas;
//    javaCallBack -> thizClass =
    javaCallBack -> playing_id = env->GetMethodID(clas,"playing","(J)V");
    javaCallBack -> createAudioTrack_id = env->GetMethodID(clas,"createAudioTrack","(II)Landroid/media/AudioTrack;");
}

/**
 * Create the AudioTrack in the Java layer
 * @param out_sample_rate sample rate
 * @param out_channel_nb number of channels
 */
void createAudioTrack(int32_t out_sample_rate,int32_t out_channel_nb){
    audioTrack = (AudioTrack*)malloc(sizeof(AudioTrack));
    // attach this native thread to the JVM to get a JNIEnv
    JNIEnv *env;
    (javaCallBack->javaVM)->AttachCurrentThread(&env,NULL);
    // call createAudioTrack() in the Java layer
    jobject at = env->CallObjectMethod(javaCallBack->thiz,javaCallBack->createAudioTrack_id,out_sample_rate, out_channel_nb);
    audioTrack->audiotrack = env->NewGlobalRef(at);
    // get the AudioTrack class
    audioTrack->audio_track_class = env->GetObjectClass(audioTrack->audiotrack);
    //AudioTrack.play
    audioTrack->play_id = env->GetMethodID(audioTrack->audio_track_class, "play", "()V");
    //AudioTrack.stop
    audioTrack->stop_id = env->GetMethodID(audioTrack->audio_track_class, "stop", "()V");
    //AudioTrack.write
    audioTrack->write_id = env->GetMethodID(audioTrack->audio_track_class, "write","([BII)I");

    javaCallBack->javaVM->DetachCurrentThread();
}

void destroyAudioTrack(){
    JNIEnv *env;
    (javaCallBack->javaVM)->AttachCurrentThread(&env,NULL);
    env->DeleteGlobalRef(audioTrack->audiotrack);
    javaCallBack->javaVM->DetachCurrentThread();
    free(audioTrack);
}

/**
 * Call audioTrack.play() in the Java layer
 */
void audioTrackPlay(){
    if (audioTrack != NULL){
        JNIEnv *env;
        javaCallBack->javaVM->AttachCurrentThread(&env,NULL);
        // call audioTrack.play()
        env->CallVoidMethod(audioTrack->audiotrack, audioTrack->play_id);
        javaCallBack->javaVM->DetachCurrentThread();
    }
}

/**
 * Call audioTrack.stop() in the Java layer
 */
void audioTrackStop(){
    if (audioTrack != NULL){
        JNIEnv *env;
        javaCallBack->javaVM->AttachCurrentThread(&env,NULL);
        // call audioTrack.stop()
        env->CallVoidMethod(audioTrack->audiotrack, audioTrack->stop_id);
        javaCallBack->javaVM->DetachCurrentThread();
    }
}

/**
 * Call audioTrack.write() in the Java layer
 * @param out_buffer
 * @param out_buffer_size
 */
void audioTrackWrite(uint8_t* out_buffer,int32_t out_buffer_size){
    if (audioTrack != NULL){
        JNIEnv *env;
        javaCallBack->javaVM->AttachCurrentThread(&env,NULL);
        // out_buffer data -> Java byte[]
        jbyteArray audio_sample_array = env->NewByteArray(out_buffer_size);
        jbyte *sample_bytep = env->GetByteArrayElements(audio_sample_array, NULL);
        // copy out_buffer into sample_bytep
        memcpy(sample_bytep, out_buffer, out_buffer_size);
        env->ReleaseByteArrayElements(audio_sample_array, sample_bytep, 0);
        // call audioTrack.write()
        env->CallIntMethod(audioTrack->audiotrack, audioTrack->write_id,
                           audio_sample_array, 0, out_buffer_size);
        // release the local reference
        env->DeleteLocalRef(audio_sample_array);
        javaCallBack->javaVM->DetachCurrentThread();
    }
}

/**
 * Call the Java-layer playing(long time) method
 * @param time
 */
void callTime(int64_t time){
    JNIEnv *env;
    javaCallBack->javaVM->AttachCurrentThread(&env,NULL);
    env->CallVoidMethod(javaCallBack->thiz,javaCallBack->playing_id,time);
    javaCallBack->javaVM->DetachCurrentThread();
}

/**
 * Callback: video decoding is about to start
 * @return
 */
void* startDecodecVideo(void*){

    return NULL;
}

/**
 * Callback: one video frame decoded
 * @param buf one frame of RGBA data
 * @param size length of buf
 * @param w frame width
 * @param h frame height
 * @param time display time of this frame, in seconds
 * @return
 */
void* onDecodecVideo(uint8_t* buf,uint32_t size,uint32_t w,uint32_t h,int64_t time){
    ANativeWindow_Buffer outBuffer;
    ANativeWindow_setBuffersGeometry(nativeWindow, w, h,WINDOW_FORMAT_RGBA_8888);
    ANativeWindow_lock(nativeWindow,&outBuffer,NULL);
    // copy one RGBA frame straight into the window buffer (assumes outBuffer.stride == w)
    memcpy(outBuffer.bits,buf,size);
    ANativeWindow_unlockAndPost(nativeWindow);
    callTime(time);
    return NULL;
}
}

/**
 * Callback: video decoding finished
 * @return
 */
void* endDecodecVideo(void*){

    return NULL;
}

/**
 * Callback: audio decoding is about to start
 * @param out_sample_rate sample rate
 * @param out_channel_nb number of channels
 * @return
 */
void* startDecodecAudio(int32_t out_sample_rate,int32_t out_channel_nb){
    createAudioTrack(out_sample_rate,out_channel_nb);// create the AudioTrack
    audioTrackPlay();// start the AudioTrack
    return NULL;
}

/**
 * Callback: one audio frame decoded
 * @param buf the decoded data
 * @param size length of buf
 * @return
 */
void* onDecodecAudio(uint8_t* buf,uint32_t size){
    // call audioTrack.write() in the Java layer
    audioTrackWrite(buf,size);
    return NULL;
}

/**
 * Callback: audio decoding finished
 * @return
 */
void* endDecodecAudio(void*){
    audioTrackStop();
    destroyAudioTrack();
    return NULL;
}



JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_init
        (JNIEnv *env, jobject thiz){
    javaCallBack = (JavaCallBack*)malloc(sizeof(JavaCallBack));
    java_callback_init(env,thiz);
    initFFmpeg();
    initPOSIXInfo();
}

JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_play
        (JNIEnv *env, jobject thiz,jstring jstr_path,jobject surface){
    const char* input_path = env->GetStringUTFChars(jstr_path,NULL);// Java String -> C char*
    initFFmpegInfo(input_path);
    env->ReleaseStringUTFChars(jstr_path,input_path);// release the UTF chars once the file has been opened
    nativeWindow = ANativeWindow_fromSurface(env,surface);
    listener = (DecodecListener*)malloc(sizeof(DecodecListener));
    listener ->startDecodecVideo = startDecodecVideo;
    listener ->onDecodecVideo = onDecodecVideo;
    listener ->endDecodecVideo = endDecodecVideo;
    listener ->startDecodecAudio = startDecodecAudio;
    listener ->onDecodecAudio = onDecodecAudio;
    listener ->endDecodecAudio = endDecodecAudio;
    read();
    setDecodecListener(listener);
    startDecodec();
}

JNIEXPORT void JNICALL Java_com_jamingx_ffmpegtest_FFmpegUtil_destroy
        (JNIEnv *env, jobject thiz){
    stopDecodec();
    env->DeleteGlobalRef(javaCallBack -> thiz);
    free(javaCallBack);
    javaCallBack = NULL;
    destroyFFmpegInfo();
    destroyPOSIXInfo();
}

At this point, the code is complete.

The key parts of the code are explained in the comments.

Video decoding/rendering and audio playback were covered in the earlier installments:

Learning FFmpeg on Android (3): Video Decoding + NDK Rendering

Learning FFmpeg on Android (4): Audio Decoding + AudioTrack Playback

The full project has been uploaded to CSDN: https://download.csdn.net/download/jamingx666/10397800

One last note: my own understanding of FFmpeg is still limited, so if you spot any bugs, please point them out.



Reposted from blog.csdn.net/Jamingx666/article/details/80224471