Android FFmpeg Examples (3): Adding Watermarks, Filters, and Other Effects to Video with AVFilter (Demo Included)

I haven't blogged in over a month, partly because work has been busy and partly because I was glued to the World Cup. France won the championship, and Croatia won the world's respect. On the individual side, Real Madrid midfielder Modrić took the Golden Ball, well deserved. I used to know nothing about football; I only knew how impressive the strikers were because they scored. What I didn't realize is that winning a match also takes the defenders' work at the back and the midfield orchestrating the attack. It's much like our projects: building a framework that is extensible and low in redundant code matters enormously for later maintenance and growth. But I digress. Back to today's topic: this post shows how to use FFmpeg on Android to add watermarks and filter effects to a video file. I currently work in audio/video development and know a bit about custom cameras, FFmpeg, and OpenGL; feel free to follow my blog, where I will keep posting on these topics. If anything here is poorly written, please leave me a comment and I will keep improving. Thanks.

In the previous articles we learned how to encode and decode audio and video with FFmpeg. This time the focus is libavfilter.

FFmpeg's libavfilter library applies effects to audio and video. Its key functions are:

avfilter_register_all(): registers all AVFilters
avfilter_graph_alloc(): allocates a FilterGraph
avfilter_graph_create_filter(): creates a filter and adds it to the FilterGraph
avfilter_graph_parse_ptr(): adds a graph described by a string to the FilterGraph
avfilter_graph_config(): checks the FilterGraph's configuration
av_buffersrc_add_frame(): feeds an AVFrame into the FilterGraph
av_buffersink_get_frame(): pulls an AVFrame out of the FilterGraph
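Taken together, these calls follow a fixed pattern: build and validate the graph once, then push and pull frames inside the decode loop. A rough sketch (pseudocode, error handling omitted):

```
avfilter_register_all()                              // once, at startup
filter_graph = avfilter_graph_alloc()
create "buffer" source filter  -> buffersrc_ctx      // graph entry
create "buffersink" filter     -> buffersink_ctx     // graph exit
avfilter_graph_parse_ptr(filter_graph, filters_descr, ...)
avfilter_graph_config(filter_graph, NULL)            // validate links and formats

for each decoded AVFrame:
    av_buffersrc_add_frame(buffersrc_ctx, frame)     // feed the frame in
    av_buffersink_get_frame(buffersink_ctx, frame)   // pull the filtered frame out
```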

Today's demo program offers the following effects.

1. The black-and-white effect is configured as follows:
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr = "drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
2. The watermark effect is configured as follows:
const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";

For more effects, see the official FFmpeg filters documentation.
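Filter strings can also chain several filters with commas. The two combinations below are illustrative extras of my own, not part of the demo's preset list, written in the same const char * form used above:

```
//mirror the image, then turn it black-and-white
const char *filters_descr = "hflip,lutyuv='u=128:v=128'";
//crop to the center two-thirds, then draw a box on the cropped frame
const char *filters_descr = "crop=2/3*in_w:2/3*in_h,drawbox=x=50:y=50:w=200:h=200:color=pink@0.5";
```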

Now let's look at the implementation.

1. In the Activity, initialize a SurfaceView and declare a native function that passes the SurfaceView's Surface down to native code (the native function uses the FFmpeg libraries to process the video data and renders the result onto the Surface for display):

SurfaceView surfaceView = (SurfaceView) findViewById(R.id.surface_view);
surfaceHolder = surfaceView.getHolder();
surfaceHolder.addCallback(this);
...
public native int play(Object surface);

Note: if you have not yet set up FFmpeg in Android Studio, see "Integrating the FFmpeg audio/video framework in Android Studio via JNI (CMake)".

2. Call play() in the surfaceCreated() callback:

@Override
    public void surfaceCreated(SurfaceHolder holder) {
        new Thread(new Runnable() {
            @Override
            public void run() {
                play(surfaceHolder.getSurface());
            }
        }).start();
    }

The key question is what the JNI-level play() function does. First, on top of the play() function from the previous article, we add the headers that libavfilter's effects require:

//added for AVfilter start
#include <libavfilter/avfiltergraph.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
//added for AVfilter end

Then declare the structures the filter graph needs:

//added for AVfilter start
const char *filters_descr = "lutyuv='u=128:v=128'";
//const char *filters_descr = "hflip";
//const char *filters_descr = "hue='h=60:s=-3'";
//const char *filters_descr = "crop=2/3*in_w:2/3*in_h";
//const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";
//const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//const char *filters_descr = "drawgrid=width=100:height=100:thickness=4:color=pink@0.9";
AVFilterContext *buffersink_ctx;
AVFilterContext *buffersrc_ctx;
AVFilterGraph *filter_graph;
//added for AVfilter end

Now we can initialize the AVFilter graph. There is quite a bit of code; it reads best with the key functions listed above in mind.


//added for AVfilter start----------init AVfilter--------------------------
char args[512];
int ret;
AVFilter *buffersrc = avfilter_get_by_name("buffer");
AVFilter *buffersink = avfilter_get_by_name("buffersink"); //newer FFmpeg versions require "buffersink"
AVFilterInOut *outputs = avfilter_inout_alloc();
AVFilterInOut *inputs = avfilter_inout_alloc();
enum AVPixelFormat pix_fmts[] = { AV_PIX_FMT_YUV420P, AV_PIX_FMT_NONE };
AVBufferSinkParams *buffersink_params;
filter_graph = avfilter_graph_alloc();
/* buffer video source: the decoded frames from the decoder will be inserted here. */
snprintf(args, sizeof(args),
        "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
        pCodecCtx->width,
        pCodecCtx->height,
        pCodecCtx->pix_fmt,
        pCodecCtx->time_base.num,
        pCodecCtx->time_base.den,
        pCodecCtx->sample_aspect_ratio.num,
        pCodecCtx->sample_aspect_ratio.den);
ret = avfilter_graph_create_filter(&buffersrc_ctx, buffersrc, "in", args, NULL, filter_graph);
if (ret < 0) {
    LOGD("Cannot create buffer source\n");
    return ret;
}
/* buffer video sink: to terminate the filter chain. */
buffersink_params = av_buffersink_params_alloc();
buffersink_params->pixel_fmts = pix_fmts;
ret = avfilter_graph_create_filter(&buffersink_ctx, buffersink, "out", NULL, buffersink_params, filter_graph);
av_free(buffersink_params);
if (ret < 0) {
    LOGD("Cannot create buffer sink\n");
    return ret;
}
/* Endpoints for the filter graph. */
outputs->name = av_strdup("in");
outputs->filter_ctx = buffersrc_ctx;
outputs->pad_idx = 0;
outputs->next = NULL;

inputs->name = av_strdup("out");
inputs->filter_ctx = buffersink_ctx;
inputs->pad_idx = 0;
inputs->next = NULL;

//avfilter_link(buffersrc_ctx, 0, buffersink_ctx, 0);
if ((ret = avfilter_graph_parse_ptr(filter_graph, filters_descr, &inputs, &outputs, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_parse_ptr\n");
    return ret;
}
if ((ret = avfilter_graph_config(filter_graph, NULL)) < 0) {
    LOGD("Cannot avfilter_graph_config\n");
    return ret;
}
//added for AVfilter end------------init AVfilter-----------------------------

With initialization done, we run every frame the decoder produces through the filter graph:

//added for AVfilter start
pFrame->pts = av_frame_get_best_effort_timestamp(pFrame);
/* push the decoded frame into the filtergraph */
if (av_buffersrc_add_frame(buffersrc_ctx, pFrame) < 0) {
    LOGD("Could not av_buffersrc_add_frame");
    break;
}
/* pull the filtered frame back out */
ret = av_buffersink_get_frame(buffersink_ctx, pFrame);
if (ret < 0) {
    LOGD("Could not av_buffersink_get_frame");
    break;
}
//added for AVfilter end

The frame that comes back already carries the effect. Finally, remember to free the graph:

avfilter_graph_free(&filter_graph); //added for AVfilter

The original video:
(screenshot)
Let's look at a few effect shots before the source code.

const char *filters_descr = "lutyuv='u=128:v=128'";

The effect:
(screenshot)

const char *filters_descr = "hue='h=60:s=-3'";

The effect:
(screenshot)

const char *filters_descr = "drawbox=x=200:y=200:w=300:h=300:color=pink@0.5";

The effect:
(screenshot)

const char *filters_descr = "drawgrid=width=100:height=100:thickness=4:color=pink@0.9";

The effect:
(screenshot)

The watermark filter is configured as follows:

const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";

Here movie is the path of the watermark image file, and overlay's two parameters are the x and y coordinates of the watermark. The result:
(screenshot)
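overlay's two parameters are actually x:y expressions that may reference the main and overlay dimensions (main_w, main_h, overlay_w, overlay_h), so the watermark can be pinned to any corner. A few illustrative variants of the demo's filter string:

```
//top-left corner, 5 px margin (the demo's setting)
const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=5:5[out]";
//top-right corner
const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=main_w-overlay_w-5:5[out]";
//bottom-right corner
const char *filters_descr = "movie=/storage/emulated/0/ws.jpg[wm];[in][wm]overlay=main_w-overlay_w-5:main_h-overlay_h-5[out]";
```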

Source code: github, in the ffmpegandroidavfilter module. If you like it, please give it a star. Thanks.


Reposted from blog.csdn.net/hjj378315764/article/details/81322004