[Audio and Video Development from Scratch] Introduction and basic use of FFmpeg

Foreword

It's been a while since my last article, and I still haven't finished my coursework. I probably won't be able to update again before next Wednesday at the earliest, because I have my third-round interview with TikTok tonight, and Ali's HR interview is next Monday. I also haven't started my computer and circuits homework this weekend, and if I don't study for those I'm done for. Time is really tight, so please bear with me.

The content of this article references "Advanced Guide to Audio and Video Development: Practice Based on Android and iOS Platforms".

Table of contents

[1] [Audio and Video Development from Scratch] Necessary knowledge base for audio and video development

[2] [Audio and Video Development from Scratch] Mobile environment construction

[3] [Audio and Video Development from Scratch] Introduction and basic use of FFmpeg

FFmpeg environment construction

Materials list:
1. macOS 10.15.2
2. Homebrew (install with: /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)")
3. The ffmpeg dependencies (install with: brew install ffmpeg; the files land under /usr/local/Cellar)
4. The ffmpeg-4.2.2 source code (downloaded from the official site)
5. NDK r20

This gives you an environment for local experimentation, and it comes with plenty of samples, but it is not recommended for actual development.

I have tried it many times: the .a static libraries found online can be used directly in Xcode, but they cannot be integrated directly into Android Studio. That is why a detailed cross-compilation article is needed here.

Introduction to the FFmpeg modules

bin

This is where the command-line tools live: ffplay, ffmpeg, ffprobe, and so on.

// Assorted ffprobe commands
> ffprobe file                                   // show the file header information
> ffprobe -show_format file                      // show the container format, duration, file size, bit rate, etc.
> ffprobe -print_format json -show_streams file  // output the details of every stream as JSON
> ffprobe -show_frames file                      // show frame information
> ffprobe -show_packets file                     // show packet information
// ...

// ffplay
> ffplay file             // play the file
> ffplay file -loop 10    // loop playback 10 times
> ffplay file -ast 0      // play audio stream 0; if that stream does not exist, playback is silent
> ffplay file -vst 0      // play video stream 0; if that stream does not exist, the screen stays black
> ffplay file.pcm -f s16le -channels 2 -ar 44100  // play a raw PCM file; you must give the sample format (-f), channel count (-channels), and sample rate (-ar)
> ffplay -f rawvideo -pixel_format yuv420p -s 480x480 file.yuv (file.rgb)  // view raw video frames; note the size must be WxH (I could not get this to work with 480*480)
> ffplay file -sync audio // use the audio clock as the A/V sync master (the default)
> ffplay file -sync video // use the video clock as the A/V sync master
> ffplay file -sync ext   // use an external clock as the A/V sync master
> ffplay file -ss 50      // skip the first 50 seconds
// ...

// ffmpeg
// It has a great many parameters, presented as images elsewhere; we will meet them gradually in later use, so no demo here.

include -> 8 modules

This directory stores the header files for the compiled library files.

AVCodec:  used for encoding and decoding
AVDevice:  input and output devices
AVFilter:  audio and video filter library, providing audio and video effects processing
AVFormat:  file format and protocol library; encapsulates the Protocol layer and the Muxer/Demuxer layer
AVResample:  used for audio resampling (the book says this is only built by old versions and has been deprecated, yet my build is the latest and it is still here, which needs further investigation)
AVUtil:  the core utility library
PostProc:  used for post-processing; a module that must be enabled when using AVFilter
SwResample:  used for audio resampling; converts audio between different channel counts, sample formats, and sample rates
SWScale:  converts image formats, e.g. YUV -> RGB
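
To make the division of labor concrete, here is a minimal sketch (my own illustration, not from the book) of the two layers you touch first: the AVFormat layer opens and demuxes a container, and the AVCodec layer looks up a decoder for each stream. The file name input.mp4 is a placeholder.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main() {
    // AVFormat layer: open the container and read its stream headers.
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, "input.mp4", nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

    for (unsigned i = 0; i < fmt->nb_streams; i++) {
        AVCodecParameters *par = fmt->streams[i]->codecpar;
        // AVCodec layer: look up a decoder for each demuxed stream.
        const AVCodec *dec = avcodec_find_decoder(par->codec_id);
        printf("stream %u: type=%d codec=%s\n",
               i, (int)par->codec_type, dec ? dec->name : "unknown");
    }
    avformat_close_input(&fmt);
    return 0;
}

Calling av_dump_format(fmt, 0, "input.mp4", 0) prints a similar summary, which is roughly what ffprobe shows for the same file.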

lib

This directory stores the compiled library files, which are used during the linking phase.

share

This is essentially where the examples live; they explain how each FFmpeg tool is used and provide usage samples.

Use of FFmpeg

A lot has been covered so far, but we have not actually used any of it from Java code yet, so what we need to do here is test it from the Java side.

Step 1: import the libraries and reference them

Then, in CMakeLists.txt, import the .so libraries we need to load.

cmake_minimum_required(VERSION 3.4.1)

# Define variables
set(ffmpeg_lib_dir ${CMAKE_SOURCE_DIR}/../jniLibs/${ANDROID_ABI})
set(ffmpeg_head_dir ${CMAKE_SOURCE_DIR}/ffmpeg)

add_library( # Sets the name of the library.
        audioencoder
        SHARED
        # lame
        lame/bitstream.c lame/encoder.c lame/gain_analysis.c
        lame/lame.c lame/id3tag.c lame/mpglib_interface.c
        lame/newmdct.c lame/presets.c lame/psymodel.c
        lame/quantize.c lame/fft.c lame/quantize_pvt.c
        lame/reservoir.c lame/set_get.c lame/tables.c
        lame/takehiro.c lame/util.c lame/vbrquantize.c
        lame/VbrTag.c lame/version.c
        # mine
        audioencoder/audioencoder.cpp
        audioencoder/mp3_encoder.cpp)

# Add the ffmpeg-related .so libraries
add_library( avutil
        SHARED
        IMPORTED )
set_target_properties( avutil
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libavutil.so )

add_library( swresample
        SHARED
        IMPORTED )
set_target_properties( swresample
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libswresample.so )

add_library( avcodec
        SHARED
        IMPORTED )
set_target_properties( avcodec
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libavcodec.so )

add_library( avfilter
        SHARED
        IMPORTED)
set_target_properties( avfilter
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libavfilter.so )

add_library( swscale
        SHARED
        IMPORTED)
set_target_properties( swscale
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libswscale.so )

add_library( avformat
        SHARED
        IMPORTED)
set_target_properties( avformat
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libavformat.so )

add_library( avdevice
        SHARED
        IMPORTED)
set_target_properties( avdevice
        PROPERTIES IMPORTED_LOCATION
        ${ffmpeg_lib_dir}/libavdevice.so )


find_library( # Sets the name of the path variable.
        log-lib
        log)

# Include the header files
include_directories(${ffmpeg_head_dir}/include)

target_link_libraries( # Specifies the target library.
        audioencoder
        # ffmpeg
        avutil
        swresample
        avcodec
        avfilter
        swscale
        avformat
        avdevice

        ${log-lib})

Step 2: Use

The usage pattern is the same as before, but one thing needs attention: FFmpeg is written in C, while our code is compiled as C++, so we have to wrap the FFmpeg headers in an extern "C" block.
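
Here is a minimal sketch of what that wrapper looks like in practice; the package, class, and method names (com.example.ffmpeg.FFmpegBridge.getVersion) are hypothetical placeholders, not part of the demo.

#include <jni.h>

// The FFmpeg headers are C, so give them C linkage in a C++ translation unit.
extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
}

// Hypothetical JNI entry point that just reports the linked FFmpeg version,
// handy for verifying that the .so files from CMakeLists.txt actually link.
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_ffmpeg_FFmpegBridge_getVersion(JNIEnv *env, jobject /* this */) {
    return env->NewStringUTF(av_version_info());
}

On the Java side this pairs with System.loadLibrary calls for each library added in CMakeLists.txt, followed by the native method declaration.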

The source code here is copied directly from Brother Mao, as a demonstration.

I have put a sample of decoding with FFmpeg (mp3 -> pcm) on GitHub. Of course, there are many functions in it that I feel are non-essential; I mainly added explanations and went through the key points in detail. I also raised some questions in the source code. If you know how to solve them, or if you run into new problems of your own, please keep the discussion going through comments, email, and so on, and I will continue to explore.
Once I understand it well enough, I will start a separate project and develop it myself. I hope you will give this learning project a star, hehe.
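
For orientation, here is a minimal sketch of the core of such an mp3 -> pcm decode loop under the FFmpeg 4.x send/receive API. This is my own illustration, not the sample's actual code; the file names are placeholders, and error handling is reduced to early returns.

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main() {
    AVFormatContext *fmt = nullptr;
    if (avformat_open_input(&fmt, "input.mp3", nullptr, nullptr) < 0) return 1;
    if (avformat_find_stream_info(fmt, nullptr) < 0) return 1;

    // Find the audio stream and open a decoder for it.
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_AUDIO, -1, -1, nullptr, 0);
    if (idx < 0) return 1;
    AVCodecParameters *par = fmt->streams[idx]->codecpar;
    AVCodecContext *ctx = avcodec_alloc_context3(avcodec_find_decoder(par->codec_id));
    avcodec_parameters_to_context(ctx, par);
    if (avcodec_open2(ctx, nullptr, nullptr) < 0) return 1;

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    FILE *out = fopen("output.pcm", "wb");

    // Demux packets, feed them to the decoder, and interleave the planar
    // samples (mp3 decodes to planar float) into the output file.
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == idx && avcodec_send_packet(ctx, pkt) == 0) {
            while (avcodec_receive_frame(ctx, frame) == 0) {
                int bytes = av_get_bytes_per_sample(
                        static_cast<AVSampleFormat>(frame->format));
                for (int i = 0; i < frame->nb_samples; i++)
                    for (int ch = 0; ch < ctx->channels; ch++)
                        fwrite(frame->data[ch] + i * bytes, 1, bytes, out);
            }
        }
        av_packet_unref(pkt);
    }

    fclose(out);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

The resulting output.pcm is interleaved float, so it can be checked with the ffplay PCM command shown earlier, using -f f32le instead of -f s16le.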
 

The above are my learning results. If there is anything I have not thought through, or there are mistakes in the article, please point them out and share with me.

Original text: [Audio and Video Development from Scratch] Introduction and basic use of FFmpeg - Nuggets

