ffmpeg command parsing

Introduction
FFmpeg is a particularly powerful open-source library designed to handle audio and video. You can either use its API to process audio and video, or use the tools it provides, such as ffmpeg, ffplay, and ffprobe, to edit your audio and video files.
This article briefly describes the FFmpeg library's basic directory structure and the role of each module, and then explains in detail how to use the ffmpeg tool in daily work to process audio and video files.
FFmpeg modules and their roles
• libavcodec: provides implementations of a wide range of codecs.
• libavformat: implements streaming protocols, container formats, and the I/O to access them.
• libavutil: includes hash functions, encryption/decryption primitives, and assorted utility functions.
• libavfilter: offers a variety of audio and video filters.
• libavdevice: provides interfaces for accessing capture and playback devices.
• libswresample: implements audio mixing and resampling.
• libswscale: implements color-space conversion and scaling.
FFMPEG basic concepts
Before explaining the FFmpeg commands, we need to introduce some basic concepts of audio and video formats.
• Audio/video stream
In the audio/video field, we call each independent audio or video signal a "stream". Many of us watched Hong Kong films on VCD as children and could choose between a Cantonese and a Mandarin soundtrack; in fact the video file on the disc stores two audio streams, and the user chooses which one to play.
• Container
We usually call file formats such as MP4, FLV, and MOV containers. That is, these common file formats can store multiple audio and video streams. An MP4 file, for example, can store one video stream, multiple audio streams, and multiple subtitle streams.
• Channel
A channel is an audio concept. A single audio stream may be mono, dual-channel, or stereo.
FFmpeg commands
By intended use, FFmpeg commands can be divided into the following categories:
• Basic information queries
• Recording
• Demuxing / muxing
• Raw data
• Filters
• Cutting and merging
• Image / video conversion
• Live streaming
Apart from the basic information query commands, the other commands all process audio and video according to the flow shown in the figure below.
(figure: the ffmpeg processing pipeline — demuxer, decoder, filters, encoder, muxer)
The encoded packets are then passed to the decoder (except for streams selected for stream copy; see below). The decoder produces uncompressed frames (raw video / PCM audio / ...), which can be processed further by filters (see the next section). After filtering, the frames are passed to the encoder, which outputs new encoded packets. Finally these are passed to the muxer, which writes the encoded packets to the output file.
By default, ffmpeg includes only one stream of each type (video, audio, subtitle) present in the inputs and adds it to each output file. It picks the "best" of each according to the following criteria: for video, the stream with the highest resolution; for audio, the stream with the most channels; for subtitles, the first subtitle stream. When several streams of the same type tie, the stream with the lowest index is selected.
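The default selection rules above can be sketched in Python; the `Stream` class and its field names here are hypothetical stand-ins for illustration, not part of any FFmpeg API:

```python
from dataclasses import dataclass

@dataclass
class Stream:
    index: int        # position of the stream within the input
    kind: str         # "video", "audio", or "subtitle"
    width: int = 0
    height: int = 0
    channels: int = 0

def pick_default(streams, kind):
    """Mimic ffmpeg's default selection: best video = highest
    resolution, best audio = most channels, best subtitle = first;
    ties are broken by the lowest stream index."""
    candidates = [s for s in streams if s.kind == kind]
    if not candidates:
        return None
    if kind == "video":
        key = lambda s: (-(s.width * s.height), s.index)
    elif kind == "audio":
        key = lambda s: (-s.channels, s.index)
    else:  # subtitle: first one wins
        key = lambda s: s.index
    return min(candidates, key=key)

streams = [
    Stream(0, "video", width=1280, height=720),
    Stream(1, "video", width=1920, height=1080),
    Stream(2, "audio", channels=2),
    Stream(3, "audio", channels=6),
]
print(pick_default(streams, "video").index)  # 1 (1080p beats 720p)
print(pick_default(streams, "audio").index)  # 3 (6 channels beat stereo)
```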
You can disable some of these defaults with the -vn / -an / -sn / -dn options. For full manual control, use the -map option, which disables the default selection just described.
Below we explain these commands in detail.
Basic information query commands
FFmpeg can query basic information with the following parameters. For example, to check which filters your build of FFmpeg supports, run ffmpeg -filters. The parameters are as follows:
Parameter Description
-version show version information.
-formats show available formats (including devices).
-demuxers show available demuxers.
-muxers show available muxers.
-devices show available devices.
-codecs show all codecs known to libavcodec.
-decoders show available decoders.
-encoders show all available encoders.
-bsfs show available bitstream filters.
-protocols show available protocols.
-filters show available libavfilter filters.
-pix_fmts show available pixel formats.
-sample_fmts show available sample formats.
-layouts show standard channel names and channel layouts.
-colors show recognized color names.
Next come the command format and the parameters used when processing audio and video with FFmpeg.
The basic command format and parameters
The basic command format of ffmpeg is:
ffmpeg [global_options] {[input_file_options] -i input_url} ...
       {[output_file_options] output_url} ...

ffmpeg reads from any number of input "files" specified by the -i option (these can be regular files, pipes, network streams, capture devices, etc.) and writes to any number of output "files".
In principle, each input/output "file" can contain any number of streams of different types (video/audio/subtitle/attachment/data). The allowed number and/or types of streams are limited by the container format. Which input streams go to which outputs is chosen automatically or with the -map option.
To refer to an input file in options, use its index (starting from 0): the first input file is 0, the second is 1, and so on. Similarly, streams within a file are referenced by index; for example, 2:3 refers to the fourth stream of the third input file.
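The index arithmetic can be illustrated with a small parser. `parse_stream_specifier` is a hypothetical helper for illustration only; real `-map` syntax also accepts stream-type qualifiers such as `0:a:1`, which this sketch does not handle:

```python
def parse_stream_specifier(spec):
    """Split an 'input_index:stream_index' specifier such as '2:3'
    into zero-based (file, stream) indices. A bare number refers to
    every stream of that input file."""
    parts = spec.split(":")
    if len(parts) == 1:
        return (int(parts[0]), None)   # whole input file
    return (int(parts[0]), int(parts[1]))

print(parse_stream_specifier("2:3"))  # (2, 3): 4th stream of the 3rd input
print(parse_stream_specifier("0"))    # (0, None): all streams of the 1st input
```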
These are the commands most frequently used for processing audio and video with FFmpeg. Here are some common parameters:
Main parameters
Parameter Description
-f fmt (input/output) force the input or output file format. The format is normally auto-detected for input files and guessed from the file extension for output files, so this option is not needed in most cases.
-i url (input) input file URL.
-y (global) overwrite output files without asking.
-n (global) do not overwrite output files; exit immediately if a specified output file already exists.
-c[:stream_specifier] codec (input/output, per-stream) select a decoder (when used before an input file) or an encoder (when used before an output file) for one or more streams. codec is the name of a decoder/encoder, or copy (output only) to indicate that the stream is not to be re-encoded. Example: ffmpeg -i INPUT -map 0 -c:v libx264 -c:a copy OUTPUT
-codec[:stream_specifier] codec (input/output, per-stream) same as -c.
-t duration (input/output) when used as an input option (before -i), limit the duration of data read from the input file. When used as an output option (before the output url), stop writing the output once its duration reaches duration.
-ss position (input/output) when used as an input option (before -i), seek to position in the input file. Note that in most formats seeking is not exact, so ffmpeg seeks to the nearest seek point before position. When transcoding with -accurate_seek enabled (the default), the extra segment between the seek point and position is decoded and discarded. When stream copying or when -noaccurate_seek is used, it is preserved. When used as an output option (before the output url), decoded input is discarded until the timestamps reach position.
-frames[:stream_specifier] framecount (output, per-stream) stop writing to the stream after framecount frames.
-filter[:stream_specifier] filtergraph (output, per-stream) create the filtergraph specified by filtergraph and use it to filter the stream. filtergraph must have a single input and a single output of the same type as the stream. In the filtergraph, the input is associated with the label in and the output with the label out. See the ffmpeg-filters manual for more about filtergraph syntax.
Video Parameters
Parameter Description
-vframes num (output) set the number of video frames to output. This is an outdated alias for -frames:v, which should be used instead.
-r[:stream_specifier] fps (input/output, per-stream) set the frame rate (a Hz value, fraction, or abbreviation). As an input option, ignore any timestamps stored in the file and generate new timestamps at this rate. This differs from the -framerate option (in older versions of FFmpeg the two were the same); if in doubt, use -framerate instead of -r for input options. As an output option, duplicate or drop frames to achieve a constant output frame rate of fps.
-s[:stream_specifier] size (input/output, per-stream) set the frame size. As an input option, this is a shortcut for the video_size private option, recognized by demuxers whose frame size is not stored in the file. As an output option, it inserts the scale video filter at the end of the corresponding filtergraph; insert the scale filter yourself if you need it at the beginning or elsewhere. The format is 'wxh' (default: same as source).
-aspect[:stream_specifier] aspect (output, per-stream) set the video display aspect ratio. aspect can be a floating-point number string, or a num:den string where num and den are the numerator and denominator of the aspect ratio. For example, "4:3", "16:9", "1.3333", and "1.7777" are all valid values. If used together with -vcodec copy, it affects the aspect ratio stored at container level, but not the aspect ratio stored in the coded frames, if one is present.
-vn (output) disable video recording.
-vcodec codec (output) set the video codec. This is an alias for -codec:v.
-vf filtergraph (output) create the filtergraph specified by filtergraph and use it to filter the stream.
Audio parameters
Parameter Description
-aframes num (output) set the number of audio frames to output. This is an outdated alias for -frames:a.
-ar[:stream_specifier] freq (input/output, per-stream) set the audio sampling frequency. For output streams it defaults to the frequency of the corresponding input stream. For input streams this option applies only to audio capture devices and raw demuxers, and is mapped to the corresponding demuxer option.
-ac[:stream_specifier] channels (input/output, per-stream) set the number of audio channels. For output streams it defaults to the number of input audio channels. For input streams this option applies only to audio capture devices and raw demuxers, and is mapped to the corresponding demuxer option.
-an (output) disable audio recording.
-acodec codec (input/output) set the audio codec. This is an alias for -codec:a.
-sample_fmt[:stream_specifier] sample_fmt (output, per-stream) set the audio sample format. Use -sample_fmts to get a list of supported sample formats.
-af filtergraph (output) create the filtergraph specified by filtergraph and use it to filter the stream.
With these basics understood, let's look at what FFmpeg can actually do.
Recording
First, check which devices are available on a Mac with the following command:
ffmpeg -f avfoundation -list_devices true -i ""
Record the screen
ffmpeg -f avfoundation -i 1 -r 30 out.yuv
• -f specifies avfoundation as the capture source.
• -i specifies where to capture from; it is a device index. On my Mac, 1 is the desktop (you can look up device indices with the command above).
• -r specifies the frame rate. The official documentation says -r and -framerate have the same effect, but actual testing shows they differ: -framerate limits the input, while -r limits the output.
Note that the desktop input does not need a frame-rate limit, so there is no point restricting the desktop frame rate; in fact restricting it has no effect.
Record the screen with sound
ffmpeg -f avfoundation -i 1:0 -r 29.97 -c:v libx264 -crf 0 -c:a libfdk_aac -profile:a aac_he_v2 -b:a 32k out.flv
• In -i 1:0, the "1" before the colon is the screen's device index, and the "0" after the colon is the audio device's index.
• -c:v is the same as -vcodec; it sets the video encoder. c is short for codec, v for video.
• -crf is an x264 parameter; 0 means lossless compression.
• -c:a is the same as -acodec; it sets the audio encoder.
• -profile:a is an fdk_aac parameter; aac_he_v2 means the data is compressed with AAC-HE v2.
• -b:a specifies the audio bitrate. b is short for bitrate, a for audio.
Record video
ffmpeg -framerate 30 -f avfoundation -i 0 out.mp4
• -framerate limits the capture frame rate of the video. It must be set as prompted; omitting it causes an error.
• -f specifies avfoundation as the capture source.
• -i specifies the index of the video device.
Video + Audio
ffmpeg -framerate 30 -f avfoundation -i 0: 0 out.mp4
Record audio
ffmpeg -f avfoundation -i :0 out.wav
Record raw audio data
ffmpeg -f avfoundation -i :0 -ar 44100 -f s16le out.pcm
Demuxing / muxing
Stream copy mode is selected by passing the copy parameter to the -codec option. It makes ffmpeg omit the decoding and encoding steps for the specified stream, so only demuxing and muxing are performed. This is useful for changing the container format or modifying container-level metadata. In this case, the figure above reduces to:
(figure: the demuxer feeding encoded packets directly to the muxer, with no decode/encode step)

Since there is no decoding or encoding, stream copy is very fast and there is no quality loss. However, it may not work in some cases because of many factors. Applying filters is also obviously impossible, because filters operate on uncompressed data.
Extract the audio stream
ffmpeg -i input.mp4 -acodec copy -vn out.aac
• -acodec: specifies the audio codec; copy means copy only, without transcoding.
• -vn: v stands for video, n for no; together they mean no video.
Extract the video stream
ffmpeg -i input.mp4 -vcodec copy -an out.h264
• -vcodec: specifies the video codec; copy means copy only, without transcoding.
• -an: a stands for audio, n for no; together they mean no audio.
Change the container format
ffmpeg -i out.mp4 -vcodec copy -acodec copy out.flv
The command above copies the audio and video directly, simply repackaging the MP4 into the FLV container format.
Merge audio and video
ffmpeg -i out.h264 -i out.aac -vcodec copy -acodec copy out.mp4
Raw data
Extract YUV data
ffmpeg -i input.mp4 -an -c:v rawvideo -pixel_format yuv420p out.yuv
ffplay -s wxh out.yuv
• -c:v rawvideo specifies converting the video to raw data.
• -pixel_format yuv420p specifies converting to the YUV420P format.
• In the ffplay command, replace wxh with the video's actual width and height.
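Raw YUV has no headers, which is why ffplay must be told the frame size; the byte math behind this can be sketched as follows (the helper names are hypothetical, not ffmpeg APIs):

```python
def yuv420p_frame_bytes(width, height):
    """yuv420p stores a full-resolution Y plane plus quarter-resolution
    U and V planes: w*h + 2 * (w/2 * h/2) = w*h*3/2 bytes per frame."""
    return width * height * 3 // 2

def yuv420p_frame_count(file_bytes, width, height):
    """Number of whole frames contained in a raw .yuv file."""
    return file_bytes // yuv420p_frame_bytes(width, height)

print(yuv420p_frame_bytes(320, 240))             # 115200
print(yuv420p_frame_count(11_520_000, 320, 240)) # 100
```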
Convert YUV to H264
ffmpeg -f rawvideo -pix_fmt yuv420p -s 320x240 -r 30 -i out.yuv -c:v libx264 -f rawvideo out.h264
Extract PCM data
ffmpeg -i out.mp4 -vn -ar 44100 -ac 2 -f s16le out.pcm
ffplay -ar 44100 -ac 2 -f s16le -i out.pcm
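Like raw YUV, s16le PCM carries no header, which is why the ffplay command must repeat the sample rate, channel count, and sample format; the same parameters let you compute the duration from the file size alone (hypothetical helper):

```python
def pcm_duration_seconds(file_bytes, sample_rate=44100, channels=2,
                         bytes_per_sample=2):
    """Headerless s16le PCM duration:
    size / (rate * channels * bytes-per-sample)."""
    return file_bytes / (sample_rate * channels * bytes_per_sample)

# 10 seconds of 44.1 kHz stereo s16le occupies 1,764,000 bytes:
print(pcm_duration_seconds(1_764_000))  # 10.0
```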
Convert PCM to WAV
ffmpeg -f s16be -ar 8000 -ac 2 -acodec pcm_s16be -i input.raw output.wav
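What this command does — wrap headerless PCM in a RIFF/WAV container — can also be sketched with Python's standard wave module. The filenames and defaults here are assumptions, and note one difference from the ffmpeg example: wave writes little-endian (pcm_s16le), while the command above reads big-endian (s16be) input.

```python
import wave

def pcm_to_wav(pcm_path, wav_path, sample_rate=8000, channels=2,
               sample_width=2):
    """Wrap raw little-endian PCM samples in a WAV container.
    Big-endian input (like the s16be example above) would need
    byte-swapping before being written."""
    with open(pcm_path, "rb") as f:
        pcm = f.read()
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sample_width)   # 2 bytes = 16-bit samples
        w.setframerate(sample_rate)
        w.writeframes(pcm)
```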
Filters
Before encoding, ffmpeg can process raw audio and video frames using filters from the libavfilter library. Several chained filters form a filtergraph. ffmpeg distinguishes two kinds of filtergraphs: simple and complex.
Simple filtergraphs
Simple filtergraphs have exactly one input and one output, both of the same type. In the figure above, they can be represented by inserting an additional step between decoding and encoding:

(figure: a simple filtergraph inserted between the decoder and the encoder)
Simple filtergraphs are configured with the per-stream -filter option (with -vf and -af as aliases for video and audio respectively). A simple video filtergraph can look like this, for example:
(figure: an example of a simple video filtergraph)

Note that some filters change frame properties but not frame content. For example, the fps filter in the example above changes the number of frames but does not touch the frame content. Another example is the setpts filter, which only sets timestamps and otherwise passes frames through unchanged.
Complex filtergraphs
Complex filtergraphs are those that cannot be described as a simple linear processing chain applied to one stream. This is the case, for example, when the graph has more than one input and/or output, or when the output stream type differs from the input. They can be represented by the following figure:

(figure: a complex filtergraph with multiple inputs and outputs)

Complex filtergraphs are configured with the -filter_complex option. Note that this option is global, since a complex filtergraph by nature cannot be unambiguously associated with a single stream or file.
The -lavfi option is equivalent to -filter_complex.
A trivial example of a complex filtergraph is the overlay filter, which has two video inputs and one video output and shows one video overlaid on top of the other. Its audio counterpart is the amix filter.
Add a watermark
ffmpeg -i out.mp4 -vf "movie=logo.png,scale=64:48[watermask];[in][watermask] overlay=30:10 [out]" water.mp4
• In -vf, movie specifies the logo file, scale specifies the logo size, and overlay specifies where the logo is placed.
Remove a watermark
First use ffplay to locate the logo to be removed:
ffplay -i test.flv -vf delogo=x=806:y=20:w=70:h=80:show=1
Then remove it with the delogo filter:
ffmpeg -i test.flv -vf delogo=x=806:y=20:w=70:h=80 output.flv
Halve the video size
ffmpeg -i out.mp4 -vf scale=iw/2:-1 scale.mp4
• -vf scale specifies the simple scale filter. In iw/2:-1, iw refers to the input video's width (as an integer), and -1 means the height changes along with the width.
Crop a video
ffmpeg -i VR.mov -vf crop=in_w-200:in_h-200 -c:v libx264 -c:a copy -video_size 1280x720 vr_new.mp4
crop syntax: crop=out_w:out_h:x:y
• out_w: output width; in_w can be used to refer to the input video's width.
• out_h: output height; in_h can be used to refer to the input video's height.
• x: X coordinate
• y: Y coordinate
If x and y are set to 0, cropping starts from the top-left corner. If they are omitted, cropping is centered.
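The centered default can be spelled out as a one-line computation (hypothetical helper, matching the crop filter's documented defaults):

```python
def default_crop_origin(in_w, in_h, out_w, out_h):
    """When x and y are omitted, crop centers the output rectangle:
    x = (in_w - out_w) / 2, y = (in_h - out_h) / 2."""
    return ((in_w - out_w) // 2, (in_h - out_h) // 2)

print(default_crop_origin(1920, 1080, 1280, 720))  # (320, 180)
```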
Change playback speed
ffmpeg -i out.mp4 -filter_complex "[0:v]setpts=0.5*PTS[v];[0:a]atempo=2.0[a]" -map "[v]" -map "[a]" speed2.0.mp4
• -filter_complex applies a complex filtergraph. [0:v] means the video of the first file (file index 0) is used as input. setpts=0.5*PTS multiplies each video frame's pts timestamp by 0.5, i.e. halves it. [v] is an alias for the output. The audio works the same way and is not detailed here.
• -map can be used to handle complex outputs: it can send several specified streams to one output file, or output to multiple files. "[v]" uses the complex filter's output alias as one stream of the output file. The -map usage above sends the complex filter's video and audio outputs to the specified file.
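The timestamp math, and the fact that atempo only accepts factors between 0.5 and 2.0 (so larger speed changes must be chained), can be sketched like this (hypothetical helpers):

```python
def scaled_pts(pts_list, factor):
    """setpts=0.5*PTS halves every timestamp, so playback runs 2x."""
    return [pts * factor for pts in pts_list]

def atempo_chain(speed):
    """Build an atempo filter string; factors outside [0.5, 2.0] are
    split into stages, e.g. 4x -> atempo=2.0,atempo=2.0."""
    stages = []
    while speed > 2.0:
        stages.append(2.0)
        speed /= 2.0
    while speed < 0.5:
        stages.append(0.5)
        speed /= 0.5
    stages.append(round(speed, 6))
    return ",".join(f"atempo={s}" for s in stages)

print(scaled_pts([0, 40, 80], 0.5))  # [0.0, 20.0, 40.0]
print(atempo_chain(4.0))             # atempo=2.0,atempo=2.0
```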
Mirrored video
ffmpeg -i out.mp4 -filter_complex "[0:v]pad=w=2*iw[a];[0:v]hflip[b];[a][b]overlay=x=w" duicheng.mp4
• hflip flips horizontally.
Use vflip instead if you want a vertical flip.
Picture in picture
ffmpeg -i out.mp4 -i out1.mp4 -filter_complex "[1:v]scale=w=176:h=144:force_original_aspect_ratio=decrease[ckout];[0:v][ckout]overlay=x=W-w-10:y=0[out]" -map "[out]" -movflags faststart new.mp4
Record picture in picture
ffmpeg -f avfoundation -i "1" -framerate 30 -f avfoundation -i "0:0" \
  -r 30 -c:v libx264 -preset ultrafast \
  -c:a libfdk_aac -profile:a aac_he_v2 -ar 44100 -ac 2 \
  -filter_complex "[1:v]scale=w=176:h=144:force_original_aspect_ratio=decrease[a];[0:v][a]overlay=x=W-w-10:y=0[out]" \
  -map "[out]" -movflags faststart -map 1:a b.mp4
Tile multiple videos
ffmpeg -f avfoundation -i "1" -framerate 30 -f avfoundation -i "0:0" -r 30 -c:v libx264 -preset ultrafast -c:a libfdk_aac -profile:a aac_he_v2 -ar 44100 -ac 2 -filter_complex "[0:v]scale=320:240[a];[a]pad=640:240[b];[b][1:v]overlay=320:0[out]" -map "[out]" -movflags faststart -map 1:a c.mp4
Concatenating and trimming audio/video
Trimming
ffmpeg -i out.mp4 -ss 00:00:00 -t 10 out1.mp4
• -ss specifies the start time of the cut, with one-second precision.
• -t the duration of the clip after cutting.
Concatenating
First create an inputs.txt file with the following contents:
file '1.flv'
file '2.flv'
file '3.flv'
Then run the following command:
ffmpeg -f concat -i inputs.txt -c copy output.flv
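Writing inputs.txt by hand gets tedious when there are many segments; a small sketch of generating it (hypothetical helper; note that single quotes inside filenames need escaping, which this sketch does not handle):

```python
from pathlib import Path

def write_concat_list(files, list_path="inputs.txt"):
    """Write the file list consumed by `ffmpeg -f concat -i inputs.txt`.
    Each line has the form: file '<name>'"""
    lines = [f"file '{name}'" for name in files]
    Path(list_path).write_text("\n".join(lines) + "\n")
    return lines

print(write_concat_list(["1.flv", "2.flv", "3.flv"]))
# ["file '1.flv'", "file '2.flv'", "file '3.flv'"]
```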
HLS slicing
ffmpeg -i out.mp4 -c:v libx264 -c:a libfdk_aac -strict -2 -f hls out.m3u8
• -strict -2 indicates that AAC is used for the audio.
• -f hls converts the output to the m3u8 format.
Converting between video and images
Video to JPEG
ffmpeg -i test.flv -r 1 -f image2 image-%3d.jpeg
Video to GIF
ffmpeg -i out.mp4 -ss 00:00:00 -t 10 out.gif
Images to video
ffmpeg -f image2 -i image-%3d.jpeg images.mp4
Live streaming
Publish a stream
ffmpeg -re -i out.mp4 -c copy -f flv rtmp://server/live/streamName
Pull and save a stream
ffmpeg -i rtmp://server/live/streamName -c copy dump.flv
Relay a stream
ffmpeg -i rtmp://server/live/originalStream -c:a copy -c:v copy -f flv rtmp://server/live/h264Stream
Publish a live capture stream
ffmpeg -framerate 15 -f avfoundation -i "1" -s 1280x720 -c:v libx264 -f flv rtmp://localhost:1935/live/room
ffplay
Play YUV data
ffplay -pix_fmt nv12 -s 192x144 1.yuv
Play the Y plane of YUV data
ffplay -pix_fmt nv21 -s 640x480 -vf extractplanes='y' 1.yuv

Origin: blog.51cto.com/14367739/2402523