[Audio and video processing] FFmpeg explained: command line, source code, compilation and installation

 

Hi everyone, and welcome to the Stop Refactoring channel.

In this issue we discuss FFmpeg.

Here is a question to start with: the FFmpeg command line is so powerful, so why would you ever need to call its library functions instead?

We discuss in this order:

1. FFmpeg command line instructions 

2. FFmpeg code structure

3. Compile and install FFmpeg 

FFmpeg command line instructions 

The FFmpeg command line makes audio and video processing quick, and it covers almost every common processing task.

Commonly used FFmpeg commands are shown in the figure, including viewing the supported codecs, remuxing (changing the container without re-encoding), transcoding, streaming a local file as a live broadcast, and more.
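As a rough sketch of those common commands (the file names and the RTMP URL below are placeholders, not from the original article):

```shell
# Skip gracefully if ffmpeg is not installed.
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg not installed"; exit 0; }

# View the codecs this build supports.
ffmpeg -hide_banner -codecs | head -n 15

# Remux only (change the container, no re-encode); file names are placeholders:
#   ffmpeg -i input.mp4 -c copy output.mkv
# Transcode the video track (mpeg4 is used here because it is a built-in encoder):
#   ffmpeg -i input.mp4 -c:v mpeg4 -b:v 2M output.avi
# Loop-stream a local file to an RTMP server as a live broadcast:
#   ffmpeg -re -stream_loop -1 -i input.mp4 -c copy -f flv rtmp://example.com/live/key
```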

 

In addition, FFmpeg provides the ffprobe tool for inspecting files: container and track information, per-frame details, and more.
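A small self-contained sketch of ffprobe usage; the sample clip is generated with the lavfi testsrc source so no external file is needed, and mpeg4 is chosen only because it is a built-in encoder:

```shell
command -v ffmpeg >/dev/null 2>&1 || { echo "ffmpeg/ffprobe not installed"; exit 0; }

# Create a one-second synthetic clip so the example is self-contained.
ffmpeg -y -hide_banner -loglevel error \
  -f lavfi -i testsrc=duration=1:size=320x240:rate=25 -c:v mpeg4 sample.mp4

# Container and track (stream) information, as JSON.
ffprobe -v error -show_format -show_streams -of json sample.mp4 | head -n 20

# Per-frame information (picture type, timestamps) for the first video track.
ffprobe -v error -select_streams v:0 -show_frames -of compact sample.mp4 | head -n 3
```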

 

The structure of an FFmpeg command line roughly mirrors the audio/video processing pipeline.

It is roughly divided into 5 parts: pre/global parameters, input files and their parameters, raw-frame filter settings, encoding settings, and output files and their parameters.
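The five parts can be seen in a single annotated invocation. This is a sketch: the synthetic lavfi input and the mpeg4 encoder are stand-ins chosen so the command runs without external files.

```shell
# The five parts, mapped onto one command (file names are placeholders):
#   1) pre/global parameters:          -y -loglevel warning
#   2) input file and its parameters:  -f lavfi -i testsrc=...
#   3) raw-frame filter settings:      -vf "scale=1280:720"
#   4) encoding settings:              -c:v mpeg4 -b:v 2M
#   5) output file and its parameters: anatomy.mp4
command -v ffmpeg >/dev/null 2>&1 || exit 0
ffmpeg -y -loglevel warning \
  -f lavfi -i testsrc=duration=1:size=1920x1080:rate=25 \
  -vf "scale=1280:720" -c:v mpeg4 -b:v 2M anatomy.mp4
```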

 

The official website documents the relevant parameters in detail, with examples. Once you understand the audio/video processing pipeline, the parameter you need is usually easy to find. For example, to combine multiple videos picture-in-picture, look for the layout settings among the video filters.
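For instance, a minimal picture-in-picture sketch using the scale and overlay video filters; the synthetic testsrc inputs stand in for real videos:

```shell
command -v ffmpeg >/dev/null 2>&1 || exit 0
# Two synthetic inputs stand in for real videos.
# Shrink input 1 to 160x90, then overlay it in the top-right corner of input 0.
ffmpeg -y -loglevel error \
  -f lavfi -i testsrc=duration=1:size=640x360:rate=25 \
  -f lavfi -i testsrc2=duration=1:size=640x360:rate=25 \
  -filter_complex "[1:v]scale=160:90[pip];[0:v][pip]overlay=x=W-w-10:y=10" \
  -c:v mpeg4 pip.mp4
```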

 

If you are a novice, FFmpeg simply has too many features to learn from front to back.

It is more practical to search online for an answer first, then check the corresponding official documentation. Over time, many of the settings will stick in your memory.

FFmpeg code structure

Before discussing the FFmpeg code structure, we need to answer the opening question:

Since the FFmpeg command line is so powerful, why re-implement its functionality by calling the library functions?

 

This is because, powerful as the FFmpeg command line is, it cannot handle certain exceptional cases, such as retrying when a live stream is interrupted.

In addition, some complex functions cannot be implemented through the command line at all, for example filling in placeholder frames while a live broadcast is interrupted, or composing many sources into a video-wall layout.

 

So if you want to build stable or more complex audio/video processing software, it is better to call the FFmpeg library functions than to invoke the FFmpeg command line directly.

The code structure of FFmpeg likewise corresponds to the audio/video processing pipeline; for that pipeline, see the previous issue, "Video Transcoding".

 

As shown in the figure, the library functions for container handling (muxing and demuxing) are mostly in libavformat; those for encoding and decoding are mostly in libavcodec; and those for raw-frame filtering are mostly in libavfilter.

 

The official website and the source code describe each library function, but reading through all of FFmpeg's code is unrealistic, and many of the descriptions are quite vague at first glance.

 

At the beginning, it is best to start from the corresponding sample code, then read the documentation of the library functions it uses.

 

Once you go deeper, you can use the FFmpeg command line as a reference and set the equivalent parameters in the corresponding functions.

 

In addition, the code of the ffmpeg, ffprobe, and ffplay tools is part of the source tree and can be consulted when necessary.

 

Of course, there are many FFmpeg code walkthroughs online, but version differences can make them useless as references, so it is best to start from the official sample code and trace into the source only when necessary.

Compile and install FFmpeg 

There are many detailed tutorials online, and the build settings differ between FFmpeg versions, so this is only a general outline.

If you only need the regular FFmpeg command line, you can download a prebuilt package for your platform from the official website.

 

If you need to customize FFmpeg (adding codecs, building shared libraries, enabling hardware codecs, and so on), you must download the source code and compile and install it yourself.

Taking Ubuntu 22.04 as an example, the build generally has three steps: install the basic dependencies; configure, compile, and install; and set the environment variables.

 

The first step is installing the basic dependencies. Only three packages are strictly required, but additional features, such as the H.265 codec, need the corresponding software installed as well.
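A sketch of this step on Ubuntu 22.04; the package names are typical choices rather than an authoritative list, with libx265-dev as the extra install for H.265 support:

```shell
# Ubuntu 22.04 example; package names are typical, not exhaustive.
sudo apt-get update

# Base toolchain: compiler, make, assembler, pkg-config.
sudo apt-get install -y build-essential yasm pkg-config

# Optional: extra codecs, e.g. x265 for H.265/HEVC support.
sudo apt-get install -y libx265-dev libnuma-dev
```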

 

The second step is configuring, compiling, and installing. If you have no special requirement, use the latest release.

Build settings are chosen through the configure script; any additional features must be enabled one by one. You can list the configurable items with ./configure --help.

After configuration, compile and install with the usual make commands.
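A sketch of the configure-and-build step; the flags shown are examples, not requirements, and the commands must be run inside the unpacked FFmpeg source tree:

```shell
# Exit quietly if we are not inside an FFmpeg source tree.
[ -x ./configure ] || { echo "run inside the FFmpeg source tree"; exit 0; }

./configure --help | head           # list all configurable items
./configure --enable-shared --enable-gpl --enable-libx265   # example flags
make -j"$(nproc)"                   # compile
sudo make install                   # install headers, libraries, and tools
```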

 

The third step is setting the environment variables. If FFmpeg runs inside Docker, add the environment variable settings to the startup command or the image.
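For example, assuming the default /usr/local install prefix (adjust the paths if you passed --prefix to configure):

```shell
# Make the installed tools and shared libraries visible to the shell and loader.
export PATH="/usr/local/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/lib:${LD_LIBRARY_PATH:-}"

# In a Docker image, bake the same settings in with ENV instructions:
#   ENV PATH=/usr/local/bin:$PATH
#   ENV LD_LIBRARY_PATH=/usr/local/lib
echo "$LD_LIBRARY_PATH"
```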

 

After installation, tools such as ffmpeg and ffprobe are available, and the functions of the shared libraries can be called from your own programs.

By the way, it is common to keep adding features such as extra codecs, and every addition requires recompiling and reinstalling FFmpeg.

It is therefore better to link your program against the FFmpeg dynamic libraries than to compile the static libraries into it, so that recompiling FFmpeg does not force a rebuild of your own software.
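One way to stay on the dynamic libraries is to let pkg-config supply the compile and link flags after `make install`; demo.c below is a hypothetical program of your own:

```shell
# Query compile/link flags for the FFmpeg shared libraries.
command -v pkg-config >/dev/null 2>&1 || exit 0
pkg-config --cflags --libs libavformat libavcodec libavutil 2>/dev/null || \
  echo "FFmpeg .pc files not found; check PKG_CONFIG_PATH"

# Hypothetical build of your own program against those shared libraries:
#   gcc demo.c $(pkg-config --cflags --libs libavformat libavcodec libavutil) -o demo
```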

 

Summary

Finally, this issue is only a brief introduction to FFmpeg. Many people complain that FFmpeg is too complicated and that the library function documentation is unclear; often you end up reading the source.

The likely reason is that audio and video are inherently complex, with many details in the processing pipeline. As our follow-up content goes deeper, many of these problems will become clear.


Origin blog.csdn.net/Daniel_Leung/article/details/131472882