FFmpeg audio and video programming: decoding video and saving frames as pictures

Basic knowledge

Video playback process

To play a video file streamed over the Internet, a video player has to go through the following steps: protocol resolution, demultiplexing (de-encapsulation), audio/video decoding, and audio/video synchronization. Playing a local file needs no protocol resolution and only involves demultiplexing, audio/video decoding, and audio/video synchronization.
[Figure: video playback process]
Protocol resolution parses the streaming-protocol data into data in the corresponding standard container (encapsulation) format.
Demultiplexing (de-encapsulation) separates the input container-format data into compressed audio stream data and compressed video stream data.
Decoding turns the compressed video/audio data into uncompressed raw video/audio data.
Audio/video synchronization uses the parameter information obtained during demultiplexing to synchronize the decoded video and audio data and sends them to the system's graphics card and sound card for playback.
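
The protocol-resolution and demultiplexing stages above both happen when FFmpeg opens the input; a minimal sketch (the URL and path are placeholders, not from the original post) showing that the same call serves a network stream and a local file, the latter simply skipping the protocol stage:

```c
#include <libavformat/avformat.h>

/* avformat_open_input() covers both protocol resolution and demuxing,
 * so a network URL and a local file path go through the same call. */
static int open_media(AVFormatContext **fmt_ctx, const char *url)
{
    /* url might be "rtmp://example.com/live/stream" or "input.mp4" (placeholders) */
    return avformat_open_input(fmt_ctx, url, NULL, NULL);
}
```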

FFmpeg decoding process

1. Register the components of FFmpeg
2. Allocate an FFmpeg AVFormatContext context structure
3. Open a video file
4. Find the video stream
5. Open the matching decoder for the video stream
6. Read a frame of compressed video data
7. Decode the compressed video frame into raw video frame data
8. Display the video frame
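
A minimal sketch of steps 1–5 in C, using the legacy API this post is based on (av_register_all() and the per-stream codec field are deprecated in newer FFmpeg releases); the helper name, file path and error handling are illustrative only:

```c
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Steps 1-5: register, allocate the context, open the file,
 * locate the video stream and open its decoder. */
static int open_video_decoder(const char *path, AVFormatContext **out_fmt, int *out_index)
{
    av_register_all();                                    /* 1. register components       */
    AVFormatContext *fmt_ctx = avformat_alloc_context();  /* 2. allocate AVFormatContext  */

    if (avformat_open_input(&fmt_ctx, path, NULL, NULL) < 0)   /* 3. open the video file  */
        return -1;
    if (avformat_find_stream_info(fmt_ctx, NULL) < 0)
        return -1;

    int video_index = -1;
    for (unsigned i = 0; i < fmt_ctx->nb_streams; i++)    /* 4. find the video stream     */
        if (fmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_index = (int)i;
            break;
        }
    if (video_index < 0)
        return -1;

    AVCodecContext *codec_ctx = fmt_ctx->streams[video_index]->codec;
    AVCodec *decoder = avcodec_find_decoder(codec_ctx->codec_id);
    if (!decoder || avcodec_open2(codec_ctx, decoder, NULL) < 0) /* 5. open the decoder   */
        return -1;

    *out_fmt = fmt_ctx;
    *out_index = video_index;
    return 0;
}
```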

FFmpeg decoding process

① Initialize the FFmpeg environment and context
② Open the video file and find the video stream
③ Find and open the decoder for that video stream
④ Read a frame of data from the video stream
⑤ If no video frame was obtained, go back to ④
⑥ Process the video frame data
⑦ Go back to ④
⑧ Release the allocated FFmpeg resources
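
A hedged sketch of the loop ④–⑦ and the cleanup in ⑧, still with the legacy avcodec_decode_video2(); fmt_ctx, codec_ctx and video_index continue the setup sketch above, and process_frame() is a hypothetical placeholder for step ⑥:

```c
AVPacket pkt;
AVFrame *frame = av_frame_alloc();
int got_picture = 0;

while (av_read_frame(fmt_ctx, &pkt) >= 0) {                 /* (4) read one packet      */
    if (pkt.stream_index == video_index &&
        avcodec_decode_video2(codec_ctx, frame, &got_picture, &pkt) >= 0 &&
        got_picture) {                                      /* (5) frame obtained?      */
        process_frame(frame);                               /* (6) hypothetical handler */
    }
    av_packet_unref(&pkt);                                  /* (7) back to (4)          */
}

av_frame_free(&frame);                                      /* (8) release resources    */
avcodec_close(codec_ctx);
avformat_close_input(&fmt_ctx);
```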

FFmpeg decoding functions

The main FFmpeg decoding functions (important):
▫ av_register_all(): Register all components.
▫ avformat_open_input(): Open the input video file.
▫ avformat_find_stream_info(): Get video file information.
▫ avcodec_find_decoder(): Find decoder.
▫ avcodec_open2(): Open the decoder.
▫ av_read_frame(): Read a frame of compressed data from the input file.
▫ avcodec_decode_video2(): Decode one frame of compressed data.
▫ avcodec_close(): Close the decoder.
▫ avformat_close_input(): Close the input video file.
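
As a side note, avcodec_decode_video2() is deprecated since FFmpeg 3.1; on newer releases the decode step is done with the send/receive pair, roughly as sketched below (codec_ctx, pkt and frame as in the loop sketch above):

```c
/* Newer decode API: feed one compressed packet, then drain the decoded frames. */
if (avcodec_send_packet(codec_ctx, &pkt) >= 0) {
    while (avcodec_receive_frame(codec_ctx, frame) >= 0) {
        /* handle one decoded frame here */
    }
}
```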

FFmpeg composition

FFmpeg contains a total of 8 libraries:
▫ avcodec: Codec (the most important library).
▫ avformat: container (encapsulation) format handling.
▫ avfilter: filter and effects processing.
▫ avdevice: input and output for various devices.
▫ avutil: utility library (most of the other libraries depend on it).
▫ postproc: post-processing.
▫ swresample: audio sample data format conversion.
▫ swscale: video pixel data format conversion.
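
For reference, these libraries map to the following headers in a typical FFmpeg installation (a standard include block for a decoding program):

```c
#include <libavcodec/avcodec.h>       /* avcodec:     encoding/decoding        */
#include <libavformat/avformat.h>     /* avformat:    container (de)muxing     */
#include <libavfilter/avfilter.h>     /* avfilter:    filters and effects      */
#include <libavdevice/avdevice.h>     /* avdevice:    capture/output devices   */
#include <libavutil/avutil.h>         /* avutil:      common utilities         */
#include <libpostproc/postprocess.h>  /* postproc:    post-processing          */
#include <libswresample/swresample.h> /* swresample:  audio sample conversion  */
#include <libswscale/swscale.h>       /* swscale:     pixel format and scaling */
```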

FFmpeg basic data structures

▫ AVFormatContext
  – Container (encapsulation) format context structure; the top-level structure that ties everything together and stores information about the container format of the video file.
▫ AVInputFormat
  – Each container format (such as FLV, MKV, MP4, AVI) corresponds to one of these structures.
▫ AVStream
  – Each video (audio) stream in the file corresponds to one of these structures.
▫ AVCodecContext
  – Codec context structure; stores information related to the video (audio) codec.
▫ AVCodec
  – Each video (audio) codec (such as the H.264 decoder) corresponds to one of these structures.
▫ AVPacket
  – Stores one frame of compressed (encoded) data, i.e. the audio/video data after compression.
▫ AVFrame
  – Stores one frame of decoded pixel (sample) data, i.e. the raw audio/video data.

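A short sketch of how these structures relate once a file is open (reusing fmt_ctx and video_index from the setup sketch earlier; stream->codec is the pre-4.0 field this post assumes):

```c
/* AVFormatContext owns the AVStream array; each stream carries its
 * AVCodecContext; AVCodec describes the matching decoder. */
AVStream       *st    = fmt_ctx->streams[video_index];
AVCodecContext *cctx  = st->codec;                       /* pre-4.0 field */
AVCodec        *codec = avcodec_find_decoder(cctx->codec_id);

printf("container: %s, video codec: %s, %dx%d\n",
       fmt_ctx->iformat->name,
       codec ? codec->name : "unknown",
       cctx->width, cctx->height);
```
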
FFmpeg data structure supplement

FFmpeg data structure analysis
▫ AVFormatContext
  – iformat: the AVInputFormat of the input video
  – nb_streams: number of AVStreams in the input video
  – streams: array of AVStream pointers for the input video
  – duration: duration of the input video (in microseconds)
  – bit_rate: bit rate of the input video
▫ AVInputFormat
  – name: container format name
  – long_name: long name of the container format
  – extensions: file extensions of the container format
  – id: container format ID
  – plus some container-format processing interface functions
▫ AVStream
  – id: stream ID (serial number)
  – codec: the AVCodecContext corresponding to the stream
  – time_base: the time base of the stream
  – r_frame_rate: the frame rate of the stream
▫ AVCodecContext
  – codec: the AVCodec in use
  – width, height: width and height of the image (video only)
  – pix_fmt: pixel format (video only)
  – sample_rate: sample rate (audio only)
  – channels: number of channels (audio only)
  – sample_fmt: sample format (audio only)
▫ AVCodec
  – name: codec name
  – long_name: codec long name
  – type: codec type
  – id: codec ID
  – plus some codec interface functions
▫ AVPacket
  – pts: presentation (display) timestamp
  – dts: decoding timestamp
  – data: compressed (encoded) data
  – size: size of the compressed data
  – stream_index: index of the AVStream this packet belongs to
▫ AVFrame
  – data: decoded image pixel data (audio sample data)
  – linesize: for video, the size in bytes of one line of pixels in the image; for audio, the size of the entire audio frame
  – width, height: width and height of the image (video only)
  – key_frame: whether this is a key frame (video only)
  – pict_type: frame (picture) type, e.g. I, P, B (video only)

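A hedged sketch that prints a few of these fields while decoding (fmt_ctx, pkt and frame as in the sketches above; av_get_picture_type_char() comes from libavutil, and AV_TIME_BASE converts the microsecond duration):

```c
/* Container-level fields from AVFormatContext */
printf("duration: %.2f s, bit_rate: %lld b/s\n",
       fmt_ctx->duration / (double)AV_TIME_BASE,
       (long long)fmt_ctx->bit_rate);

/* Compressed-side fields from AVPacket */
printf("packet: stream=%d pts=%lld dts=%lld size=%d\n",
       pkt.stream_index, (long long)pkt.pts, (long long)pkt.dts, pkt.size);

/* Decoded-side fields from AVFrame */
printf("frame: %dx%d key_frame=%d pict_type=%c\n",
       frame->width, frame->height, frame->key_frame,
       av_get_picture_type_char(frame->pict_type));
```
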
FFmpeg decoding flowchart

[Figure: FFmpeg decoding flowchart]

Reference blog links

https://blog.csdn.net/leixiaohua1020/article/details/18893769
https://blog.csdn.net/qq_15893929/article/details/83009572

Example
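
A minimal, hedged sketch of what the title describes: open a file, decode the first video frame, convert it to RGB24 with swscale and write it out as a PPM picture. It uses the legacy 3.x-era API from the sections above; the default input name, output name and error handling are illustrative, not the original post's code.

```c
#include <stdio.h>
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>

/* Write one RGB24 frame as a binary PPM image. */
static void save_ppm(const uint8_t *rgb, int linesize, int w, int h, const char *path)
{
    FILE *f = fopen(path, "wb");
    if (!f) return;
    fprintf(f, "P6\n%d %d\n255\n", w, h);
    for (int y = 0; y < h; y++)
        fwrite(rgb + y * linesize, 1, (size_t)w * 3, f);
    fclose(f);
}

int main(int argc, char *argv[])
{
    const char *in = argc > 1 ? argv[1] : "input.mp4";   /* illustrative default */
    AVFormatContext *fmt_ctx = NULL;
    int video_index = -1;

    av_register_all();
    if (avformat_open_input(&fmt_ctx, in, NULL, NULL) < 0 ||
        avformat_find_stream_info(fmt_ctx, NULL) < 0)
        return -1;

    for (unsigned i = 0; i < fmt_ctx->nb_streams; i++)
        if (fmt_ctx->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO) {
            video_index = (int)i;
            break;
        }
    if (video_index < 0)
        return -1;

    AVCodecContext *cctx = fmt_ctx->streams[video_index]->codec;
    AVCodec *dec = avcodec_find_decoder(cctx->codec_id);
    if (!dec || avcodec_open2(cctx, dec, NULL) < 0)
        return -1;

    AVFrame *frame = av_frame_alloc();
    uint8_t *rgb_data[4];
    int rgb_linesize[4];
    av_image_alloc(rgb_data, rgb_linesize, cctx->width, cctx->height, AV_PIX_FMT_RGB24, 1);

    struct SwsContext *sws = sws_getContext(cctx->width, cctx->height, cctx->pix_fmt,
                                            cctx->width, cctx->height, AV_PIX_FMT_RGB24,
                                            SWS_BILINEAR, NULL, NULL, NULL);

    AVPacket pkt;
    int got_picture = 0, saved = 0;
    while (!saved && av_read_frame(fmt_ctx, &pkt) >= 0) {
        if (pkt.stream_index == video_index &&
            avcodec_decode_video2(cctx, frame, &got_picture, &pkt) >= 0 && got_picture) {
            /* Convert the decoded frame (e.g. YUV420P) to packed RGB24
             * and dump it as frame.ppm. */
            sws_scale(sws, (const uint8_t * const *)frame->data, frame->linesize,
                      0, cctx->height, rgb_data, rgb_linesize);
            save_ppm(rgb_data[0], rgb_linesize[0], cctx->width, cctx->height, "frame.ppm");
            saved = 1;
        }
        av_packet_unref(&pkt);
    }

    sws_freeContext(sws);
    av_freep(&rgb_data[0]);
    av_frame_free(&frame);
    avcodec_close(cctx);
    avformat_close_input(&fmt_ctx);
    return saved ? 0 : -1;
}
```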

Original post: blog.csdn.net/weixin_39308337/article/details/106962777