A simple video player with FFmpeg + SDL

1. What is FFmpeg

FFmpeg is a set of open-source programs and libraries for recording, converting, and streaming digital audio and video. It provides a complete solution for recording, decoding, encoding, transcoding, multiplexing, demultiplexing, and filtering audio and video data. It includes libavcodec, a widely used audio/video codec library.

2. Audio and video file playback process:

Audio/video technology mainly covers container (encapsulation) formats, video compression coding, and audio compression coding; if network transmission is involved, streaming-media protocols are also required. To play a video file, a player goes through the following steps: protocol parsing, decapsulation (demultiplexing), audio/video decoding, and audio/video synchronization. The basic flow is: protocol parsing → decapsulation → decoding → synchronization → output.

Protocol parsing: when audio and video are transmitted over a network, various streaming protocols are used. Protocol parsing strips the streaming-protocol layer, such as HTTP or RTMP, and yields data in the corresponding standard container format.

Decapsulation: separate the input container-format data (MP4, TS, FLV, AVI, etc.) into compressed audio stream data and compressed video stream data.

Decoding: decode the compressed audio/video data (audio compression standards include AAC, MP3, AC-3, etc.; video compression standards include H.264, MPEG-2, etc.) into uncompressed raw audio/video data, such as YUV420P, RGB, or PCM.

Audio/video synchronization: synchronize the decoded video and audio data, and send them to the system's graphics card and sound card for playback.

3. Some core structures of FFmpeg

AVFormatContext: the decapsulation (demuxing) context; holds the file name, the audio and video streams, duration, bit rate, and other information;

AVCodecContext: Codec context, a structure that must be used for encoding and decoding, including information such as codec type, video width and height, number of audio channels, and sampling rate;

AVCodec: A structure that stores codec information;

AVStream: A structure that stores audio or video stream information;

AVPacket: stores encoded (compressed) audio or video data;

AVFrame: stores decoded (raw) audio or video data.

4. The FFmpeg video-playback flow

5. Main FFmpeg APIs:

1. Register all container formats and decoders: av_register_all() (deprecated and no longer needed since FFmpeg 4.0)

2. Open the input file and read its header: avformat_open_input() (the old av_open_input_file() has been removed)

3. Obtain audio and video stream information from the file: avformat_find_stream_info() (formerly av_find_stream_info())

4. Traverse all types of streams (audio stream, video stream, subtitle stream) to find the video stream.

5. Obtain the codec context AVCodecContext of the video stream. Only when the video's encoding format is known can the matching decoder be found.

6. Find the corresponding decoder by the codec id in the codec context: avcodec_find_decoder()

7. Open the codec:  avcodec_open2()

8. Continuously read packets of compressed data from the stream: av_read_frame()

9. Determine the packet's stream type and decode video packets: avcodec_decode_video2() (replaced by avcodec_send_packet()/avcodec_receive_frame() in newer FFmpeg)

10. Convert the decoded frame to the target pixel format and resolution: sws_scale()

11. The data obtained at this point is decoded raw data such as YUV, RGB, or PCM. It can be written out with fwrite(), or temporarily stored in a self-allocated buffer outBuffer (one frame's worth of the target format).

12. After decoding, release the decoder: avcodec_close() (avcodec_free_context() in newer FFmpeg)

13. Close the input file: avformat_close_input() (the old av_close_input_file() has been removed)

6. Use SDL to output the video stream to the screen

SDL is an open source cross-platform multimedia development library. SDL provides several functions for controlling images, sounds, and I/O. At present, SDL is mostly used in the development of multimedia applications such as games, emulators, and media players.

FFmpeg handles the decoding, and SDL plays the decoded audio and video data; together they implement a simple video player.

7. SDL function description:

SDL_Init(); // Initialize SDL; choose subsystem flags as needed

SDL_CreateWindow(); // Create the playback window

SDL_CreateRenderer(); // Create a renderer

SDL_CreateTexture(); // Create a texture

SDL_UpdateTexture(); // Update the texture with new frame data

SDL_RenderClear(); // Clear the previous frame

SDL_RenderCopy(); // Copy the texture to the renderer

SDL_RenderPresent(); // Present the current frame

8. Briefly explain the role of each variable:

SDL_Window is the playback window that pops up when using SDL.

SDL_Texture is used for the area where YUV texture data is displayed. One SDL_Texture corresponds to one frame of YUV data.

SDL_Renderer is used to render SDL_Texture texture to SDL_Window playback window.

SDL_Rect determines the area in which an SDL_Texture is displayed. Note: one SDL_Texture can be drawn to several different SDL_Rects, so the same content can appear at different positions in the SDL_Window (via SDL_RenderCopy()).

Their relationship: one SDL_Window owns one SDL_Renderer, which draws SDL_Texture data into SDL_Rect regions of the window.

Rendering process: SDL_CreateWindow → SDL_CreateRenderer → SDL_CreateTexture → SDL_UpdateTexture → SDL_RenderClear → SDL_RenderCopy → SDL_RenderPresent.


Origin blog.csdn.net/yinshipin007/article/details/132070611