ffplay source code analysis (1): PacketQueue, FrameQueue, and their relationship to AVPacket and AVFrame

ffplay source address: http://ffmpeg.org/doxygen/trunk/ffplay_8c_source.html

ffplay has two kinds of queues, PacketQueue and FrameQueue. Let's analyze them in relation to the AVPacket and AVFrame structures.

1. The AVPacket and AVFrame structures and what they mean

AVPacket

Stores compressed data: compressed video, compressed audio, and compressed subtitle data. It usually holds the compressed data produced by demuxing, which is then passed to the decoder as input; alternatively it holds the output of an encoder, which is then passed to the muxer. For compressed video, one AVPacket typically contains one video frame. For compressed audio, one AVPacket may contain several frames.

AVFrame

Stores decoded audio or video data. An AVFrame must be allocated with av_frame_alloc and released with av_frame_free.

The relationship between the two

av_read_frame produces compressed data packets; there are generally three kinds (video, audio, and subtitle), all represented by AVPacket.

Then avcodec_send_packet and avcodec_receive_frame are called to decode the AVPacket into an AVFrame.

Note: starting from FFmpeg 3.x, avcodec_decode_video2 was deprecated and replaced by avcodec_send_packet and avcodec_receive_frame.

2. Queue relationships in ffplay

ffplay has three PacketQueues: a video packet queue, an audio packet queue, and a subtitle packet queue.

Correspondingly, there are three FrameQueues: a video frame queue, an audio frame queue, and a subtitle frame queue.

Queue initialization is done in the stream_open function, via packet_queue_init and frame_queue_init respectively. Note that PacketQueue does not manually allocate its AVPacket structures; it directly uses the AVPackets produced during demuxing. FrameQueue, by contrast, allocates its AVFrame structures manually with av_frame_alloc.

In the read_thread function, packets are read via av_read_frame, and then packet_queue_put adds each AVPacket to the PacketQueue.

In the video_thread function, decoded frames are obtained via get_video_frame, and then queue_picture adds each AVFrame to the FrameQueue.

So how are the two queues linked? This can be seen by analyzing the read_thread function:

First, read_thread creates the data structures needed for demuxing and decoding. Then it opens each of the three streams via stream_component_open. Finally, demuxed packets are added to the corresponding PacketQueue via av_read_frame. stream_component_open is mainly responsible for setting up decoding; for this, ffplay defines a dedicated Decoder data structure. Decoder has a queue member that points to its input PacketQueue, and this PacketQueue is specified by decoder_init, which is called inside stream_component_open. Once the PacketQueue is assigned, the decoding path (get_video_frame for video) pulls packets from that PacketQueue, decodes them into AVFrame structures, and finally queue_picture adds the decoded frames to the FrameQueue.
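The linkage described above can be shown with a heavily simplified, FFmpeg-free sketch. The names PacketQueue, Decoder, and decoder_init mirror ffplay's, but the bodies are stand-ins: the real PacketQueue holds AVPackets plus a mutex and condition variable, and the real decoder_init takes additional arguments; here an int counter is enough to show the wiring.

```c
#include <stddef.h>

/* Simplified stand-in for ffplay's PacketQueue. */
typedef struct PacketQueue {
    int nb_packets;            /* how many packets are queued */
} PacketQueue;

/* Simplified stand-in for ffplay's Decoder: it holds a pointer to
 * the one PacketQueue it drains (video, audio, or subtitle). */
typedef struct Decoder {
    PacketQueue *queue;        /* input queue, set by decoder_init */
} Decoder;

/* Mirrors the role of ffplay's decoder_init: point the decoder at
 * its input PacketQueue. */
static void decoder_init(Decoder *d, PacketQueue *queue)
{
    d->queue = queue;
}

/* Sketch of the consumption step: the decode path pulls one packet
 * from the decoder's own queue on the way to producing AVFrames for
 * the FrameQueue. Returns 1 if a packet was taken, 0 if empty. */
static int decoder_take_packet(Decoder *d)
{
    if (d->queue && d->queue->nb_packets > 0) {
        d->queue->nb_packets--;
        return 1;
    }
    return 0;
}
```

The key design point survives the simplification: the PacketQueue is the only bridge between the demuxing thread (producer) and the decoding thread (consumer), and decoder_init is where that bridge is attached.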

Origin www.cnblogs.com/renhui/p/12217958.html