Reading the ijkplayer source with questions in mind

Questions

  • How do the main data flows differ?

  • Buffer design

  • Memory management logic

  • How audio and video are played back

  • Audio and video synchronization

  • Seek issues: flushing the buffers, playback-time display, inaccurate positioning when keyframe intervals are large...

  • How are resources released on stop? Should that move to a background thread?

  • How is a poor network handled? If frames arrive slower than they are consumed and playback is not paused, it will stutter continuously; does the player pause proactively?

  • How does one decoder abstraction cover both FFmpeg software decoding and VideoToolbox? How is that architecture designed?

Data flow

For a more detailed look at the main data flow, see the separate ijkPlayer main-flow analysis.

Audio

  • av_read_frame

  • packet_queue_put

  • audio_thread+decoder_decode_frame+packet_queue_get_or_buffering

  • frame_queue_peek_writable+frame_queue_push

  • audio_decode_frame + frame_queue_peek_readable, the data ends up in is->audio_buf

sdl_audio_callback receives the output stream as a parameter. This is the function the upper audio-playback layer uses to fill its buffers; on iOS, for example, AudioQueue's IJKSDLAudioQueueOuptutCallback calls into it and then hands the filled data to the AudioQueue.
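A minimal sketch of that callback-driven fill, modeled on ffplay's sdl_audio_callback; the types and helpers (FFPlayer, VideoState, audio_decode_frame) are ijkplayer's own, and error handling is reduced to outputting silence:

/* Hedged sketch: the upper platform layer (AudioQueue on iOS) asks for `len`
 * bytes; we drain is->audio_buf and decode a new frame whenever it runs dry. */
static void sdl_audio_callback_sketch(void *opaque, Uint8 *stream, int len)
{
    FFPlayer   *ffp = opaque;
    VideoState *is  = ffp->is;

    while (len > 0) {
        if (is->audio_buf_index >= is->audio_buf_size) {
            /* current buffer fully consumed: pull and convert the next decoded frame */
            int audio_size = audio_decode_frame(ffp);
            if (audio_size < 0) {
                /* decode error: output silence so playback keeps its cadence */
                is->audio_buf      = NULL;
                is->audio_buf_size = SDL_AUDIO_MIN_BUFFER_SIZE;
            } else {
                is->audio_buf_size = audio_size;
            }
            is->audio_buf_index = 0;
        }
        int len1 = is->audio_buf_size - is->audio_buf_index;
        if (len1 > len)
            len1 = len;
        if (is->audio_buf)
            memcpy(stream, is->audio_buf + is->audio_buf_index, len1);
        else
            memset(stream, 0, len1);
        len    -= len1;
        stream += len1;
        is->audio_buf_index += len1;
    }
}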

Video

Packet reading is the same as for audio.

video_thread, then ffpipenode_run_sync; with hardware decoding this lands in videotoolbox_video_thread, which reads packets via ffp_packet_queue_get_or_buffering.

In the VTDecoderCallback decode callback, SortQueuePush(ctx, newFrame); loads the decoded pixelBuffer into an ordered queue.

GetVTBPicture takes the frame back out of the ordered queue; the ordered queue is only a temporary structure used for sorting, an idea worth borrowing. queue_picture then puts the decoded frame into the frame buffer, and it is displayed through video_refresh + video_image_display2 + [IJKSDLGLView display:].

The final texture is generated inside the render; for VTB the pixelBuffer is uploaded in yuv420sp_vtb_uploadTexture. A Render role is used so the rendering details are abstracted away; the shader is in IJK_GLES2_getFragmentShader_yuv420sp.

Conclusion: there is no big difference in the main flow.

Buffer design

packetQueue:

1. Data structure design

packetQueue uses two linked lists: one holds the data, the other holds reusable empty nodes. The data list runs from first_pkt to last_pkt; new packets are appended after last_pkt and packets are taken from first_pkt. The reuse list starts at recycle_pkt: after a node's data has been taken out, the emptied node is pushed onto the head of the recycle list and becomes the new recycle_pkt; when data is stored, a node is taken from recycle_pkt and reused.
The nodes act like boxes: when data arrives, a box is taken to hold it and put on the data list; once the data is taken out, the box goes back to the reuse list.
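To make the box metaphor concrete, here is a minimal sketch of the two-list queue, assuming FFmpeg's AVPacket; field names follow ijkplayer, but locking, blocking, and abort handling are omitted:

#include <libavcodec/avcodec.h>

/* Hedged sketch of the two-list packet queue: one list holds filled nodes
 * (first_pkt .. last_pkt), the other (recycle_pkt) holds empty nodes that are
 * reused instead of being allocated and freed for every packet. */
typedef struct MyAVPacketList {
    AVPacket pkt;
    struct MyAVPacketList *next;
} MyAVPacketList;

typedef struct PacketQueue {
    MyAVPacketList *first_pkt, *last_pkt;   /* filled nodes, FIFO order   */
    MyAVPacketList *recycle_pkt;            /* empty nodes ready to reuse */
    int nb_packets;
    int size;
} PacketQueue;

static int packet_queue_put_sketch(PacketQueue *q, AVPacket *pkt)
{
    MyAVPacketList *node = q->recycle_pkt;
    if (node)
        q->recycle_pkt = node->next;        /* take a box off the empty list */
    else
        node = av_malloc(sizeof(*node));    /* or make a new one */
    if (!node)
        return -1;

    node->pkt  = *pkt;                      /* copy by value: the buffer owns the data now */
    node->next = NULL;
    if (!q->last_pkt) q->first_pkt = node;  /* append to the tail of the filled list */
    else              q->last_pkt->next = node;
    q->last_pkt = node;
    q->nb_packets++;
    q->size += node->pkt.size + sizeof(*node);
    return 0;
}

static int packet_queue_get_sketch(PacketQueue *q, AVPacket *pkt)
{
    MyAVPacketList *node = q->first_pkt;
    if (!node)
        return 0;                           /* empty; the real code blocks and waits here */
    q->first_pkt = node->next;
    if (!q->first_pkt) q->last_pkt = NULL;
    q->nb_packets--;
    q->size -= node->pkt.size + sizeof(*node);
    *pkt = node->pkt;                       /* hand the data back by value */

    node->next     = q->recycle_pkt;        /* the emptied box goes back on the recycle list */
    q->recycle_pkt = node;
    return 1;
}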

2. Blocking and flow control

When there is no data to take, there are several possible treatments: return immediately, or block and wait. Here it blocks and waits, and also pauses playback. So, answering the question above about poor networks, the effect seen from outside is: when the network is bad, playback stops, then plays smoothly for a while, then stutters again; stuttering and playing are clearly separated.
On the input side there is no flow control inside the queue itself, so why doesn't the buffered data grow without bound?
It does block, but the blocking is not inside packetQueue; it is in the read_thread function:


if (ffp->infinite_buffer < 1 && !is->seek_req &&
        (is->audioq.size + is->videoq.size + is->subtitleq.size > ffp->dcc.max_buffer_size
      || (   stream_has_enough_packets(is->audio_st, is->audio_stream, &is->audioq, MIN_FRAMES)
          && stream_has_enough_packets(is->video_st, is->video_stream, &is->videoq, MIN_FRAMES)
          && stream_has_enough_packets(is->subtitle_st, is->subtitle_stream, &is->subtitleq, MIN_FRAMES)))) {
    if (!is->eof) {
        ffp_toggle_buffering(ffp, 0);
    }
    /* wait 10 ms */
    SDL_LockMutex(wait_mutex);
    SDL_CondWaitTimeout(is->continue_read_thread, wait_mutex, 10);
    SDL_UnlockMutex(wait_mutex);
    continue;
}

Simplified, the condition is:

  • infinite_buffer: no limit when infinite buffering is enabled

  • is->audioq.size + is->videoq.size + is->subtitleq.size > ffp->dcc.max_buffer_size limits the total data size

  • stream_has_enough_packets limits the number of packets

Because the packet-count limit is set to 50,000 it rarely triggers, but the data size is capped at roughly 15 MB.
Two things here are worth noting:

  • The limit is on data size rather than packet count, because different resolutions make the memory held by a single packet vary enormously, and memory is what we actually care about.

  • It waits 10 ms at a time instead of blocking indefinitely on a condition-variable signal. The design is simpler and avoids frequent wait + signal traffic; this deserves careful thought, but intuitively it feels like a good trade-off.

frameQueue:

A plain array stores the data; it can be viewed as a ring with one contiguous run of filled slots and one run of empty slots. rindex marks the start of the filled data, i.e. the read index; windex marks the start of the empty slots, i.e. the write index.

The slots are reused as the indices wrap around. size is the current number of filled slots and max_size is the capacity; a writer blocks when the queue is full, and a reader blocks when it is empty.

One odd thing is rindex_shown: reads do not use position rindex but rindex + rindex_shown. Its role only becomes clear later, together with how it is used.
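A minimal sketch of the ring, assuming ijkplayer's Frame type and SDL threading primitives; serial and rindex_shown handling are left out since they are discussed separately:

#define FRAME_QUEUE_SIZE 16

/* Hedged sketch: a fixed array reused in a circle. Writers block when
 * size == max_size, readers block when size == 0. */
typedef struct FrameQueue {
    Frame queue[FRAME_QUEUE_SIZE];
    int rindex;        /* next slot to read                           */
    int windex;        /* next slot to write                          */
    int size;          /* occupied slots                              */
    int max_size;
    int rindex_shown;  /* 1 once the frame at rindex has been shown   */
    SDL_mutex *mutex;
    SDL_cond  *cond;
} FrameQueue;

/* wait until there is a free slot, then return it (caller fills it and pushes) */
static Frame *frame_queue_peek_writable_sketch(FrameQueue *f)
{
    SDL_LockMutex(f->mutex);
    while (f->size >= f->max_size)
        SDL_CondWait(f->cond, f->mutex);
    SDL_UnlockMutex(f->mutex);
    return &f->queue[f->windex];
}

static void frame_queue_push_sketch(FrameQueue *f)
{
    if (++f->windex == f->max_size)
        f->windex = 0;                 /* wrap around: the array is a ring */
    SDL_LockMutex(f->mutex);
    f->size++;
    SDL_CondSignal(f->cond);           /* wake a blocked reader */
    SDL_UnlockMutex(f->mutex);
}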

What serial means is not yet clear either; it is explained later.
Conclusion: the buffer design is completely different from mine, but both use the idea of reuse, with nodes acting as boxes that carry the data. Performance is fine either way, though I prefer my design: frames and packets share a unified structure, and it also supports sorting.
Memory Management

packet management

av_read_frame produces the packet with an initial reference count of 1. The packet variable used to receive it is a temporary, i.e. stack memory. When it is added to the queue, pkt1->pkt = *pkt; stores the packet by value, so the data held in the buffer is decoupled from the outer temporary variable.

packet_queue_get_or_buffering takes the packet out, again copying by value.
Finally av_packet_unref releases the buf associated with the packet, while the temporary packet struct itself can keep being reused.
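A condensed, illustrative walkthrough of that lifecycle, reusing the sketch functions from the packet-queue example above (in the real player the two halves run on different threads):

#include <libavformat/avformat.h>

/* Hedged sketch of one packet's journey through the queue; error handling
 * and threading are stripped out for illustration only. */
static void packet_lifecycle_sketch(AVFormatContext *ic, AVCodecContext *avctx,
                                    PacketQueue *q)
{
    AVPacket pkt;                        /* read thread's temporary "box"         */
    if (av_read_frame(ic, &pkt) < 0)     /* on success the payload has refcount 1 */
        return;
    packet_queue_put_sketch(q, &pkt);    /* stored by value: ownership of the buf
                                            reference moves into the queue        */

    AVPacket dpkt;                       /* decode thread's temporary "box"       */
    if (packet_queue_get_sketch(q, &dpkt) > 0) {
        avcodec_send_packet(avctx, &dpkt);  /* the decoder takes its own reference
                                               on the data if it needs to keep it */
        av_packet_unref(&dpkt);          /* releases the queue's reference; the
                                            struct can be reused for the next packet */
    }
}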

One thing to note: avcodec_send_packet returning EAGAIN means the decoder cannot accept a new packet until some frames have been drained, hence:

d->packet_pending = 1;
av_packet_move_ref(&d->pkt, &pkt);

The packet is stashed in d->pkt; on the next loop iteration a frame is drained first, then the packet is restored with the reverse operation:

if (d->packet_pending) {
    av_packet_move_ref(&pkt, &d->pkt);
    d->packet_pending = 0;
}

This could happen with B-frames: a B-frame depends on frames that come after it, so it cannot be decoded until the following frames have arrived, and at that point several frames become available at once; the decoder might then refuse new packets. The ijkplayer code here does not quite look like that, though, since frames are not drained one per packet but read repeatedly until EAGAIN, so the exact cause of the error is unclear. To be investigated.

Also note that av_packet_move_ref is a complete move: the whole value of the source is moved to the destination and the source is reset. The data only changes owner; the buf reference count does not change.

Video frame memory management

In ffplay_video_thread, get_video_frame reads a frame from the decoder into the frame variable, at which point its reference count is 1. If an error occurs, av_frame_unref releases the frame's buf memory while the frame struct itself remains usable. Without an error, av_frame_unref is also called eventually, so every frame that is read gets exactly one unref to pair with its initialization. With reference-counted memory management, the important principle is that refs and unrefs correspond one to one.

But at this point the frame has only just been obtained and put into the buffer, not yet used; if its buf were released now, the data would be gone by the time it is played. So how is this handled? In queue_picture the frame goes into the buffer via SDL_VoutFillFrameYUVOverlay; this function dispatches upward and does different things for different decoders, e.g. func_fill_frame in ijksdl_vout_overlay_ffmpeg.c.
There are two possibilities:

One: the overlay and the frame share memory, and the frame's memory is displayed directly. This is the case for YUV420P, because the OpenGL program can display that color space directly. All that is needed is to take an extra reference on the frame so it is not freed prematurely; the key line is av_frame_ref(opaque->linked_frame, frame);

The other: no sharing, because a format conversion is needed. Another frame, opaque->managed_frame, is created and the conversion writes into it. The data now lives somewhere new, so the original frame can be dropped; no ref is taken and it is released naturally.
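A simplified sketch of the two paths, loosely modeled on func_fill_frame; the opaque fields (linked_frame, managed_frame, img_convert_ctx) follow ijkplayer, but this is an illustration rather than the verbatim source:

#include <libswscale/swscale.h>

static int fill_frame_sketch(SDL_VoutOverlay *overlay, const AVFrame *frame)
{
    SDL_VoutOverlay_Opaque *opaque = overlay->opaque;

    if (frame->format == AV_PIX_FMT_YUV420P) {
        /* Path 1: OpenGL can display YUV420P directly, so the overlay just
         * shares the decoder's memory and takes one extra reference so the
         * frame stays alive until it has been rendered. */
        av_frame_unref(opaque->linked_frame);
        av_frame_ref(opaque->linked_frame, frame);
        /* overlay->pixels[i] = opaque->linked_frame->data[i]; ... */
    } else {
        /* Path 2: the pixel format must be converted, so the data is scaled
         * into a separately owned frame (managed_frame); no extra reference
         * on the source frame is needed and it may be released by the caller. */
        opaque->img_convert_ctx = sws_getCachedContext(
            opaque->img_convert_ctx,
            frame->width, frame->height, (enum AVPixelFormat)frame->format,
            frame->width, frame->height, AV_PIX_FMT_YUV420P,
            SWS_BICUBIC, NULL, NULL, NULL);
        sws_scale(opaque->img_convert_ctx,
                  (const uint8_t * const *)frame->data, frame->linesize,
                  0, frame->height,
                  opaque->managed_frame->data, opaque->managed_frame->linesize);
    }
    return 0;
}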

Audio frame processing

In audio_thread, decoder_decode_frame keeps producing new frames. As with video, the frame memory originates here; after a frame is decoded its reference count is 1. Audio format conversion happens at the playback stage, so here the frame is simply stored: av_frame_move_ref(af->frame, frame);. The buffer element carries the frame from then on.

When the data is consumed, frame_queue_next calls av_frame_unref on that frame and releases it, the same as for video. One subtlety: when the upper player reads the audio data, the frame must still be alive, because if no format conversion is needed the data is read directly out of the frame. So the frame can only be released after the player's buffer has been filled. The unref happens in frame_queue_next, which is only called when moving on to the next frame, and moving on only happens after the current frame's data has been fully read; so the frame is released exactly after its data has been consumed, which is correct.

// only pull the next frame once the current frame's data has been fully consumed
if (is->audio_buf_index >= is->audio_buf_size) {
    audio_size = audio_decode_frame(ffp);

How audio and video are played

  • Audio is played with AudioQueue:

  • Create the AudioQueue: AudioQueueNewOutput

  • Start with AudioQueueStart, pause with AudioQueuePause, stop with AudioQueueStop

  • In the callback IJKSDLAudioQueueOuptutCallback, call down into the lower-level fill function to fill the AudioQueue buffer.

  • Use AudioQueueEnqueueBuffer to enqueue the filled AudioQueue buffer for playback.

These are all standard AudioQueue operations. The notable part is the AudioStreamBasicDescription built at creation time, i.e. the format the audio will be played in. It is determined by the format of the audio source; see IJKSDLGetAudioStreamBasicDescriptionFromSpec: apart from the format being fixed to PCM, everything else is copied from the lower layer's format. That leaves a lot of freedom; the audio source only needs to be decodable to PCM.
The format is decided by the lower layer in audio_open, and the logic is:

Build a desired format wanted_spec from the source file's format, offer it to the upper layer as the preferred format, and take whatever actual format the upper layer returns as the result. It works like a negotiation, and the idea is worth learning from.

If the upper layer cannot accept the format, it returns an error and the lower layer adjusts the channel count and sample rate, then tries again.
However, the sample format is fixed to S16: signed 16-bit integers. Bit depth is the storage size of each sample; 16 bits with a sign gives a range of [-2^15, 2^15 - 1], and 2^15 is 32,768, which is enough dynamic range.

Since everything is PCM and therefore uncompressed, the only deciding factors are sample rate, channel count, and sample format. The sample format is fixed to S16, and the negotiation with the upper layer settles the sample rate and channel count. This is a nice example of layered architecture: the lower layer is generic, and the top differs per platform.
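As an illustration of the fixed S16 part of that negotiation, here is a hedged sketch of building the AudioStreamBasicDescription for signed 16-bit interleaved PCM (cf. IJKSDLGetAudioStreamBasicDescriptionFromSpec); only the sample rate and channel count come from the negotiation, and the helper name is made up:

#include <AudioToolbox/AudioToolbox.h>

static AudioStreamBasicDescription make_s16_pcm_asbd(double sample_rate, UInt32 channels)
{
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = sample_rate;                   /* negotiated with the lower layer  */
    asbd.mFormatID         = kAudioFormatLinearPCM;         /* always PCM after decoding        */
    asbd.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger
                           | kLinearPCMFormatFlagIsPacked;  /* S16, interleaved                 */
    asbd.mBitsPerChannel   = 16;                            /* fixed bit depth (AUDIO_S16SYS)   */
    asbd.mChannelsPerFrame = channels;                      /* negotiated with the lower layer  */
    asbd.mFramesPerPacket  = 1;                             /* uncompressed: 1 frame per packet */
    asbd.mBytesPerFrame    = asbd.mBitsPerChannel / 8 * asbd.mChannelsPerFrame;
    asbd.mBytesPerPacket   = asbd.mBytesPerFrame * asbd.mFramesPerPacket;
    return asbd;
}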

Video playback:

Everything is drawn with OpenGL ES through IJKSDLGLView, whose layerClass is overridden so the layer type becomes CAEAGLLayer, which can display OpenGL ES rendering. All picture types go through this view; the per-format differences are abstracted into a Render role, whose main methods are:

  • setupRenderer builds a render

  • IJK_GLES2_Renderer_renderOverlay draws an overlay.

Building a render includes:

  • Building the program from a format-specific fragment shader and a shared vertex shader

  • Providing the MVP matrix

  • Setting the texture and vertex coordinate data

Rendering with a render includes:

  • func_uploadTexture, specialized per render type, performs the texture upload

  • Drawing with glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); GL_TRIANGLE_STRIP is used instead of GL_TRIANGLES, which saves vertices.

The key method is the one that provides the textures; the differences lie in the color space and how the components are laid out:

RGB comes in three flavors: 565, 888, and 8888. The RGB components are interleaved, so there is only one plane; 565 means the r, g, b components occupy 5, 6, and 5 bits, and likewise for 888 and 8888, the latter adding an alpha component. So a pixel is 2 bytes for 565, 3 bytes for 888, and 4 bytes for 8888.

glTexImage2D(GL_TEXTURE_2D,
                    0,
                    GL_RGBA,
                    widths[plane],
                    heights[plane],
                    0,
                    GL_RGBA,
                    GL_UNSIGNED_BYTE,
                    pixels[plane]);

When building the textures, the differences are in the format and type parameters.

  • yuv420p, the most common: the y, u, v components are fully separated into 3 planes with a size ratio of 4:1:1, so the u and v textures are half the width and height of the y texture. Since each component gets its own texture, every texture is single-channel and uses the GL_LUMINANCE format.

  • yuv420sp also has the 4:1:1 ratio, but u and v are not split into two planes; they are interleaved in one plane, so the layout is not uuuuvvvv but uvuvuvuv. Two textures are built: y as before, and a uv texture using the two-channel GL_RG_EXT format, a quarter the size of y (half in each dimension). The fragment shader reads differently in this case:
// 3-plane version (yuv420p)
yuv.y = (texture2D(us2_SamplerY, vv2_Texcoord).r - 0.5);
yuv.z = (texture2D(us2_SamplerZ, vv2_Texcoord).r - 0.5);
// 2-plane version (yuv420sp)
yuv.yz = (texture2D(us2_SamplerY, vv2_Texcoord).rg - vec2(0.5, 0.5));

With u and v in the same texture, texture2D's rg components fetch both at once.

  • yuv444p I do not fully understand; from the fragment shader it looks like each pixel carries two versions of yuv which are then interpolated.
    Finally yuv420p_vtb: this is the data produced by VideoToolbox hardware decoding. Since it lives in a CVPixelBuffer, the iOS system method is used to create the texture directly.

ijkplayer is on OpenGL ES 2.0; the two-channel plane could also be handled with GL_LUMINANCE_ALPHA in other setups.
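A minimal sketch of the two-texture upload for yuv420sp (NV12), assuming OpenGL ES 2.0 with the EXT_texture_rg extension; texture creation and parameter setup are omitted:

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>

/* Hedged sketch: a single-channel luminance texture for Y and a two-channel
 * texture for the interleaved UV plane; the UV plane is half the width and
 * half the height of the Y plane. */
static void upload_nv12_textures(GLuint tex_y, GLuint tex_uv,
                                 int width, int height,
                                 const GLvoid *y_plane, const GLvoid *uv_plane)
{
    glBindTexture(GL_TEXTURE_2D, tex_y);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
                 width, height, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, y_plane);   /* 1 byte per pixel */

    glBindTexture(GL_TEXTURE_2D, tex_uv);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RG_EXT,
                 width / 2, height / 2, 0,
                 GL_RG_EXT, GL_UNSIGNED_BYTE, uv_plane);     /* 2 bytes per pixel: U,V interleaved */
}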

Audio and video synchronization

Look at audio first: there is no flow control on the audio side, the player simply fills whatever data the upper layer asks for, and I see no throttling of that fill. So the audio clock is the master clock by default, and no adjustment is applied to the audio.

1. Timing control of video display

video_refresh controls the video; the actual display call is video_display2, and reaching it means it is time to show a frame, so it is the checkpoint.
A few variables need to be understood first:

  • is->frame_timer, the time at which the previous frame was shown

  • delay, the time gap from this frame to the next
if (isnan(is->frame_timer) || time < is->frame_timer) {
    is->frame_timer = time;
}

If the previous display time is after the current time, something is wrong with the data; adjust it to the current time.

if (time < is->frame_timer + delay) {
   *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
   goto display;
}

is->frame_timer + delay is the time the current frame should be shown; if it is later than the current time, it is not yet time to show the frame.

There is a trap here: goto display does not actually display anything, because inside the display block there is a check on is->force_refresh, whose default value is false. So jumping straight to display effectively means "do nothing and end this round". Conversely, if the scheduled time is earlier than the current time, the frame must be shown immediately: the display time is advanced with is->frame_timer += delay;, and further down is->force_refresh = 1; is set, which is what really triggers the display.

From the above, the basic flow is: at first it is not yet time to show the current frame, so each round goes to display and waits for the next cycle; after several rounds nothing has changed except time passing; finally the scheduled time arrives, the current frame is shown, and frame_timer is updated to that frame's display time. Then the process repeats for the next frame. This raises a question: why is frame_timer updated by adding delay rather than set directly to the current time?

If it were set to the current time: since time >= frame_timer + delay, frame_timer would end up slightly larger than necessary, so the next frame's scheduled time, frame_timer + delay, would also be larger. The same happens on every frame, each one drifting a little later, and the error could accumulate into a noticeable offset overall.

if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX){
    is->frame_timer = time;
}

When frame_timer has fallen too far behind, it is reset directly to the current time; this corrects the state immediately, and playback returns to the right track afterwards.
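Pulling the snippets above together, a hedged sketch of the pacing logic in video_refresh (frame dequeueing, frame dropping and clock updates are elided; the wrapper name and the last_duration parameter are mine):

static void video_refresh_sketch(FFPlayer *ffp, double last_duration, double *remaining_time)
{
    VideoState *is = ffp->is;
    double time  = av_gettime_relative() / 1000000.0;
    double delay = compute_target_delay(last_duration, is);   /* frame duration adjusted for A/V sync */

    if (isnan(is->frame_timer) || time < is->frame_timer)
        is->frame_timer = time;                 /* bad or backward timer: resynchronize */

    if (time < is->frame_timer + delay) {
        /* too early for the current frame: record how long to sleep, draw nothing */
        *remaining_time = FFMIN(is->frame_timer + delay - time, *remaining_time);
        goto display;
    }

    is->frame_timer += delay;                   /* advance by delay, not to `time`, to avoid drift */
    if (delay > 0 && time - is->frame_timer > AV_SYNC_THRESHOLD_MAX)
        is->frame_timer = time;                 /* hopelessly behind: snap back to the wall clock */

    /* ... pop the frame from pictq, update the video clock ... */
    is->force_refresh = 1;

display:
    if (is->force_refresh && is->show_mode == SHOW_MODE_VIDEO)
        video_display2(ffp);                    /* only draws when force_refresh is set */
    is->force_refresh = 0;
}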

2. Synchronization clocks and clock correction

The concept of a synchronization clock: if audio or video content plays back correctly and completely, each position in the content corresponds to exactly one moment in time. Whatever position the audio or video has currently played to, there is a time that expresses it, and that is the synchronization clock's time. So the audio clock represents the position the audio has played to, and the video clock represents the position the video has played to.

Because audio and video are tracked separately, their progress can diverge, which shows up as different values on the two synchronization clocks. Making the two agree is exactly the audio-video synchronization problem.

With the synchronization-clock concept, synchronizing audio and video content reduces to something more precise: making the audio clock and the video clock show the same time.

One clock is chosen as the master clock, and the other clocks adjust their own time toward it: whichever is slow speeds up, whichever is fast slows down.

That is the logic in compute_target_delay: diff = get_clock(&is->vidclk) - get_master_clock(is); is the gap between the video clock and the master clock:
/* video is behind by more than the threshold: shorten the next frame's delay */
if (diff <= -sync_threshold)
    delay = FFMAX(0, delay + diff);
/* video is ahead by more than the threshold and the frame duration is long: extend by the full diff */
else if (diff >= sync_threshold && delay > AV_SYNC_FRAMEDUP_THRESHOLD)
    delay = delay + diff;
/* video is ahead by more than the threshold: double the delay */
else if (diff >= sync_threshold)
    delay = 2 * delay;

As for why all three cases are not simply handled with delay + diff, my guess is:

Adding diff to delay directly corrects the whole gap between the video and master clocks over the next frame, but that gap may already be large, and a one-step correction would look like: the picture visibly pauses while the sound keeps playing, until the video falls back into sync. With 2 * delay, each frame corrects by only delay, so the gap is closed gradually and the change may look smoother: picture and sound both keep playing while the picture gradually catches up with the sound until they are in sync. Why case 2 chooses the one-step correction while case 3 corrects gradually is harder to say. AV_SYNC_FRAMEDUP_THRESHOLD is 0.15, which corresponds to a frame rate of about 7; at that point the video is basically a slideshow, so I guess gradual correction no longer makes sense.

3. How the synchronization clock's time is obtained

Look at the clock implementation: get_clock reads the time, set_clock_at updates it.
The interesting line is return c->pts_drift + time - (time - c->last_updated) * (1.0 - c->speed);. Why is it written this way?

The last time the clock was updated, set_clock_at was called; the moment of that call is c->last_updated, so:

c->pts_drift + time = (c->pts - c->last_updated)+time;

Let time_diff = time - c->last_updated be the time elapsed since the last update. The whole expression can then be rewritten as:

c->pts + time_diff + (c->speed - 1) * time_diff

and combining the two time_diff terms:

c->pts + c->speed * time_diff

What we want is the media position at the current moment. The last known position is c->pts, and time_diff of real time has passed since then, during which the media has advanced by playback speed x real time, i.e. c->speed * time_diff. For example: if 10 s of real time pass at 2x playback speed, the video advances 20 s. Seen this way the expression is perfectly clear.

set_clock_speed also calls set_clock at the same moment, which guarantees that the speed has not changed since the last update; otherwise the calculation would be meaningless. That is roughly all there is to it, except for one point about how the synchronization clock is handled during seek, which we will get to with the seek problem.
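A simplified sketch of the clock, following the ffplay/ijkplayer structure with explanatory comments; treat it as an illustration of the formula rather than the exact source:

#include <math.h>
#include <libavutil/time.h>

typedef struct Clock {
    double pts;            /* media time at the last update                    */
    double pts_drift;      /* pts minus the wall-clock time of the last update */
    double last_updated;   /* wall-clock time of the last update               */
    double speed;          /* playback rate                                    */
    int    serial;         /* serial of the packet the pts came from           */
    int    paused;
    int   *queue_serial;   /* points at the owning PacketQueue's serial        */
} Clock;

static void set_clock_at_sketch(Clock *c, double pts, int serial, double time)
{
    c->pts          = pts;
    c->last_updated = time;
    c->pts_drift    = c->pts - time;   /* so that pts_drift + now == pts + elapsed */
    c->serial       = serial;
}

static double get_clock_sketch(Clock *c)
{
    if (*c->queue_serial != c->serial)
        return NAN;                    /* stale clock, e.g. right after a seek */
    if (c->paused)
        return c->pts;
    double time = av_gettime_relative() / 1000000.0;
    /* equivalent to: c->pts + c->speed * (time - c->last_updated) */
    return c->pts_drift + time - (time - c->last_updated) * (1.0 - c->speed);
}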

Seek handling

A seek jumps playback to the new position chosen on the progress bar. This disrupts the existing data flow, and parts of the player have to be re-established in order. The issues to handle include:

  • Releasing the buffered data, cleanly and completely, and starting over

  • Displaying the playback time

  • Maintaining the "loading" state, which affects what the UI shows

  • Discarding erroneous frames

Process

An external seek call goes to ijkmp_seek_to_l, which sends a message: ffp_notify_msg2(mp->ffplayer, FFP_REQ_SEEK, (int)msec);. The message is picked up and stream_seek is called, which sets seek_req to 1 and records the seek target in seek_pos. In the read function read_thread, when is->seek_req is true, seek handling begins, with these core steps:

  • ffp_toggle_buffering turns decoding off so the packet buffer stays still

  • avformat_seek_file performs the seek

  • On success, packet_queue_flush empties the buffer and flush_pkt is inserted as a marker

  • The current serial is recorded

Two points here are worth learning:

  • When I implemented seek, I opened another thread to call ffmpeg's seek method; here it is done directly on the read thread, so there is no need to wait for the read loop to finish

  • After a successful seek, the buffers are flushed

Because of this:

if (pkt == &flush_pkt)
    q->serial++;

This is where serial shows its significance: every seek increments serial by 1, so serial acts as a marker, and packets with the same serial belong to the same seek generation.

Now down in decoder_decode_frame:

  • Because the seek modification happens on the read thread while decoding happens on another thread, the seek may take effect at any point relative to the code here.

  • Call the block guarded by if (d->queue->serial == d->pkt_serial) code block 1, the loop while (d->queue->serial != d->pkt_serial) code block 2, the true branch of if (pkt.data == flush_pkt.data) code block 3, and its false branch code block 4 (see the sketch below).

  • If the seek happens before code block 2, block 2 keeps calling packet_queue_get_or_buffering until it reads the flush_pkt, so block 3 must execute and avcodec_flush_buffers empties the decoder.

  • If the seek happens after code block 2, only block 4 feeds this stale packet, but the next loop iteration goes back through block 2 and then block 3, so avcodec_flush_buffers still clears that packet's effect.

  • Combining the two cases: only packets read after the seek actually get decoded. Impressive!
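A minimal sketch of that structure, assuming ijkplayer's Decoder type and the packet_queue_get_or_buffering call shown earlier; audio/video specifics and error handling are stripped out:

/* Hedged sketch of decoder_decode_frame's serial handling (the four blocks
 * named above); this is an illustration, not the verbatim source. */
static int decoder_decode_frame_sketch(FFPlayer *ffp, Decoder *d, AVFrame *frame)
{
    for (;;) {
        AVPacket pkt;

        if (d->queue->serial == d->pkt_serial) {
            /* block 1: packets fed so far are current, so try to drain a frame */
            if (avcodec_receive_frame(d->avctx, frame) >= 0)
                return 1;
        }

        do {
            /* block 2: pull packets until one with the current serial appears;
             * stale pre-seek packets are dropped right here */
            if (packet_queue_get_or_buffering(ffp, d->queue, &pkt, &d->pkt_serial, &d->finished) < 0)
                return -1;
        } while (d->queue->serial != d->pkt_serial);

        if (pkt.data == flush_pkt.data) {
            /* block 3: hit the seek marker: reset the decoder's internal state */
            avcodec_flush_buffers(d->avctx);
        } else {
            /* block 4: a normal packet from the new serial: feed the decoder */
            avcodec_send_packet(d->avctx, &pkt);
            av_packet_unref(&pkt);
        }
    }
}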

What makes this powerful:

  • The seek can take effect at any moment and nothing goes wrong

  • The seek handling is done inside the decoding thread itself, which removes the need for locks, condition variables, and other cross-thread coordination, making it simpler and more stable. If the whole data stream is a river, flush_pkt is a buoy dropped into it: once the buoy passes, the water behind it has changed color. The stream carries its own upgrade marker instead of relying on a third party; for streamlined program logic this is the better way.

4. At playback time

For video, in video_refresh:

   if (vp->serial != is->videoq.serial) {
       frame_queue_next(&is->pictq);
       goto retry;
   }

For audio, in audio_decode_frame:

do {
    if (!(af = frame_queue_peek_readable(&is->sampq)))
        return -1;
    frame_queue_next(&is->sampq);
} while (af->serial != is->audioq.serial);

Frames with a stale serial are simply skipped.
Looking at the whole seek path, the most powerful piece is using serial to mark the data generation, which makes it unambiguous which data is old and which is new. The handling is then done on the thread that owns the data, rather than having another thread reach in and modify it, which removes thread-control and communication problems and improves stability.

Obtaining the playback time

The playback time comes from ijkmp_get_current_position: during a seek it returns the seek target time, otherwise ffp_get_current_position_l, whose core is get_master_clock's content time minus the stream start time is->ic->start_time.
During a seek the content position jumps dramatically, so how does the synchronization clock stay correct?

For both audio and video, the pts is frame->pts * av_q2d(tb), i.e. the content time converted into real-time units.
Then is->audio_clock = af->pts + (double)af->frame->nb_samples / af->frame->sample_rate;, so is->audio_clock is the content time at the end of the most recent audio frame handed to playback.
In the audio fill method, the audio clock is set with:

set_clock_at(&is->audclk, 
is->audio_clock - (double)(is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec - SDL_AoutGetLatencySeconds(ffp->aout), 
is->audio_clock_serial, 
ffp->audio_callback_time / 1000000.0);

Because is->audio_write_buf_size = is->audio_buf_size - is->audio_buf_index;, audio_write_buf_size is the part of the current frame not yet written out, so (double)(is->audio_write_buf_size) / is->audio_tgt.bytes_per_sec is how long that remaining data would take to play.

SDL_AoutGetLatencySeconds(ffp->aout) is the playing time of the data buffered in the upper layer; on iOS this means the AudioBuffers still waiting in the AudioQueue, i.e. how long they will take to finish playing.

The timeline looks like this, working backwards from the end of the current frame: subtract the remaining (unwritten) buffer time, then the upper layer's buffered time, and you arrive at the point that has just finished playing.

[moment just finished playing] [upper-layer buffer time] [remaining buffer time] [end of current frame]

So the second parameter is: the content time at the end of the current frame, minus the remaining buffer time, minus the upper-layer buffer time, i.e. the content time of what has just finished playing.

ffp->audio_callback_time is the time when the fill callback was invoked; the assumption is that the upper player calls the fill function immediately after it finishes playing a buffer, so ffp->audio_callback_time is the real-world moment at which that just-played point was reached.

The second and fourth parameters therefore correspond to each other.
Back to seek: after the seek completes, the first new frame to be played sets the synchronization clock's pts, adjusting the media time to the post-seek position. That creates a problem: the moment mp->seek_req is reset to 0 must come after the first new frame's set_clock_at; otherwise the clock has not yet moved to the new time, the seek flag is already cleared, and computing the current playback time from the clock would be wrong (on the UI the progress bar would flash back to the pre-seek position).

In fact this does not happen, because get_clock contains:

if (*c->queue_serial != c->serial)
    return NAN;

The serial mechanism really is a godsend, useful everywhere!
The audio and video clocks only update their serial when a frame is actually played, i.e. they take on the post-seek serial only when the first frame of new data plays. And c->queue_serial is a pointer: init_clock(&is->vidclk, &is->videoq.serial); shares it with the packetQueue's serial. So until the first new frame has played, c->queue_serial != c->serial holds and get_clock returns NAN. That means even if mp->seek_req has already been reset to 0, the reported position is still the seek target rather than something computed from the pts, so there is no flash-back.

Releasing resources on stop

The core of the shutdown path is the release method stream_close. The flow is:

1. Stop the read thread:

packet_queue_abort stops reads on the audio and video packetQueues,
abort_request is set to 1, then SDL_WaitThread waits for the thread to end.

2. Close the decoders with stream_component_close:

  • decoder_abort stops the packetQueue, unblocks the frameQueue, waits for the decoding thread to end, then empties the packetQueue

  • decoder_destroy destroys the decoder

  • The stream pointers are reset to empty

3. Stop the display thread: the display thread checks the stream pointers (video uses is->video_st, audio uses is->audio_st); since the previous step reset them to empty, the display thread exits. Here too SDL_WaitThread is used to wait for the thread to finish.

4. Empty the buffers: packet_queue_destroy destroys the packetQueues, frame_queue_destory destroys the frameQueues (a sketch of the whole order follows).
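A hedged sketch of that order, with the SDL calls abbreviated and details omitted; function names follow ijkplayer:

/* Hedged sketch of the shutdown order in stream_close: stop the producer
 * first, then the consumers, then free the buffers. */
static void stream_close_sketch(FFPlayer *ffp)
{
    VideoState *is = ffp->is;

    /* 1. stop the read thread: abort both packet queues, then join it */
    packet_queue_abort(&is->videoq);
    packet_queue_abort(&is->audioq);
    is->abort_request = 1;
    SDL_WaitThread(is->read_tid, NULL);          /* pthread_join-style wait */

    /* 2. close each decoder: abort its queue, wake the frame queue,
     *    join the decode thread, flush the packet queue, destroy the codec */
    stream_component_close(ffp, is->audio_stream);
    stream_component_close(ffp, is->video_stream);

    /* 3. the display thread sees video_st/audio_st reset to NULL and exits;
     *    wait for it as well */
    SDL_WaitThread(is->video_refresh_tid, NULL);

    /* 4. finally destroy the buffers themselves */
    packet_queue_destroy(&is->videoq);
    packet_queue_destroy(&is->audioq);
    frame_queue_destory(&is->pictq);             /* "destory" is how ijkplayer spells it */
    frame_queue_destory(&is->sampq);
}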

Compared with what I wrote, the places I need to change:

  • End threads with pthread_join-style waiting rather than a lock

  • Destroy decoders, buffers, etc. completely and rebuild them for the next playback, instead of reusing them

  • Audio is stopped by stopping the upper-layer player; the lower layer is passive and has no looping thread. Stopping video only requires waiting for its thread to end.

The core is the first point: use pthread_join-style waiting for threads to finish.

Handling poor networks

Playback automatically pauses and waits; whether to keep playing or to pause is controlled internally.
A unified architecture when using VTB

  • The frame buffer uses the custom Frame data structure, so frames of different kinds can all be unified under it.

  • Frame is the boundary object toward the lower layer; Vout is the boundary toward the upper layer. Above that sits the overlay, so the questions become how to convert a frame into an overlay and how to display an overlay. These two operations are provided by Vout's create_overlay and display_overlay.

  • With VTB, decoding produces data in a pixelBuffer, whereas FFmpeg decoding produces an AVFrame; this difference is absorbed in the different overlay-creation functions.

To sum up:

  • To connect two modules uniformly, both sides need to be wrapped in a unified model

  • Within the unified model there are still segments that differ per case

  • When data flows from A to B, the differing segment is provided by B, because B is the receiver and knows what result it needs

  • This keeps the overall process stable while the actual execution differs in places, so it can adapt to various specific needs.

Original author: FindCrt, the original link: https://www.jianshu.com/p/814f3a0ee997
