"Live video technology explain" series of six: delay optimization

At the end of June, Qiniu Cloud released a live video solution built on its real-time streaming network LiveNet, together with a complete end-to-end cloud solution. Many developers have shown strong interest in the details and usage scenarios of this network and solution.

Drawing on our practice with the real-time streaming network LiveNet and the live cloud solution, Qiniu will use eight articles to systematically introduce the key technologies behind every aspect of today's popular live video, helping live-video entrepreneurs gain a more comprehensive, in-depth understanding of live video technology and make better technology choices.

 

This series follows the outline below:

(1) Overview

(2) Capture

(3) Processing

(4) Encoding and Encapsulation

(5) Streaming and Transmission

(6) Delay Optimization

(7) Principles of Modern Players

(8) SDK Performance Test Model

In the previous article on streaming and transmission, we gave a detailed introduction to the critical "first kilometer" of live streaming. This article is Part 6 of the "Live Video Technology Explained" series: Delay Optimization.

Below we share our experience from many online and offline talks on how to optimize live streaming, explaining in detail where latency and stutter arise in each part of the pipeline and the principles behind the corresponding optimizations. A live audio/video system is in fact a complex engineering system: to achieve very low-latency live streaming, you need to master complex systems engineering and be very familiar with optimizing each component. Here we share a few simple and common tuning tips.

 

Encoding Optimization

 

1. Make sure the codec's lowest-latency settings are enabled. Codecs generally have a low-latency optimization switch, which is particularly effective for H.264. Many people may not know that an H.264 decoder normally buffers a certain number of video frames before display: for QCIF-resolution video (176 × 144) it generally buffers 16 frames, and for 720p video it buffers 5 frames. For the first frame, this is a large delay. If you are not encoding with H.264, make sure no B-frames are used: B-frames depend on both preceding and following frames for decoding, which has a large impact on delay.
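The numbers above translate directly into seconds of first-frame delay. A minimal sketch, using the 16-frame and 5-frame figures from the text and an assumed 25 fps frame rate:

```python
# Rough first-frame delay contributed by a decoder's internal frame buffer:
# if the decoder holds N frames before display, at f fps the first displayed
# frame waits roughly N / f seconds. Frame counts follow the text; 25 fps is
# an assumption for illustration.
def decoder_buffer_delay(buffered_frames: int, fps: float) -> float:
    """Seconds of extra first-frame delay from decoder-side frame buffering."""
    return buffered_frames / fps

qcif_delay = decoder_buffer_delay(16, 25.0)   # QCIF (176x144)
hd720_delay = decoder_buffer_delay(5, 25.0)   # 720p
print(qcif_delay, hd720_delay)                # 0.64 0.2
```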

 

2. The encoder's rate control usually introduces delay. The VBV (Video Buffering Verifier), generally known as the initialization delay or buffer size, acts as the bitstream buffer between the encoder and the decoder. Without affecting video quality, setting it as small as possible reduces delay.
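The VBV buffer's contribution to latency is easy to estimate. A sketch, assuming the worst case where a full buffer must drain at the nominal bitrate:

```python
def vbv_delay_seconds(vbv_buffer_bits: int, bitrate_bps: int) -> float:
    """Worst-case extra delay contributed by the VBV buffer: the time to
    drain a completely full buffer at the nominal stream bitrate."""
    return vbv_buffer_bits / bitrate_bps

# e.g. a 500 kbit VBV buffer on a 1 Mbps stream can add up to ~0.5 s
print(vbv_delay_seconds(500_000, 1_000_000))  # 0.5
```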

 

3. If you are only optimizing the first-frame delay, you can insert more keyframes into the video stream so that the client can start decoding as soon as it receives the stream. However, if you need to reduce accumulated delay during transmission, use as few keyframes (I-frames) as possible (i.e., a larger GOP): at the same video quality, more I-frames mean a higher bitrate, and the more network bandwidth the transmission needs, the more likely delay is to accumulate. This optimization may not be noticeable in a system with second-level delay, but it becomes very obvious in a system whose delay is in the hundreds of milliseconds. At the same time, encode audio with an AAC-LC codec where possible: while HE-AAC and HE-AAC v2 have higher coding efficiency, they take longer to encode, and since the audio data volume is much smaller than the video stream's, audio's impact on transmission delay is comparatively small.
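The keyframe-interval trade-off above can be made concrete: a viewer who joins mid-GOP must wait for the next I-frame before decoding can start. A sketch of that worst-case wait (GOP sizes and frame rate are illustrative assumptions):

```python
def worst_case_keyframe_wait(gop_frames: int, fps: float) -> float:
    """A viewer joining just after an I-frame waits almost a full GOP
    for the next one before decoding can begin."""
    return gop_frames / fps

print(worst_case_keyframe_wait(50, 25.0))   # 2.0  -> 2-second GOP
print(worst_case_keyframe_wait(250, 25.0))  # 10.0 -> 10-second GOP
```

So a small GOP helps first-frame delay while a large GOP saves bitrate; the right choice depends on which delay you are optimizing, as the text describes.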

 

4. Do not use MJPEG as the video compression format; at a minimum use MPEG-4 without B-frames (Simple Profile), and better still use H.264 Baseline Profile (x264 has a "-tune zerolatency" optimization switch). This simple optimization can reduce delay because it makes full-frame-rate video encoding possible at a lower bitrate.

 

5. If you use FFmpeg, reduce the values of the "-probesize" and "-analyzeduration" parameters. These two values control how much data, and for how long, FFmpeg probes the input to detect video frame information; the larger they are, the larger the encoding delay. In a live scenario, the video stream may not even need the analyzeduration parameter to be set.

 

6. Constant-bitrate (CBR) encoding can, to some extent, eliminate the impact of network jitter, while variable-bitrate (VBR) encoding can save some unnecessary network bandwidth and reduce a certain amount of delay. It is therefore recommended to use VBR encoding where possible.
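Pulling the encoding-side tips together, a low-latency FFmpeg invocation might look like the following. This is only a sketch: the input/output URLs are placeholders, and every numeric value (GOP size, bitrates, probe sizes) is an illustrative assumption, not a recommendation.

```python
# Build an illustrative low-latency FFmpeg command line as an argument list.
# All numeric values are assumptions for illustration; tune for your stream.
def build_low_latency_cmd(input_url: str, output_url: str) -> list:
    return [
        "ffmpeg",
        # Probe as little input as possible before starting (tip 5).
        "-probesize", "32",
        "-analyzeduration", "0",
        "-i", input_url,
        # H.264 baseline profile with x264's zero-latency tuning (tip 4).
        "-c:v", "libx264",
        "-profile:v", "baseline",
        "-tune", "zerolatency",
        # No B-frames (tip 1) and a modest GOP length (tip 3).
        "-bf", "0",
        "-g", "50",
        # Small VBV buffer to limit encoder/decoder buffering delay (tip 2).
        "-maxrate", "1000k",
        "-bufsize", "500k",
        # AAC-LC audio (tip 3); FFmpeg's native aac encoder produces AAC-LC.
        "-c:a", "aac",
        "-f", "flv", output_url,
    ]

cmd = build_low_latency_cmd("rtmp://example.com/live/in",
                            "rtmp://example.com/live/out")
print(" ".join(cmd))
```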

Transport Protocol Optimization

1. Between service nodes, and between service nodes and end nodes, use RTMP for transmission rather than the HTTP-based HLS protocol wherever possible; this reduces the overall transmission delay. HLS is mainly used for playback on end-user devices.
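The latency gap between HLS and RTMP follows directly from HLS's segmented design: a player typically buffers several whole segments before it starts. A sketch of that baseline latency (the three-segment figure is a common player default, assumed here for illustration):

```python
def hls_baseline_latency(segment_seconds: float,
                         segments_buffered: int = 3) -> float:
    """HLS delivers video in whole segments; a player that buffers N
    segments before playing starts at least N * segment_duration behind
    live, before any network delay is added."""
    return segment_seconds * segments_buffered

print(hls_baseline_latency(5.0))  # 15.0 -> 5-second segments, 3 buffered
```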

 

2. If end users play with RTMP, receive and transcode the stream at a node as close to the streaming (push) end as possible, so that the video stream transmitted onward is smaller than the original.

 

3. If necessary, replace TCP with a custom UDP-based protocol; eliminating retransmission on packet loss under weak networks can reduce delay. The main drawback is that a custom UDP-based protocol for transmitting and distributing video streams is not general enough, since CDN vendors support only standard transport protocols. Another drawback is that packet loss may cause dropped frames or visual artifacts (missing reference frames needed for decoding), which requires the custom UDP protocol to implement good loss control on top of UDP.
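Loss control in such a custom UDP protocol typically starts with gap detection on packet sequence numbers. A minimal sketch of the receiver side (names and structure are our own, not any particular protocol):

```python
def detect_losses(received_seqs, expected_start):
    """Given the sequence numbers seen so far, return the missing ones:
    candidates for a NACK / selective retransmission request, so only
    lost packets (e.g. those carrying reference frames) are resent."""
    if not received_seqs:
        return []
    highest = max(received_seqs)
    seen = set(received_seqs)
    return [s for s in range(expected_start, highest) if s not in seen]

# Packets 3 and 6 never arrived; the receiver would NACK exactly those.
print(detect_losses([1, 2, 4, 5, 7], expected_start=1))  # [3, 6]
```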

 

Transmission Network Optimization

1. We have previously introduced Qiniu Cloud's real-time streaming network LiveNet, a new type of transmission network whose nodes are self-organizing, optimized both for domestic transmission across multiple carriers' networks and for many overseas live-streaming needs.

 

2. Cache the current GOP on the server node and coordinate with the player side to optimize the time to first frame.
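Caching the current GOP server-side lets a newly connected player receive an I-frame immediately instead of waiting for the next one. A minimal sketch of the idea (the frame representation here is our own assumption):

```python
class GopCache:
    """Keep every frame since the most recent keyframe; a new viewer is
    primed with this cache so decoding can start on the very first frame
    delivered, instead of waiting for the next keyframe in the stream."""
    def __init__(self):
        self.frames = []

    def on_frame(self, frame_data, is_keyframe):
        if is_keyframe:
            self.frames = []       # a new GOP starts: drop the old one
        self.frames.append((frame_data, is_keyframe))

    def prime_new_viewer(self):
        return list(self.frames)   # begins with an I-frame (if one was seen)

cache = GopCache()
for data, key in [("I0", True), ("P1", False), ("I2", True), ("P3", False)]:
    cache.on_frame(data, key)
print(cache.prime_new_viewer())  # [('I2', True), ('P3', False)]
```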

 

3. On the server side, record the frame rate and bitrate of each video stream at every stage with second-level granularity, and monitor bitrate and frame-rate fluctuations in real time.

 

4. Clients (both streaming and playback) obtain the current best node in quasi-real time (every 5 seconds) by querying the server, and failed nodes and links are taken offline in near real time.

 

Streaming and Playback Optimization

1. Investigate the size of the system's network buffer on the sending side: the system may cache data before actually transmitting it, and tuning this parameter also requires finding a balance.
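One common instance of this sender-side system buffer is the socket send buffer, which can be inspected and tuned per socket. A sketch, assuming a TCP socket; the 64 KB value is an arbitrary illustration, and the OS may round, double, or cap whatever you request:

```python
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Request a smaller send buffer so that less data queues in the kernel
# before hitting the wire: less sender-side delay, but less slack for
# throughput, so this needs balancing, as the text notes.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print(effective)  # the kernel may report a rounded or doubled value
sock.close()
```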

 

2. The initial playback buffer on the player side also affects both the first-frame delay and the end-to-end delay. If you only optimize the first-frame delay, you can set the buffer to 0 and decode immediately when data arrives. But to eliminate the effects of network jitter, some buffering is necessary in weak network environments, so you need to find a balance between first-frame delay and playback stability, and adjust and optimize the buffer size accordingly.

 

3. A dynamic buffering strategy on the player side, which is an improved version of the player-side buffer control above. If you only choose between 0 and a fixed-size buffer, you will eventually settle on some fixed size, which is unfair to hundreds of millions of mobile Internet users: their differing network conditions mean no fixed buffer size is entirely appropriate. So consider a "dynamic buffering strategy": open the player with very little or even zero buffering, use the time taken to download the first piece of video to determine the buffer size for the next time slice, and monitor the network in real time during playback, adjusting the buffer size as you go. This keeps the first-frame time low while still doing as much as possible to eliminate the impact of network jitter.
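The dynamic strategy described above can be sketched as a simple feedback loop: each segment's download ratio (download time versus media duration) nudges the next buffer target up or down. The thresholds and bounds below are illustrative assumptions, not tuned values:

```python
def next_buffer_target(current_target_s, segment_duration_s,
                       download_time_s, min_s=0.0, max_s=5.0):
    """If a segment downloads barely faster than real time, grow the
    buffer; if it downloads with plenty of headroom, shrink the buffer
    back toward the minimum (keeping first-open delay low)."""
    ratio = download_time_s / segment_duration_s
    if ratio > 0.8:            # network barely keeping up: buffer more
        target = current_target_s + segment_duration_s
    elif ratio < 0.3:          # lots of headroom: buffer less
        target = current_target_s - 0.5 * segment_duration_s
    else:                      # comfortable: leave the target alone
        target = current_target_s
    return max(min_s, min(max_s, target))

# Start with zero buffering; a slow segment pushes the target up,
# and a fast segment later pulls it back down.
print(next_buffer_target(0.0, 2.0, 1.9))  # 2.0
print(next_buffer_target(2.0, 2.0, 0.2))  # 1.0
```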

 

4. A variable-bitrate playback strategy. Besides dynamically adjusting the buffer size, you can also use real-time network monitoring to dynamically adjust the playback bitrate, lowering the bitrate when network bandwidth is insufficient so that playback stays smooth and latency is reduced.
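Bitrate switching can be sketched the same way: from the measured bandwidth, pick the highest available rate that still leaves a safety margin. The bitrate ladder and margin below are assumptions for illustration:

```python
def pick_bitrate(measured_bw_kbps,
                 ladder_kbps=(400, 800, 1200, 2000),
                 safety=0.8):
    """Choose the highest rung of the bitrate ladder that fits within a
    safety fraction of the measured bandwidth; if nothing fits, fall
    back to the lowest rung to keep playback going."""
    budget = measured_bw_kbps * safety
    candidates = [r for r in ladder_kbps if r <= budget]
    return max(candidates) if candidates else min(ladder_kbps)

print(pick_bitrate(1600))  # 1200: budget 1280 fits 1200 but not 2000
print(pick_bitrate(300))   # 400: nothing fits, take the lowest rung
```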

 

The above are some of our techniques for low-latency optimization. In fact, when we optimize latency we care about more than just "low latency" itself: we want to keep latency as low as possible without hurting the rest of the user experience, so the topic touches on a much wider range of areas. Optimizing live video involves every aspect of the pipeline, and what we have shared here is only a part of what we have learned in practice. As our experience accumulates, we will share more optimization techniques, for live streaming and even video on demand, both online and offline.

Origin blog.csdn.net/ai2000ai/article/details/94721722