[Audio and Video Processing] How Live Streaming Works: Live CDNs, Push/Pull Streams, and Streaming Media Services

Hi everyone, and welcome to the Stop Refactoring channel.

In this issue we discuss live streaming technology.

We will introduce how live streaming works, the role of streaming media services, push/pull streaming, live CDNs, and more.

One thing to clarify up front: by live streaming we mean one-to-many broadcasting, the kind found on ordinary live streaming platforms; video conferencing is a different scenario.

We will cover the following, in order:

1. How live streaming works

2. Acquiring the live source data

3. Live transcoding

4. Live stream output and live CDNs

How Live Streaming Works

Broadly speaking, live streaming works the same way as transcoding a video file.

As introduced in previous issues on how video transcoding works, the complete pipeline is a streaming process: decapsulation -> decoding -> processing -> encoding -> encapsulation.
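
To make that pipeline concrete, here is a minimal sketch of the demux -> decode -> process -> encode -> mux loop using the PyAV library (Python bindings for FFmpeg). The input and output URLs are hypothetical placeholders, and real code would also handle audio and errors:

```python
import av  # PyAV: Python bindings for FFmpeg (pip install av)

# Demux the live source (placeholder ingest URL).
inp = av.open("rtmp://ingest.example.com/live/source")

# Open the output and set up an H.264 encoder (placeholder origin URL).
out = av.open("rtmp://origin.example.com/live/out", "w", format="flv")
video = out.add_stream("h264", rate=30)
video.width, video.height = 1280, 720
video.pix_fmt = "yuv420p"

for frame in inp.decode(video=0):       # decapsulate + decode
    frame = frame.reformat(1280, 720)   # "process" step: scaling as a stand-in
    for packet in video.encode(frame):  # encode
        out.mux(packet)                 # re-encapsulate

for packet in video.encode():           # flush the encoder
    out.mux(packet)
out.close()
inp.close()
```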

The biggest difference between live streaming and file transcoding is the encapsulation format: live streaming requires special handling at the two ends of the pipeline, receiving the live source and outputting the live stream.

In general, a live streaming system needs to receive video data from the live source over the network, and it also needs to send the processed video data back out over the network.

Therefore, in addition to the video transcoding program, a live streaming system also needs two streaming media services.

One is used to receive the live source data; the transcoding program pulls the video stream from this service for processing.

The other is used by viewers to pull the video stream for playback; the transcoding program continuously pushes the processed video data to this service.

The specific streaming server software depends on the video streaming protocol. Of course, if the ingest (push) protocol and the viewing protocol are the same, a single streaming media service is enough.

By the way, sending and receiving video stream data is commonly called pushing and pulling streams. You may notice that the same streaming service can be both pushed to and pulled from, and the push and pull addresses may even be identical, as the sketch below shows.
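
As a tiny illustration (assuming ffmpeg and ffplay are installed; the URL and demo.mp4 are placeholders), the publisher and a viewer can use the exact same address on a typical RTMP server:

```python
import subprocess

URL = "rtmp://stream.example.com/live/room1"  # placeholder address

# Push: send a local file to the streaming service in real time.
pusher = subprocess.Popen(
    ["ffmpeg", "-re", "-i", "demo.mp4", "-c", "copy", "-f", "flv", URL])

# Pull: a viewer (or the transcoder) plays back the very same URL.
viewer = subprocess.Popen(["ffplay", URL])
```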

In essence, a streaming media service is a relay station for video stream data. It keeps a portion of the stream in memory in real time, and as time passes the older data is cyclically overwritten.

Therefore, in typical scenarios, the working principle of live streaming can be simplified to: live stream acquisition -> live transcoding -> live stream output.

Live Source Data Acquisition

Next, let's look at live source acquisition in more detail. The live source data is generally transmitted over the network, so a streaming media service is usually needed as a relay station for the video data. In practice, however, the details differ depending on the protocol and the application scenario.

For example, in the typical live-platform scenario, the streamer pushes the video stream to the platform's streaming media service. The protocol is usually RTMP, and the server software can be SRS, Nginx with the nginx-rtmp-module, etc.
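
Here is a hedged sketch of what the streamer-side push can look like with ffmpeg, using its built-in test pattern as the source; the ingest URL and stream key are placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-re",                                                 # read at native frame rate
    "-f", "lavfi", "-i", "testsrc=size=1280x720:rate=30",  # synthetic test video
    "-c:v", "libx264", "-preset", "veryfast",
    "-pix_fmt", "yuv420p",
    "-f", "flv",                                           # RTMP carries an FLV container
    "rtmp://ingest.example.com/live/STREAM_KEY",           # placeholder ingest URL
])
```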


In a rebroadcasting (relay) scenario, our own streaming media service does not need to take part in reception: the video transcoding software can pull the other party's video stream directly. In effect, it pulls the stream straight from the streaming media service of the other party's system.

The protocol here may be RTMP, HLS, HTTP-FLV, RTSP, etc. Whatever the protocol, the streaming media service is provided by the other party's system; our own transcoding software only has to support that protocol.
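
A sketch of such a relay, assuming the remote stream happens to be HLS carrying H.264/AAC (so stream copy works) and using placeholder URLs:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "https://other-platform.example.com/live/stream.m3u8",  # pull the remote stream
    "-c", "copy",                               # relay as-is, no processing
    "-f", "flv",
    "rtmp://origin.example.com/live/relay",     # our own output service
])
```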


Another scenario is file-based live streaming, often called recorded playout in the business. Here the live source is simply a video file. No streaming media service is needed on the receiving side; the video transcoding software just reads the file slowly, in step with its playback time, as in the sketch below.
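
A sketch of file playout with ffmpeg, where -re throttles reading to real time so the file behaves like a live source; the file name and URL are placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-re",                        # read the file at its natural playback speed
    "-stream_loop", "-1",         # optional: loop forever, carousel-style
    "-i", "recorded_show.mp4",    # the "live source" is just a file
    "-c", "copy",
    "-f", "flv",
    "rtmp://origin.example.com/live/replay",
])
```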


Live Transcoding

Live transcoding is still just video transcoding; this is where you can add watermarks, generate HD/smooth renditions, limit the bitrate, record the live stream, and so on.

In fact, a live transcoding program is not much different from a file transcoding program. Some simple functions, such as HD/smooth renditions and bitrate limiting, can even be achieved through the configuration of some streaming server software.
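
As one possible sketch of such processing with ffmpeg: overlay a watermark image, scale down to a "smooth" 480p rendition, and cap the bitrate. All file names and URLs are placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "rtmp://ingest.example.com/live/source",  # pull from the ingest service
    "-i", "logo.png",                               # watermark image
    "-filter_complex",
    "[0:v][1:v]overlay=10:10,scale=854:480",        # watermark, then downscale
    "-c:v", "libx264",
    "-b:v", "800k", "-maxrate", "800k", "-bufsize", "1600k",  # bitrate cap
    "-c:a", "aac", "-b:a", "96k",
    "-f", "flv",
    "rtmp://origin.example.com/live/out_480p",      # the "smooth" rendition
])
```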


By the way, if you don't need any transcoding, and you have no security or permission requirements, viewers can pull the stream directly from the streaming service to watch; in that case the push URL and the viewing URL are generally the same.

The real value of developing your own live transcoding program lies in advanced features: live countdowns, automatic frame filling when the signal drops, program scheduling/carousels, picture-in-picture, and so on. This part is the technical core of a live streaming system; after all, streaming server software is generally off-the-shelf and tied to a protocol.

Of course, implementing these features is quite complicated, but they are the heart of audio and video processing. We will discuss them in detail, topic by topic, in future issues, so we won't expand on them here.

Live Stream Output and Live CDNs

After processing, the video data needs to be output to a streaming media service, from which viewers pull the stream to watch.

The viewing protocols are generally HTTP-FLV, HLS, etc.; when low latency matters, RTMP, WebRTC, etc. are used instead. Note that RTMP playback is no longer supported by mainstream browsers (Flash has been disabled).


Viewers can also pull the stream directly from our own streaming service, but this consumes a lot of bandwidth. Video bitrate and bandwidth share the same unit: if the stream's bitrate is 2 Mbps and the server's bandwidth is 100 Mbps, then in theory the server can support 50 concurrent viewers.
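
That estimate is simple division; as a toy calculation:

```python
bitrate_mbps = 2       # bitrate of one viewer's stream, in Mbps
bandwidth_mbps = 100   # server uplink bandwidth, in Mbps

max_viewers = bandwidth_mbps // bitrate_mbps
print(max_viewers)     # 50 concurrent viewers, in theory
```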

With a large audience, you need a live CDN. A live CDN can itself be viewed as a streaming media service, but one with many edge nodes to spread the request load.

When a live CDN is used, the video transcoding software can output the stream directly to the CDN, usually by pushing over RTMP.

There are also CDNs that actively pull the stream from your origin, but these are not recommended: multiple CDN servers may fetch from the origin (back-to-source) at the same time, which makes bandwidth needs hard to estimate accurately.

In addition, live CDNs generally handle viewing-protocol conversion automatically, providing playback addresses for protocols such as RTMP, HTTP-FLV, WebRTC, and HLS.


However, live CDNs generally do not provide transcoding services such as HD/smooth renditions; those require a separate live transcoding cloud service or your own video transcoding software.

This is because viewing-protocol conversion is really just re-encapsulation: the stream is decapsulated and immediately re-encapsulated, which costs little performance. Transcoding, by contrast, is very performance-intensive, which is why video transcoding cloud services are generally billed by duration.
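
To see why protocol conversion is cheap, here is a sketch with ffmpeg: the RTMP stream is re-encapsulated into HLS segments with stream copy, so no decoding or encoding happens. URLs and paths are placeholders:

```python
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "rtmp://origin.example.com/live/out",  # pull the live stream
    "-c", "copy",                # copy codecs: decapsulate + re-encapsulate only
    "-f", "hls",
    "-hls_time", "2",            # 2-second segments
    "-hls_list_size", "5",       # rolling playlist of 5 segments
    "/var/www/live/out.m3u8",    # placeholder web-served path
])
```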

Summary

This issue introduced how live streaming works and mentioned many protocols along the way. We will cover the commonly used live streaming protocols in detail in the next issue.
