WebRTC protocol streaming

Cloud Live provides the web streaming SDK TXLivePusher, which pushes audio and video captured by the browser to the live streaming server over the WebRTC protocol. It currently supports capture from the camera, the microphone, screen sharing, local media files, and custom sources. The captured content can be mixed into a single stream locally before being pushed to the backend server.

Notice

When pushing streams over WebRTC, each push domain is limited to 1,000 concurrent streams by default. If you need to exceed this limit, contact us by submitting a ticket.

Basic knowledge

Before integrating, you should be familiar with the following:

Assembling the streaming URL

When using the Tencent Cloud live streaming service, the push URL must follow the standard Tencent Cloud live push URL format, webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx, which consists of four parts: the push domain, the AppName, the StreamName, and the authentication parameters.

The authentication parameters are optional; if you need to prevent hotlinking, enable push authentication.

Browser support

Web streaming is implemented on top of WebRTC and therefore depends on operating system and browser support for WebRTC. The latest versions of Chrome, Edge, Firefox, and Safari all support web streaming.

Notice:

Browser capture of audio and video is restricted in mobile (H5) environments. For example, mobile browsers do not support screen sharing, and capturing from the user's camera on iOS requires iOS 14.3 or later.

Integration steps

Step 1: Page Preparation

Include the initialization script on any page that needs live streaming.

<script src="https://video.sdk.qcloudecdn.com/web/TXLivePusher-2.1.0.min.js" charset="utf-8"></script>

Note

The script must be included in the body of the HTML document; including it in the head will cause an error.

Step 2: Place the Container in the HTML

Add a player container to the part of the page where the local audio and video should be displayed: place a div and give it an id, such as local_video. The locally captured video will be rendered in this container, and its size can be controlled through the div's CSS styles. Sample code:

<div id="local_video" style="width:100%;height:500px;display:flex;align-items:center;justify-content:center;"></div>

Step 3: Live Streaming

Create a streaming SDK instance: create an instance through the global object TXLivePusher; all subsequent operations are performed through this instance.

const livePusher = new TXLivePusher();

Specify the local video player container: the audio and video captured by the browser will be rendered into this div.

livePusher.setRenderView('local_video');

Note

The video element generated by setRenderView is unmuted by default. If the sound captured from the microphone is played back, echo may occur; mute the video element to avoid this:

livePusher.videoView.muted = true;

Set the audio and video capture quality: set the quality before capturing the streams. If the preset quality parameters do not meet your requirements, individual settings can be customized.

// Set the video quality
livePusher.setVideoQuality('720p');
// Set the audio quality
livePusher.setAudioQuality('standard');
// Customize the frame rate
livePusher.setProperty('setVideoFPS', 25);

Start capturing streams: capture from camera devices, microphone devices, screen sharing, local media files, and custom streams is currently supported. Once the audio and video streams are captured successfully, the locally captured images start playing in the player container.

// Turn on the camera
livePusher.startCamera();
// Turn on the microphone
livePusher.startMicrophone();

Start pushing the live stream: pass in the cloud live push URL to start pushing.

livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');

Note

Before pushing, make sure the audio and video streams have been captured successfully, or the call to the push API will fail. To start pushing automatically once capture completes, wait until both the video stream and the audio stream have been captured before calling startPush.

var hasVideo = false;
var hasAudio = false;
var isPush = false;
livePusher.setObserver({
 onCaptureFirstAudioFrame: function() {
   hasAudio = true;
   if (hasVideo && !isPush) {
     isPush = true;
     livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');
   }
 },
 onCaptureFirstVideoFrame: function() {
   hasVideo = true;
   if (hasAudio && !isPush) {
     isPush = true;
     livePusher.startPush('webrtc://domain/AppName/StreamName?txSecret=xxx&txTime=xxx');
   }
 }
});
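The flag bookkeeping above can be factored into a small helper that fires exactly once after both first frames have been seen. This is a plain-JavaScript sketch, not part of the SDK:

```javascript
// Sketch: a gate that invokes onReady exactly once, after both the first
// audio frame and the first video frame have been observed.
function createFirstFrameGate(onReady) {
  let hasAudio = false;
  let hasVideo = false;
  let fired = false;

  function check() {
    if (hasAudio && hasVideo && !fired) {
      fired = true; // guard so onReady runs only once
      onReady();
    }
  }

  return {
    audioArrived: function() { hasAudio = true; check(); },
    videoArrived: function() { hasVideo = true; check(); }
  };
}

// Usage with the observer above (push URL shortened):
// const gate = createFirstFrameGate(() => livePusher.startPush('webrtc://...'));
// livePusher.setObserver({
//   onCaptureFirstAudioFrame: gate.audioArrived,
//   onCaptureFirstVideoFrame: gate.videoArrived
// });
```
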

Stop live streaming:

livePusher.stopPush();

Stop capturing audio and video streams:

// Turn off the camera
livePusher.stopCamera();
// Turn off the microphone
livePusher.stopMicrophone();

Advanced features

Compatibility

The SDK provides a static method for checking the browser's WebRTC compatibility.

TXLivePusher.checkSupport().then(function(data) {
  // Whether WebRTC is supported
  if (data.isWebRTCSupported) {
    console.log('WebRTC Support');
  } else {
    console.log('WebRTC Not Support');
  }
  // Whether H.264 encoding is supported
  if (data.isH264EncodeSupported) {
    console.log('H264 Encode Support');
  } else {
    console.log('H264 Encode Not Support');
  }
});

Callback event notifications

The SDK provides callback event notifications. Set an observer to receive the SDK's internal status information and WebRTC-related statistics.

livePusher.setObserver({
  // Push warning messages
  onWarning: function(code, msg) {
    console.log(code, msg);
  },
  // Push connection status
  onPushStatusUpdate: function(status, msg) {
    console.log(status, msg);
  },
  // Push statistics
  onStatisticsUpdate: function(data) {
    console.log('video fps is ' + data.video.framesPerSecond);
  }
});

Device management

The SDK provides a device management instance, TXDeviceManager, for operations such as getting the device list and switching devices.

var deviceManager = livePusher.getDeviceManager();
// Get the device list
deviceManager.getDevicesList().then(function(data) {
  data.forEach(function(device) {
    console.log(device.deviceId, device.deviceName);
  });
});
// Switch the camera device
deviceManager.switchCamera('camera_device_id');
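Building on the calls above, a small helper can look up a device id by name. This sketch uses only the deviceId and deviceName fields shown above; note that the returned list may include both audio and video devices, so choose the name fragment carefully:

```javascript
// Sketch: find the id of a device whose name contains the given substring.
// Operates on the { deviceId, deviceName } objects returned by getDevicesList.
function findDeviceId(devices, nameFragment) {
  for (const device of devices) {
    if (device.deviceName && device.deviceName.indexOf(nameFragment) !== -1) {
      return device.deviceId;
    }
  }
  return null; // no matching device
}

// Usage (hypothetical device names):
// deviceManager.getDevicesList().then(function(devices) {
//   const id = findDeviceId(devices, 'HD Pro');
//   if (id) deviceManager.switchCamera(id);
// });
```
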

Origin blog.csdn.net/m0_60259116/article/details/132513099