Using real-time audio and video in WeChat Mini Programs

The live-pusher and live-player components provided by the WeChat Mini Program platform are mainly used in two scenarios: live video broadcasting, and adding real-time audio and video features to your Mini Program. Both components support a mode called RTC, through which real-time video calling can be implemented in a Mini Program. In this article we introduce how to use the live-pusher component, the live-player component, and Tencent Cloud's real-time audio and video product to build a video call Mini Program.

Real-time audio and video product architecture

Tencent Real-Time Communication (TRTC) builds on Tencent's many years of accumulated experience in network and audio/video technology. It focuses on two solutions, low-latency interactive live streaming and multi-person audio/video calls, opens these services to developers, and aims to help developers quickly build low-cost, low-latency, high-quality interactive audio and video applications.
[Figure: TRTC product architecture]

Product Features

  • Video call
    Video calls between two or more people, with support for 720p and 1080p high-definition video. A single room supports up to 300 concurrent online users, of whom up to 50 can have their cameras on at the same time.
  • Voice call
    Voice calls between two or more people, with 48 kHz sampling and dual-channel audio. A single room supports up to 300 concurrent online users, of whom up to 50 can have their microphones on at the same time.
  • Video interactive live streaming
    Supports co-anchoring between hosts and audience members, as well as cross-room (cross-live-room) PK between hosts. Taking and leaving the mic is seamless, with no waiting during the switch, and host-side latency is under 300 ms. There is no limit on how many users in a room may take the mic, but at most 50 can be on the mic simultaneously. In low-latency live mode, up to 100,000 viewers can watch concurrently, with playback latency as low as 1000 ms; in CDN relayed live mode, the number of viewers is unlimited.
  • Voice interactive live streaming
    Supports voice co-anchoring between hosts and audience members, as well as cross-room (cross-live-room) PK between hosts. Taking and leaving the mic is seamless, with no waiting during the switch, and host-side latency is under 300 ms. There is no limit on how many users in a room may take the mic, but at most 50 can be on the mic simultaneously. In low-latency live mode, up to 100,000 listeners can tune in concurrently, with playback latency as low as 1000 ms; in CDN relayed live mode, the number of listeners is unlimited.

Instructions for using the real-time audio and video SDK

Before using the Tencent Real-Time Communication SDK, you need to register a Tencent Cloud account and complete real-name verification. After registering, create a real-time audio and video application in the TRTC product; Tencent Cloud will assign an SDKAppID and a corresponding UserSig key to the application, and both parameters are used in the code later. To create the application, log in to the TRTC console and select [Application Management] -> [Create Application]:
[Screenshot: creating an application in the TRTC console]
After the application is created successfully, you can see its SDKAppID in the application list:
[Screenshot: SDKAppID shown in the application list]
Click [Function Configuration] for the application; on the page that opens, you can find the UserSig key under [Quick Start]:
[Screenshot: UserSig key under Quick Start]
In addition, before coding you need to download the trtc-wx.js file, which is used in the Mini Program's video call pages below. Download it from the following URL:
https://web.sdk.qcloud.com/trtc/miniapp/download/trtc-wx.zip

API usage guidelines

The following figure, taken from the Tencent Real-Time Communication SDK documentation, shows the API call sequence for a real-time audio and video Mini Program:
[Figure: API call sequence diagram for the TRTC Mini Program SDK]

Step 1: Initialize TRTC

The trtc-wx package exports a class named TRTC. Instantiate this class in the page's onLoad function, create a Pusher at the same time, and listen for the events thrown by TRTC.

onLoad(options) {
    // Initialize the TRTC instance
    this.TRTC = new TRTC(this)
    // Create the Pusher
    const pusher = this.TRTC.createPusher({ beautyLevel: 9 })
    // Listen for TRTC events
    this.bindTRTCRoomEvent()
  },

Use the on(EventCode, handler, context) method of the TRTC class to listen for the events thrown by TRTC, for example:

// Listen for TRTC events
  bindTRTCRoomEvent() {
    const TRTC_EVENT = this.TRTC.EVENT
    this.TRTC.on(TRTC_EVENT.ERROR, (event) => {
      console.log('* room ERROR', event)
    })
  }

Step 2: Start streaming

First, call the enterRoom method to enter the TRTC room; after that, call the start() method to begin pushing the local stream.

onLoad(options) {
    this.TRTC = new TRTC(this)
    const pusher = this.TRTC.createPusher({ beautyLevel: 9 })
    this.bindTRTCRoomEvent()
    // Enter the room
    this.TRTC.enterRoom(this.data._rtcConfig)
    // Start pushing the local stream
    this.TRTC.getPusherInstance().start()
  }

After you call the enterRoom(params) method, the system automatically creates a new room if the requested room does not exist; otherwise you enter the existing room directly. The parameters of enterRoom(params) are as follows:

  • sdkAppID: the SDKAppID of your Tencent Cloud application
  • userID: the user ID with which you enter the room
  • userSig: the UserSig issued by your server
  • roomID: the numeric ID of the room you want to enter; if the room does not exist, the system creates it automatically
  • strRoomID: the string room ID you want to enter; if this parameter is set, it takes priority over roomID
  • scene: required parameter, the usage scenario:
    rtc: real-time call, using high-quality lines; the number of users in one room should not exceed 300.
    live: live streaming mode, using mixed lines; supports up to 100,000 concurrent online users in a single room (keep the number of users on the mic within 50).
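As a sketch, an enterRoom configuration might look like the following. All concrete values are placeholders (assumptions for illustration); in a real page, the SDKAppID comes from your console, the UserSig from your server, and the room/user IDs from your own system:

```javascript
// Assemble the enterRoom() configuration. Every concrete value below is a
// placeholder -- substitute your own SDKAppID, a UserSig computed by your
// server, and your own room/user numbering.
const rtcConfig = {
  sdkAppID: 1400000000,   // assigned when the TRTC application is created
  userID: 'user_1001',    // your user's ID
  userSig: 'eJyrVgr...',  // truncated placeholder; must be issued by your server
  roomID: 2333,           // numeric room ID; auto-created if it does not exist
  scene: 'rtc',           // 'rtc' for calls, 'live' for interactive live streaming
}

// Inside the page (after bindTRTCRoomEvent()), you would then call:
// this.setData({ pusher: this.TRTC.enterRoom(rtcConfig) }, () => {
//   this.TRTC.getPusherInstance().start()
// })
```

Note that scene is required: choosing 'rtc' selects the high-quality call lines, while 'live' selects the mixed lines described above.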

Step 3: Process the remote stream

When you receive a new remote video stream, you can start playing it by setting that player's muteVideo state to false. You need to pass in the player's id; after the update completes, the updated playerList is returned to you, and you only need to sync it to the page's playerList.

this.TRTC.on(TRTC_EVENT.REMOTE_VIDEO_ADD, (event) => {
  console.log('* room REMOTE_VIDEO_ADD', event)
  const { player } = event.data
  this.setPlayerAttributesHandler(player, { muteVideo: false })
})

When you are notified that a remote video stream has been removed, you can unsubscribe from it by setting that player's muteVideo to true.

this.TRTC.on(TRTC_EVENT.REMOTE_VIDEO_REMOVE, (event) => {
  console.log('* room REMOTE_VIDEO_REMOVE', event)
  const { player } = event.data
  this.setPlayerAttributesHandler(player, { muteVideo: true })
})

Step 4: End the audio and video call

Call exitRoom() to exit the room. On exit, the state machine's state needs to be reset and synced to the page, to prevent stale state the next time a room is entered.

_hangUp() {
    const result = this.TRTC.exitRoom()
    this.setData({
      pusher: result.pusher,
      playerList: result.playerList,
    })
    wx.navigateBack({ delta: 1 })
  }

Step 5: Control whether to upload local audio and video streams

If you need to change attributes such as enable-mic and enable-camera on the live-pusher tag, call the setPusherAttributes function to change the push state managed by the state machine, then write the updated state it returns back to the page.

// Publish the local audio stream
this.setData({
  pusher: this.TRTC.setPusherAttributes({ enableMic: true })
})
// Publish the local video stream
this.setData({
  pusher: this.TRTC.setPusherAttributes({ enableCamera: true })
})

Real-time audio and video code example

In the following code, we implement a Mini Program with a video call feature. On the entry page (home page), we assign both parties of the call the same room number, generate a random user ID for the current user, and then jump to the call page (room page) to make the video call. The code below is the room page:

Sample code: room.wxml

<view class="template-1v1">
  <view wx:for="{{playerList}}" wx:key="streamID" wx:if="{{item.src && (item.hasVideo || item.hasAudio)}}" class="view-container player-container {{item.isVisible?'':'none'}}">
    <live-player
            class="player"
            id="{{item.streamID}}"
            data-userid="{{item.userID}}"
            data-streamid="{{item.streamID}}"
            data-streamtype="{{item.streamType}}"
            src="{{item.src}}"
            mode="RTC"
            autoplay="{{item.autoplay}}"
            mute-audio="{{item.muteAudio}}"
            mute-video="{{item.muteVideo}}"
            orientation="{{item.orientation}}"
            object-fit="{{item.objectFit}}"
            background-mute="{{item.enableBackgroundMute}}"
            min-cache="{{item.minCache}}"
            max-cache="{{item.maxCache}}"
            sound-mode="{{item.soundMode}}"
            enable-recv-message="{{item.enableRecvMessage}}"
            auto-pause-if-navigate="{{item.autoPauseIfNavigate}}"
            auto-pause-if-open-native="{{item.autoPauseIfOpenNative}}"
            debug="{{debug}}"
            bindstatechange="_playerStateChange"
            bindfullscreenchange="_playerFullscreenChange"
            bindnetstatus="_playerNetStatus"
            bindaudiovolumenotify="_playerAudioVolumeNotify"/>
  </view>
  <view class="view-container pusher-container {{pusher.isVisible?'':'none'}} {{playerList.length===0? 'fullscreen':''}}">
    <live-pusher
            class="pusher"
            url="{{pusher.url}}"
            mode="{{pusher.mode}}"
            autopush="{{pusher.autopush}}"
            enable-camera="{{pusher.enableCamera}}"
            enable-mic="{{pusher.enableMic}}"
            muted="{{!pusher.enableMic}}"
            enable-agc="{{pusher.enableAgc}}"
            enable-ans="{{pusher.enableAns}}"
            enable-ear-monitor="{{pusher.enableEarMonitor}}"
            auto-focus="{{pusher.enableAutoFocus}}"
            zoom="{{pusher.enableZoom}}"
            min-bitrate="{{pusher.minBitrate}}"
            max-bitrate="{{pusher.maxBitrate}}"
            video-width="{{pusher.videoWidth}}"
            video-height="{{pusher.videoHeight}}"
            beauty="{{pusher.beautyLevel}}"
            whiteness="{{pusher.whitenessLevel}}"
            orientation="{{pusher.videoOrientation}}"
            aspect="{{pusher.videoAspect}}"
            device-position="{{pusher.frontCamera}}"
            remote-mirror="{{pusher.enableRemoteMirror}}"
            local-mirror="{{pusher.localMirror}}"
            background-mute="{{pusher.enableBackgroundMute}}"
            audio-quality="{{pusher.audioQuality}}"
            audio-volume-type="{{pusher.audioVolumeType}}"
            audio-reverb-type="{{pusher.audioReverbType}}"
            waiting-image="{{pusher.waitingImage}}"
            debug="{{debug}}"
            bindstatechange="_pusherStateChangeHandler"
            bindnetstatus="_pusherNetStatusHandler"
            binderror="_pusherErrorHandler"
            bindbgmstart="_pusherBGMStartHandler"
            bindbgmprogress="_pusherBGMProgressHandler"
            bindbgmcomplete="_pusherBGMCompleteHandler"
            bindaudiovolumenotify="_pusherAudioVolumeNotify"/>
    <view class="loading" wx:if="{{playerList.length === 0}}">
      <view class="loading-img">
        <image src="../../../static/images/loading.png" class="rotate-img"></image>
      </view>
      <view class="loading-text">Waiting for answer...</view>
    </view>
  </view>
  <view class="handle-btns">
    <view class="btn-normal" bindtap="_pusherAudioHandler">
      <image class="btn-image" src="{{pusher.enableMic? '../../../static/images/audio-true.png': '../../../static/images/audio-false.png'}}"></image>
    </view>
    <view class="btn-normal" bindtap="_pusherSwitchCamera">
      <image class="btn-image" src="../../../static/images/switch.png"></image>
    </view>
    <view class="btn-normal" bindtap="_setPlayerSoundMode">
      <image class="btn-image" src="{{playerList[0].soundMode === 'ear' ? '../../../static/images/speaker-false.png': '../../../static/images/speaker-true.png'}}"></image>
    </view>
  </view>
  <view class="bottom-btns">
    <view class="btn-normal" data-key="beautyLevel" data-value="9|0" data-value-type="number" bindtap="_setPusherBeautyHandle">
      <image class="btn-image" src="{{pusher.beautyLevel == 9 ? '../../../static/images/beauty-true.png': '../../../static/images/beauty-false.png'}}"></image>
    </view>
    <view class="btn-hangup" bindtap="_hangUp">
      <image class="btn-image" src="../../../static/images/hangup.png"></image>
    </view>
  </view>
</view>

Sample code: room.js

import TRTC from '../../../static/trtc-wx'
Page({
  data: {
    _rtcConfig: {
      sdkAppID: '', // the SDKAppID assigned after activating the service and creating an application
      roomID: '',   // the room number, assigned by your own system
      userID: '',   // the user ID, assigned by your own system
      userSig: '',  // the identity signature, which acts like a login password
    },
    roomID: 0,
    pusher: null,
    playerList: [],
  },
  onLoad(options) {
    this.TRTC = new TRTC(this)
    const pusher = this.TRTC.createPusher({ beautyLevel: 9 })
    this.setData({
      _rtcConfig: {
        userID: options.userID,
        sdkAppID: options.sdkAppID,
        userSig: options.userSig,
        roomID: options.roomID,
      },
      pusher: pusher.pusherAttributes
    })
    // Listen for TRTC events
    this.bindTRTCRoomEvent()
    // Enter the room
    this.setData({
      pusher: this.TRTC.enterRoom(this.data._rtcConfig),
    }, () => {
      this.TRTC.getPusherInstance().start()
    })
  },
  // Set pusher attributes
  setPusherAttributesHandler(options) {
    this.setData({
      pusher: this.TRTC.setPusherAttributes(options),
    })
  },
  // Set the attributes of a single player
  setPlayerAttributesHandler(player, options) {
    this.setData({
      playerList: this.TRTC.setPlayerAttributes(player.streamID, options),
    })
  },
  // Listen for TRTC events
  bindTRTCRoomEvent() {
    const TRTC_EVENT = this.TRTC.EVENT
    this.TRTC.on(TRTC_EVENT.ERROR, (event) => {
      console.log('* room ERROR', event)
    })
    // Entered the room successfully
    this.TRTC.on(TRTC_EVENT.LOCAL_JOIN, (event) => {
      console.log('* room LOCAL_JOIN', event)
      this.setPusherAttributesHandler({ enableCamera: true })
      this.setPusherAttributesHandler({ enableMic: true })
    })
    // Left the room successfully
    this.TRTC.on(TRTC_EVENT.LOCAL_LEAVE, (event) => {
      console.log('* room LOCAL_LEAVE', event)
    })
    // A remote user left the room
    this.TRTC.on(TRTC_EVENT.REMOTE_USER_LEAVE, (event) => {
      const { playerList } = event.data
      this.setData({
        playerList: playerList
      })
      console.log('* room REMOTE_USER_LEAVE', event)
    })
    // A remote user started pushing video
    this.TRTC.on(TRTC_EVENT.REMOTE_VIDEO_ADD, (event) => {
      console.log('* room REMOTE_VIDEO_ADD', event)
      const { player } = event.data
      // Start playing the remote video stream; it is not played by default
      this.setPlayerAttributesHandler(player, { muteVideo: false })
    })
    // A remote user stopped pushing video
    this.TRTC.on(TRTC_EVENT.REMOTE_VIDEO_REMOVE, (event) => {
      console.log('* room REMOTE_VIDEO_REMOVE', event)
      const { player } = event.data
      this.setPlayerAttributesHandler(player, { muteVideo: true })
    })
    // A remote user started pushing audio
    this.TRTC.on(TRTC_EVENT.REMOTE_AUDIO_ADD, (event) => {
      console.log('* room REMOTE_AUDIO_ADD', event)
      const { player } = event.data
      this.setPlayerAttributesHandler(player, { muteAudio: false })
    })
    // A remote user stopped pushing audio
    this.TRTC.on(TRTC_EVENT.REMOTE_AUDIO_REMOVE, (event) => {
      console.log('* room REMOTE_AUDIO_REMOVE', event)
      const { player } = event.data
      this.setPlayerAttributesHandler(player, { muteAudio: true })
    })
  },
  // Hang up and exit the room
  _hangUp() {
    const result = this.TRTC.exitRoom()
    this.setData({
      pusher: result.pusher,
      playerList: result.playerList,
    })
    wx.navigateBack({ delta: 1 })
  },
  // Toggle the beauty filter
  _setPusherBeautyHandle() {
    const beautyLevel = this.data.pusher.beautyLevel === 0 ? 9 : 0
    this.setPusherAttributesHandler({ beautyLevel })
  },
  // Publish / unpublish audio
  _pusherAudioHandler() {
    if (this.data.pusher.enableMic) {
      this.setPusherAttributesHandler({ enableMic: false })
    } else {
      this.setPusherAttributesHandler({ enableMic: true })
    }
  },
  _pusherSwitchCamera() {
    const frontCamera = this.data.pusher.frontCamera === 'front' ? 'back' : 'front'
    this.TRTC.getPusherInstance().switchCamera(frontCamera)
  },
  _setPlayerSoundMode() {
    if (this.data.playerList.length === 0) {
      return
    }
    const player = this.TRTC.getPlayerList()
    const soundMode = player[0].soundMode === 'speaker' ? 'ear' : 'speaker'
    this.setPlayerAttributesHandler(player[0], { soundMode })
  },
  _pusherStateChangeHandler(event) {
    this.TRTC.pusherEventHandler(event)
  },
  _pusherNetStatusHandler(event) {
    this.TRTC.pusherNetStatusHandler(event)
  },
  _pusherErrorHandler(event) {
    this.TRTC.pusherErrorHandler(event)
  },
  _pusherBGMStartHandler(event) {
    this.TRTC.pusherBGMStartHandler(event)
  },
  _pusherBGMProgressHandler(event) {
    this.TRTC.pusherBGMProgressHandler(event)
  },
  _pusherBGMCompleteHandler(event) {
    this.TRTC.pusherBGMCompleteHandler(event)
  },
  _pusherAudioVolumeNotify(event) {
    this.TRTC.pusherAudioVolumeNotify(event)
  },
  _playerStateChange(event) {
    this.TRTC.playerEventHandler(event)
  },
  _playerFullscreenChange(event) {
    this.TRTC.playerFullscreenChange(event)
  },
  _playerNetStatus(event) {
    this.TRTC.playerNetStatus(event)
  },
  _playerAudioVolumeNotify(event) {
    this.TRTC.playerAudioVolumeNotify(event)
  },
})

Generate user signature

To go from the Mini Program's entry page to the room page, the following parameters need to be passed:

  • sdkAppID: the SDKAppID assigned after the real-time audio and video service is activated and the application is created
  • roomID: the room number
  • userID: the user ID
  • userSig: the identity signature

UserSig is a security signature designed by Tencent Cloud to prevent malicious attackers from stealing your right to use the cloud service. In the initialization or login function of the corresponding SDK, you must provide three key pieces of information: SDKAppID, UserID and UserSig. SDKAppID identifies your application, UserID identifies your user, and UserSig is a security signature computed from the other two using the HMAC-SHA256 algorithm. As long as an attacker cannot forge a UserSig, they cannot steal your cloud service traffic. The server-side code (in Go) for calculating a UserSig is as follows:
package main

import (
    "bytes"
    "compress/zlib"
    "crypto/hmac"
    "crypto/sha256"
    "encoding/base64"
    "encoding/json"
    "strconv"
    "strings"
    "time"
)

func GenUserSig(sdkappid int, key string, userid string, expire int) (string, error) {
    currTime := time.Now().Unix()
    sigDoc := make(map[string]interface{})
    sigDoc["TLS.ver"] = "2.0"
    sigDoc["TLS.identifier"] = userid
    sigDoc["TLS.sdkappid"] = sdkappid
    sigDoc["TLS.expire"] = expire
    sigDoc["TLS.time"] = currTime
    sigDoc["TLS.sig"] = _hmacsha256(sdkappid, key, userid, currTime, expire)
    data, _ := json.Marshal(sigDoc)

    var b bytes.Buffer
    w := zlib.NewWriter(&b)
    w.Write(data)
    w.Close()
    return base64urlEncode(b.Bytes()), nil
}

func base64urlEncode(data []byte) string {
    str := base64.StdEncoding.EncodeToString(data)
    str = strings.Replace(str, "+", "*", -1)
    str = strings.Replace(str, "/", "-", -1)
    str = strings.Replace(str, "=", "_", -1)
    return str
}

func _hmacsha256(sdkappid int, key string, identifier string, currTime int64, expire int) string {
    content := "TLS.identifier:" + identifier + "\n"
    content += "TLS.sdkappid:" + strconv.Itoa(sdkappid) + "\n"
    content += "TLS.time:" + strconv.FormatInt(currTime, 10) + "\n"
    content += "TLS.expire:" + strconv.Itoa(expire) + "\n"
    h := hmac.New(sha256.New, []byte(key))
    h.Write([]byte(content))
    return base64.StdEncoding.EncodeToString(h.Sum(nil))
}
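On the client side, the entry (home) page then passes these parameters to the room page through the page URL, and room.js reads them from options in onLoad. The following is a minimal sketch; the page path and the sdkAppID/userSig values are placeholder assumptions, and wx.navigateTo is only available inside the Mini Program runtime:

```javascript
// Build the query string that the room page reads from `options`.
// sdkAppID and userSig are placeholders; a real userSig must be
// computed by your server with GenUserSig and fetched by the client.
const params = {
  sdkAppID: '1400000000',
  userID: 'user_' + (Date.now() % 100000), // simple pseudo-random user ID
  userSig: 'eJyrVgr...',                   // truncated placeholder
  roomID: '2333',
}
const query = Object.keys(params)
  .map((k) => k + '=' + encodeURIComponent(params[k]))
  .join('&')
const url = '../room/room?' + query

// In the Mini Program entry page you would then navigate with:
// wx.navigateTo({ url })
```

Because the same roomID is handed to both parties, they land in the same TRTC room, while each user keeps a distinct userID.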

Origin blog.csdn.net/gz_hm/article/details/127081790