The Principle of Video Recording in WebRTC

1. Introduction to the recording principle

2. The recording API provided by WebRTC

The browser provides a MediaRecorder class which, as the name suggests, is used to record media data.

This class is indeed what implements recording of media streams. It has a number of methods and events, which we will look at one by one, starting with the basics: its constructor.

var mediaRecorder = new MediaRecorder(stream, [options]);

The constructor takes two parameters: the first, stream, is the media stream we obtain through getUserMedia; the second, options, is an optional object that configures constraints on the recording.


MediaRecorder parameters:

  • stream : the media stream; it can be obtained from getUserMedia, among other sources.
  • options : constraint options, including mimeType, audioBitsPerSecond, videoBitsPerSecond, and bitsPerSecond.
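For illustration, the two arguments can be assembled as in the sketch below. `buildRecorderOptions` is a hypothetical helper, not part of the API; only the option field names and the `MediaRecorder` constructor itself come from the standard:

```javascript
// Hypothetical helper that assembles the [options] argument.
// The field names (mimeType, audioBitsPerSecond, videoBitsPerSecond)
// are real MediaRecorder options; the helper exists only for illustration.
function buildRecorderOptions(mimeType, audioKbps, videoKbps) {
  return {
    mimeType: mimeType,                   // e.g. 'video/webm;codecs=vp8'
    audioBitsPerSecond: audioKbps * 1000, // audio bit rate in bits/s
    videoBitsPerSecond: videoKbps * 1000  // video bit rate in bits/s
  };
}

var options = buildRecorderOptions('video/webm;codecs=vp8', 128, 2500);
// In the browser, with a stream obtained from getUserMedia:
// var mediaRecorder = new MediaRecorder(stream, options);
```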

Next, the specific meaning of each restriction option:

  • mimeType (media type) : possible values include (1) video/webm (2) audio/webm (3) video/webm;codecs=vp8 (4) video/webm;codecs=h264 (5) audio/webm;codecs=opus. mimeType specifies whether audio or video is recorded, and in which container and codec; in practice the most broadly supported combinations are video/webm with VP8 video and audio/webm with Opus audio.
  • audioBitsPerSecond (audio bit rate) : the audio bit rate in bits per second; depending on the codec, common values are 64 kbps or 128 kbps.
  • videoBitsPerSecond (video bit rate) : the video bit rate in bits per second. Stored video files are comparatively large: at least around 1 Mbps, and for a resolution such as 720p often more than 2 Mbps. A higher bit rate gives better quality; a lower one reduces clarity.
  • bitsPerSecond (overall bit rate) : a single bit rate applied to the recording as a whole, as an alternative to specifying audio and video bit rates separately.
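To build intuition for these numbers: the size of a recorded file is roughly bit rate / 8 × duration. A quick sketch (plain arithmetic, not a MediaRecorder API):

```javascript
// Rough size estimate for a recording: bit rate (bits/s) times
// duration (s), divided by 8 to convert bits to bytes.
function estimateSizeBytes(bitsPerSecond, seconds) {
  return (bitsPerSecond / 8) * seconds;
}

// One minute of 720p video at 2.5 Mbps:
var bytes = estimateSizeBytes(2500000, 60); // 18,750,000 bytes, about 17.9 MiB
```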

In addition to the constructor, let's look at some commonly used APIs:

  • start(timeslice) — start recording. timeslice (in milliseconds) is optional: if omitted, all data accumulates in one large buffer; if set, data is delivered slice by slice, e.g. one chunk for the first 10 seconds, another chunk for the next 10 seconds.
  • stop() — stop recording. A final dataavailable event fires, carrying the remaining Blob data.
  • pause() — pause recording.
  • resume() — resume recording.
  • isTypeSupported(mimeType) — a static method that checks whether a given recording format, such as video/webm, video/mp4, or an audio format, is supported.
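A common pattern with isTypeSupported is to probe a list of candidate mimeTypes and use the first supported one. In this sketch the probe function is injected as a parameter so the logic can run outside the browser; in a real page you would pass MediaRecorder.isTypeSupported:

```javascript
// Return the first mimeType that the given predicate accepts, or null.
// isSupported is injected so the function is testable outside the browser;
// in a page: pickMimeType(candidates, t => MediaRecorder.isTypeSupported(t))
function pickMimeType(candidates, isSupported) {
  for (var i = 0; i < candidates.length; i++) {
    if (isSupported(candidates[i])) return candidates[i];
  }
  return null; // none of the formats can be recorded
}

var candidates = [
  'video/webm;codecs=h264',
  'video/webm;codecs=vp8',
  'video/webm'
];
```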

In addition, MediaRecorder provides several event callbacks:

  • ondataavailable : fired when recorded data is available, so we can listen to it and store the data in a buffer as it arrives. The event carries a data property containing the actual recorded chunk. It fires periodically, once per timeslice, or only once with the entire recording if no timeslice was specified.
  • onerror : fired when an error occurs; recording stops automatically.
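The usual way to consume ondataavailable is to append every non-empty chunk to an array. makeChunkCollector below is an illustrative helper, not part of the API; the simulated events at the end stand in for what the browser would deliver, where data would be a Blob:

```javascript
// Collect recorded chunks from dataavailable events.
// In a page you would assign collector.handler to mediaRecorder.ondataavailable.
function makeChunkCollector() {
  var chunks = [];
  return {
    chunks: chunks,
    handler: function (e) {
      // the browser may fire events with empty data; keep only real chunks
      if (e && e.data && e.data.size > 0) chunks.push(e.data);
    }
  };
}

var collector = makeChunkCollector();
// In the browser: mediaRecorder.ondataavailable = collector.handler;
// Simulated events (in the browser, e.data is a Blob):
collector.handler({ data: { size: 0 } });    // empty slice, ignored
collector.handler({ data: { size: 4096 } }); // real chunk, kept
```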

3. Recording video in the browser with WebRTC

The above briefly introduced the WebRTC APIs related to recording video. They look quite simple: much like the media players shipped with iOS and Android, recording comes down to a simple start and stop. Next, let's put them into practice.

The html is not much different from the previous chapters. Before coding, let's outline the whole flow:

  • First we need two video tags: the first shows the live capture, and the recorded video is played back through the second, so a second video tag must be added.
  • We also need three buttons. The first, Record, records the captured audio and video data; the second, Play, plays back what we recorded; the third, Download, saves the recorded data directly to a local file.

Without further ado, straight to the source code. The content of index.html is as follows:

<html>
  <head>
    <title>Recording Video in the Browser with WebRTC -- 孔雨露 20200606</title>
		<style>
			.none {
				-webkit-filter: none;	
			}
 
			.blur {
        /* blur effect */
				-webkit-filter: blur(3px);	
			}
 
			.grayscale {
        /* grayscale effect */
				-webkit-filter: grayscale(1); 	
			}
 
			.invert {
        /* invert colors */
				-webkit-filter: invert(1);	
			}
 
			.sepia {
        /* sepia effect */
				-webkit-filter: sepia(1);
			}
 
		</style>
  </head>
  <body>
		<div>
			<label>audio Source:</label>
			<select id="audioSource"></select>
		</div>
 
		<div>
			<label>audio Output:</label>
			<select id="audioOutput"></select>
		</div>
 
		<div>
			<label>video Source:</label>
			<select id="videoSource"></select>
    </div>
    <!-- effect (filter) selector -->
    <div>
			<label>Filter:</label>
			<select id="filter">
				<option value="none">None</option>
				<option value="blur">blur</option>
				<option value="grayscale">Grayscale</option>
				<option value="invert">Invert</option>
				<option value="sepia">sepia</option>
			</select>
		</div>
    <!-- 
      Create a video tag to display the captured audio/video data.
      autoplay  means playback starts as soon as we get the video source
      playsinline  means play inside the page instead of launching a third-party player
     -->
     <!-- use an audio tag to capture audio only -->
     <!-- 
       controls  shows the play/pause controls; without it the sound plays but no player UI is shown
       autoplay  autoplay by default
      -->
    <!-- <audio autoplay controls id='audioplayer'></audio> -->
		<table>
			<tr>
				<td><video autoplay playsinline id="player"></video></td>
				<!-- playback tag for the recording -->
				<td><video playsinline id="recplayer"></video></td>
				<td><div id='constraints' class='output'></div></td>
      </tr>
			<tr>
				<td><button id="record">Start Record</button></td>
				<td><button id="recplay" disabled>Play</button></td>
				<td><button id="download" disabled>Download</button></td>
			</tr>
		</table>
 
    <!-- button to take a video-frame snapshot -->
		<div>
			<button id="snapshot">Take snapshot</button>
    </div>
    <!-- the captured video frame is displayed in this canvas -->
		<div>
			<canvas id="picture"></canvas>
		</div>
    <!-- include the adapter.js library for cross-browser compatibility -->
    <script src="https://webrtc.github.io/adapter/adapter-latest.js"></script>
    <script src="./js/client.js"></script>
  </body>
</html>

The content of client.js referenced by the above html is as follows:

'use strict'
 
var audioSource = document.querySelector('select#audioSource');
var audioOutput = document.querySelector('select#audioOutput');
var videoSource = document.querySelector('select#videoSource');
// get the video tag
var videoplay = document.querySelector('video#player');
// get the audio tag
var audioplay = document.querySelector('audio#audioplayer');
 
//div
var divConstraints = document.querySelector('div#constraints');
 
// array of recorded binary chunks
var buffer;
var mediaRecorder;
 
//record: record / play / download buttons
var recvideo = document.querySelector('video#recplayer');
var btnRecord = document.querySelector('button#record');
var btnPlay = document.querySelector('button#recplay');
var btnDownload = document.querySelector('button#download');
 
//filter: effect selector
var filtersSelect = document.querySelector('select#filter');
 
//picture: elements for taking video-frame snapshots
var snapshot = document.querySelector('button#snapshot');
var picture = document.querySelector('canvas#picture');
picture.width = 640;
picture.height = 480;
 
// deviceInfos is an array of device information
function gotDevices(deviceInfos){
  // iterate over the devices; the callback parameter deviceinfo holds each device's info
	deviceInfos.forEach(function(deviceinfo){
    // create an option element for each device
		var option = document.createElement('option');
		option.text = deviceinfo.label;
		option.value = deviceinfo.deviceId;
	
		if(deviceinfo.kind === 'audioinput'){ // audio input
			audioSource.appendChild(option);
		}else if(deviceinfo.kind === 'audiooutput'){ // audio output
			audioOutput.appendChild(option);
		}else if(deviceinfo.kind === 'videoinput'){ // video input
			videoSource.appendChild(option);
		}
	})
}
 
// What to do with the stream: gotMediaStream receives one parameter, the stream,
// which contains an audio track and a video track, because our constraints request both.
// We assign this stream directly to the video tag in the HTML.
// Receiving the stream also means the user has agreed to let us access the A/V devices.
function gotMediaStream(stream){  
  	// audioplay.srcObject = stream;
  videoplay.srcObject = stream; // use the stream as data source, so the video tag plays the captured video and audio
  // get the video tracks from the stream; this returns all of them, and here we take the first
  var videoTrack = stream.getVideoTracks()[0];
  // once we have the track we can call its methods
  var videoConstraints = videoTrack.getSettings(); // this returns all of the video's current settings
  // serialize the object to JSON:
  // first argument is videoConstraints, second is null, third indents by 2 spaces
  divConstraints.textContent = JSON.stringify(videoConstraints, null, 2);
  
  window.stream = stream;
 
  // once we have the A/V data, return a Promise for the device list
  return navigator.mediaDevices.enumerateDevices();
}
 
function handleError(err){
	console.log('getUserMedia error:', err);
}
function start() {
// check whether the browser supports getUserMedia
if(!navigator.mediaDevices ||
  !navigator.mediaDevices.getUserMedia){
  console.log('getUserMedia is not supported!');
}else{
  // get the deviceId
  var deviceId = videoSource.value; 
  // These are the constraints; normally we only need to choose whether to use video and audio.
  // For video we can add the restrictions described earlier.
  var constraints = { // capture both video and audio
    video : {
      width: 640,	// width
      height: 480,  // height
      frameRate:15, // frame rate
      facingMode: 'environment', // use the rear camera
      deviceId : deviceId ? deviceId : undefined // use deviceId if it is set, otherwise undefined
    }, 
    audio : true // enable audio capture
  }
  //  capture data from the chosen device
  navigator.mediaDevices.getUserMedia(constraints)
    .then(gotMediaStream)  // chain the Promises; the stream was obtained successfully
    .then(gotDevices)
    .catch(handleError);
}
}
 
start();
 
// When a different camera is selected an event fires;
// after calling start we rebuild the constraints
videoSource.onchange = start;
 
// effect selection handler
filtersSelect.onchange = function(){
	videoplay.className = filtersSelect.value;
}
 
// click the button to grab a video frame
snapshot.onclick = function() {
  picture.className = filtersSelect.value;
  // Get the canvas 2d context (the image is two-dimensional).
  // drawImage paints the frame: the first argument is the video element, here videoplay;
  // the second and third are the start point (0, 0);
  // the fourth and fifth are the width and height of the image.
	picture.getContext('2d').drawImage(videoplay, 0, 0, picture.width, picture.height);
}
// 
function handleDataAvailable(e){  // 5. Data event handler: once recording starts, data keeps arriving through this function
	if(e && e.data && e.data.size > 0){
     buffer.push(e.data);  // append e.data to the chunk array
    //  this buffer is created when recording starts
	}
}
 
// 2. Start-recording method
function startRecord(){
	buffer = []; // reset the chunk array
	var options = {
		mimeType: 'video/webm;codecs=vp8' // record video with the VP8 codec
	}
	if(!MediaRecorder.isTypeSupported(options.mimeType)){ // check whether the browser supports this mimeType
		console.error(`${options.mimeType} is not supported!`);
		return;	
	}
  try{ // guard against recorder creation errors
    // mediaRecorder is defined as a global above so that stopRecord can use it later
		mediaRecorder = new MediaRecorder(window.stream, options); // create the recorder; window.stream was saved in gotMediaStream
	}catch(e){
		console.error('Failed to create MediaRecorder:', e);
		return;	
  }
  // 4. Hook the data event: the handler receives each recorded chunk so we can store it
  mediaRecorder.ondataavailable = handleDataAvailable; 
	mediaRecorder.start(10); // start() takes a timeslice in milliseconds; a chunk is delivered every slice
}
// 3. Stop recording
function stopRecord(){
  // 6. stop the recorder
	mediaRecorder.stop();
}
 
// 1. Record video 
btnRecord.onclick = ()=>{
	if(btnRecord.textContent === 'Start Record'){ // start recording
		startRecord();	// call startRecord to begin recording
		btnRecord.textContent = 'Stop Record'; // update the button label
		btnPlay.disabled = true; // disable the play button while recording
		btnDownload.disabled = true; // disable the download button while recording
	}else{ // stop recording
		stopRecord(); // stop the recorder
		btnRecord.textContent = 'Start Record';
		btnPlay.disabled = false; // playback is allowed once recording stops
		btnDownload.disabled = false; // download is allowed once recording stops
 
	}
}
// play back the recorded video
btnPlay.onclick = ()=> {
	var blob = new Blob(buffer, {type: 'video/webm'});
	recvideo.srcObject = null; // clear the live source first so that src takes effect
	recvideo.src = window.URL.createObjectURL(blob);
	recvideo.controls = true;
	recvideo.play();
}
 
// download the recorded video
btnDownload.onclick = ()=> {
	var blob = new Blob(buffer, {type: 'video/webm'});
	var url = window.URL.createObjectURL(blob);
	var a = document.createElement('a');
 
	a.href = url;
	a.style.display = 'none';
	a.download = 'kyl111.webm';
	document.body.appendChild(a); // some browsers require the anchor to be in the DOM before click() works
	a.click();
}

JS Knowledge Supplement

When recording video we need a way to store data in JavaScript. Here are the commonly used types: the most common is Blob, from which a URL or a file is eventually generated; underneath, the data is usually an ArrayBuffer or a typed ArrayBufferView.

  • Blob : a Blob is a very efficient storage area; buffers of other types can be placed into it, and its advantage is that writing an entire buffer to a file becomes very convenient, which is why data is usually put into a Blob before being written out. Its underlying representation is an untyped byte array, with many convenience methods wrapped around it for all kinds of operations.
  • ArrayBuffer : Blob is built on top of ArrayBuffer, which holds the raw binary data. Through the Blob layer we can operate on an ArrayBuffer conveniently; in other words, Blob is a wrapper that makes working with ArrayBuffer easier.
  • ArrayBufferView : typed views over a buffer, such as integer, double, or character views; they can be passed as Blob constructor arguments, and the Blob handles them internally.
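The relationship between the three types can be seen in a few lines. Blob is available both in browsers and in Node.js 18+, so this sketch runs in either:

```javascript
// An ArrayBuffer is raw, untyped byte storage:
var raw = new ArrayBuffer(4);
// An ArrayBufferView (here Uint8Array) is a typed view over those bytes:
var view = new Uint8Array(raw);
view.set([0x1a, 0x45, 0xdf, 0xa3]); // the EBML magic bytes used by WebM

// A Blob wraps views (and strings) into a single storable, typed unit:
var blob = new Blob([view, 'tail'], { type: 'video/webm' });
console.log(blob.size); // 4 bytes from the view + 4 from 'tail' = 8
console.log(blob.type); // 'video/webm'
```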


Origin blog.csdn.net/m0_60259116/article/details/131543177