What's new in HTML5: WebRTC (interactive audio, video, and data)

1 Overview

WebRTC stands for Web Real-Time Communication. It is mainly used to let browsers exchange video, audio, and data in real time.

WebRTC consists of three APIs:

  • MediaStream (also known as getUserMedia)
  • RTCPeerConnection
  • RTCDataChannel

getUserMedia is used to capture video and audio; the other two APIs handle data exchange between browsers.

2 getUserMedia

2.1 Introduction

First, check whether the browser supports the getUserMedia method.

navigator.getUserMedia || 
    (navigator.getUserMedia = navigator.mozGetUserMedia ||  navigator.webkitGetUserMedia || navigator.msGetUserMedia);

if (navigator.getUserMedia) {
    //do something
} else {
    console.log('your browser not support getUserMedia');
}

Chrome 21, Opera 18, and Firefox 17 support this method; IE currently does not. The msGetUserMedia in the code above is only there for forward compatibility.

The getUserMedia method takes three parameters.

getUserMedia(streams, success, error);

Their meanings are as follows:

  • streams: an object specifying which media devices to request
  • success: callback invoked when the media devices are acquired successfully
  • error: callback invoked when acquiring the media devices fails

Usage is as follows:

navigator.getUserMedia({
    video: true,
    audio: true
}, onSuccess, onError);

The code above requests real-time streams from the camera and the microphone.

If a page calls getUserMedia, the browser asks the user whether to grant access. If the user declines, the onError callback is invoked.

When an error occurs, the error callback receives an Error object whose code property takes one of the following values:

  • PERMISSION_DENIED: the user refused to provide access.
  • NOT_SUPPORTED_ERROR: the browser does not support the specified media type.
  • MANDATORY_UNSATISFIED_ERROR: no media stream could be obtained for the specified media type.
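These codes can be dispatched on inside the error callback. A minimal sketch, assuming error.code carries one of the identifiers listed above; the describeGetUserMediaError helper and its message strings are this sketch's own, not part of the spec:

```javascript
// hypothetical helper mapping the error codes above to readable messages
function describeGetUserMediaError(error) {
    switch (error.code) {
        case 'PERMISSION_DENIED':
            return 'The user refused to provide access.';
        case 'NOT_SUPPORTED_ERROR':
            return 'The browser does not support the specified media type.';
        case 'MANDATORY_UNSATISFIED_ERROR':
            return 'No media stream could be obtained for the specified media type.';
        default:
            return 'Unknown getUserMedia error: ' + error.code;
    }
}

function onError(error) {
    console.log(describeGetUserMediaError(error));
}
```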

2.2 Displaying the camera image

To display the image captured by the user's camera on the page, you need to place a video element there; the image is shown in that element.

<video id="webcam"></video>

Then, get a reference to this element in the code.

function onSuccess(stream) {

    var video = document.getElementById('webcam');

    //more code
}

Finally, bind the data stream to the element's src attribute, and the image captured by the camera will be displayed.

function onSuccess(stream) {

    var video = document.getElementById('webcam');

    if (window.URL) {
        video.src = window.URL.createObjectURL(stream);
    } else {
        video.src = stream;
    }

    video.autoplay = true;
    //or video.play();
}
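Note that creating an object URL from a MediaStream is the older pattern; newer browsers assign the stream directly to the video element's srcObject property. A minimal sketch of a helper that prefers the modern path (the attachStream name is this sketch's own):

```javascript
// prefer srcObject where available, falling back to the legacy object-URL path
function attachStream(video, stream) {
    if ('srcObject' in video) {
        video.srcObject = stream;                        // modern browsers
    } else {
        video.src = window.URL.createObjectURL(stream);  // legacy fallback
    }
    video.play();
}
```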

A typical application of this is letting the user take a photo with the camera.
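Taking a photo amounts to drawing the current video frame onto a canvas. A minimal sketch, assuming a canvas element is available on the page; the takeSnapshot helper is this sketch's own:

```javascript
// draw the current video frame onto a canvas and return it as a data URL
function takeSnapshot(video, canvas) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    return canvas.toDataURL('image/png');
}
```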

2.3 Capturing sound from the microphone

Capturing sound in the browser is relatively involved and requires the Web Audio API.

function onSuccess(stream) {

    // buffers for the recorded samples
    var leftchannel = [];
    var rightchannel = [];
    var recordingLength = 0;

    // create an audio context
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var context = new AudioContext();

    // feed the sound input into the audio context
    var audioInput = context.createMediaStreamSource(stream);

    // set up a volume (gain) node
    var volume = context.createGain();
    audioInput.connect(volume);

    // buffer size used when caching the sound
    var bufferSize = 2048;

    // create the node that buffers the sound; in createJavaScriptNode,
    // the 2nd and 3rd arguments mean input and output are both stereo
    var recorder = context.createJavaScriptNode(bufferSize, 2, 2);

    // the recording callback essentially copies the left and right
    // channel samples into their respective buffers
    recorder.onaudioprocess = function (e) {
        console.log('recording');
        var left = e.inputBuffer.getChannelData(0);
        var right = e.inputBuffer.getChannelData(1);
        // clone the samples
        leftchannel.push(new Float32Array(left));
        rightchannel.push(new Float32Array(right));
        recordingLength += bufferSize;
    };

    // connect the volume node to the buffering node; in other words,
    // the volume node is the link between input and output
    volume.connect(recorder);

    // connect the buffering node to the output destination; that can be
    // the speakers, or an audio file
    recorder.connect(context.destination);

}
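To turn the recording into a playable file (for example a .WAV, as in reference [2]), the per-callback chunks of each channel first have to be flattened into one long sample array. A minimal sketch; the mergeBuffers helper is this sketch's own:

```javascript
// concatenate the Float32Array chunks of one channel into a single array
function mergeBuffers(channelBuffers, recordingLength) {
    var result = new Float32Array(recordingLength);
    var offset = 0;
    for (var i = 0; i < channelBuffers.length; i++) {
        result.set(channelBuffers[i], offset);
        offset += channelBuffers[i].length;
    }
    return result;
}
```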

3 Real-time data exchange

The other two WebRTC APIs: RTCPeerConnection handles point-to-point connections between browsers, and RTCDataChannel handles point-to-point data transfer.

RTCPeerConnection carries a browser prefix: it is webkitRTCPeerConnection in Chrome and mozRTCPeerConnection in Firefox. Google maintains a library, adapter.js, to abstract away the differences between browsers.
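The prefix fallback itself can be sketched as follows (adapter.js does this, and much more, for you). The getPeerConnection wrapper is this sketch's own; it takes the global object as a parameter:

```javascript
// pick whichever (possibly prefixed) RTCPeerConnection constructor exists
function getPeerConnection(global) {
    return global.RTCPeerConnection ||
           global.webkitRTCPeerConnection ||
           global.mozRTCPeerConnection ||
           null;
}

// in the browser: var PeerConnection = getPeerConnection(window);
```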

var dataChannelOptions = {
  ordered: false, // do not guarantee order
  maxRetransmitTime: 3000, // in milliseconds
};

var peerConnection = new RTCPeerConnection();

// Establish your peer connection using your signaling channel here
var dataChannel =
  peerConnection.createDataChannel("myLabel", dataChannelOptions);

dataChannel.onerror = function (error) {
  console.log("Data Channel Error:", error);
};

dataChannel.onmessage = function (event) {
  console.log("Got Data Channel Message:", event.data);
};

dataChannel.onopen = function () {
  dataChannel.send("Hello World!");
};

dataChannel.onclose = function () {
  console.log("The Data Channel is Closed");
};

4 Reference links

[1] Andi Smith, Get Started with WebRTC

[2] Thibault Imbert, From microphone to .WAV with: getUserMedia and Web Audio

[3] Ian Devlin, Using the getUserMedia API with the HTML5 video and canvas elements

[4] Eric Bidelman, Capturing Audio & Video in HTML5

[5] Sam Dutton, Getting Started with WebRTC

[6] Dan Ristic, WebRTC data channels

[7] Ruanyf,  WebRTC

Reposted from http://www.4u4v.net/html5-xin-te-xing-zhi-webrtc-yin-shi-pin-shu-ju-jiao-hu.html
