[Repost] Getting Started with WebRTC

Reposted from: https://www.cnblogs.com/cnhk19/p/9473519.html

 

What is WebRTC?

As we all know, browsers cannot set up a direct communication channel with each other; they have to go through a relay server. For example, suppose there are two clients, A and B, that want to establish a channel between them. First A connects to the server and B connects to the server. When A sends a message to B, A first sends it to the server, which relays it to B, and vice versa. A single message between A and B therefore passes through two links, and the efficiency of the communication is limited by the bandwidth of both. Such a path is also poorly suited to transmitting media streams. How to establish point-to-point transmission between browsers had long troubled developers, and WebRTC came into being to solve it.

WebRTC is an open source project that aims to let browsers provide a simple JavaScript interface for real-time communication (RTC). Put simply, it lets the browser expose JS APIs for instant communication. Unlike WebSocket, which opens a channel between a browser and a server, WebRTC establishes, through a series of signaling steps, a channel between one browser and another browser (peer to peer). This channel can carry any data without going through a server. Combined with MediaStream, which lets the browser access the device's camera and microphone, WebRTC can also carry audio and video between browsers.

WebRTC is already in our browser

Naturally, the major browser vendors have not ignored such a useful feature. WebRTC can already be used in newer versions of Chrome, Opera, and Firefox; the well-known compatibility site caniuse gives a detailed breakdown of browser support.

Also, according to earlier news from 36Kr, Google added support for WebRTC and Web Audio in Chrome 29 for Android, and Opera for Android has started supporting WebRTC as well, allowing users to make voice and video calls without any plug-in. In other words, Android has begun to support WebRTC too.

Three interfaces

WebRTC implements three APIs:
* MediaStream: obtains synchronized video and audio streams from devices such as the camera and microphone
* RTCPeerConnection: the component WebRTC uses to build a stable and efficient point-to-point media stream between browsers
* RTCDataChannel: establishes a high-throughput, low-latency channel between browsers (point to point) for transmitting arbitrary data

The three APIs are introduced briefly below.

MediaStream(getUserMedia)

The MediaStream API gives WebRTC the ability to acquire video and audio stream data from devices such as the camera and microphone.

W3C Standard

The API is specified in the W3C Media Capture and Streams standard.

How to call

It is invoked by calling navigator.getUserMedia(), which takes three parameters:
1. a constraints object (discussed separately below)
2. a success callback; if the call succeeds, a stream object is passed to it
3. a failure callback; if the call fails, an error object is passed to it

Browser Compatibility

Because browser implementations differ, vendors often ship the method behind a prefix before the standard version lands, so a cross-browser version looks like this:

var getUserMedia = (navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia || navigator.msGetUserMedia);

A super simple example

Here is a very simple example that shows the effect of getUserMedia:

<!doctype html>
<html lang="zh-CN">
<head>
    <meta charset="UTF-8">
    <title>GetUserMedia Example</title>
</head>
<body>
    <video id="video" autoplay></video>
    <script type="text/javascript">
        var getUserMedia = (navigator.getUserMedia ||
            navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia ||
            navigator.msGetUserMedia);
        getUserMedia.call(navigator, { video: true, audio: true }, function(localMediaStream) {
            var video = document.getElementById('video');
            video.src = window.URL.createObjectURL(localMediaStream);
            video.onloadedmetadata = function(e) {
                console.log("Label: " + localMediaStream.label);
                console.log("AudioTracks", localMediaStream.getAudioTracks());
                console.log("VideoTracks", localMediaStream.getVideoTracks());
            };
        }, function(e) {
            console.log('Reeeejected!', e);
        });
    </script>
</body>
</html>

Save this content in an HTML file on a server and open it with a newer version of Opera, Firefox, or Chrome. The browser will pop up a prompt asking whether to allow access to the camera and microphone; once you allow it, the picture captured by the camera appears in the browser.

Note that the HTML file must be served from a server, otherwise you will get a NavigatorUserMediaError showing PermissionDeniedError. The easiest way is to cd into the directory containing the HTML file and run python -m SimpleHTTPServer (if Python is installed), then open http://localhost:8000/{filename}.html in the browser.

This example uses getUserMedia. After a stream is obtained it needs to be output, which is generally done by binding it to a video tag: window.URL.createObjectURL(localMediaStream) creates a Blob URL that can be used as the video tag's src. Note the autoplay attribute on the video tag; without it only a single frame is captured.

Once the stream has been created, you can read its unique identifier from the label attribute, and get arrays of its tracks through the getAudioTracks() and getVideoTracks() methods (if a given kind of stream was not enabled, the corresponding array is empty).

The constraints object (Constraints)

The constraints object can be passed to getUserMedia() and to RTCPeerConnection's addStream method. It specifies which streams WebRTC should accept, and can define the following properties (a sample object is sketched after the reference below):
* video: whether to accept a video stream
* audio: whether to accept an audio stream
* MinWidth: minimum width of the video stream
* MaxWidth: maximum width of the video stream
* MinHeight: minimum height of the video stream
* MaxHeight: maximum height of the video stream
* MinAspectRatio: minimum aspect ratio of the video stream
* MaxAspectRatio: maximum aspect ratio of the video stream
* MinFramerate: minimum frame rate of the video stream
* MaxFramerate: maximum frame rate of the video stream

For details, see Resolution Constraints in Web Real Time Communications, draft-alvestrand-constraints-resolution-00.
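
As a rough sketch, a constraints object using the mandatory/optional syntax from that draft might look like the following (property names and support vary by browser, and the modern spec expresses these as width/height/frameRate ranges instead):

// Sketch of a constraints object in the old mandatory/optional draft syntax
var constraints = {
    audio: true,
    video: {
        mandatory: {
            minWidth: 320,
            minHeight: 180,
            maxWidth: 1280,
            maxHeight: 720
        },
        optional: [
            { minFrameRate: 30 }
        ]
    }
};
// Passed as the first argument:
// getUserMedia.call(navigator, constraints, successCallback, errorCallback);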

RTCPeerConnection

WebRTC uses RTCPeerConnection to pass streaming data between browsers. This stream path is point to point and does not need to be relayed by a server. But that does not mean we can do away with servers altogether: we still need one to transmit the signaling that sets up the channel. WebRTC does not define the protocol used to establish the channel; signaling is not part of the RTCPeerConnection API.

Signaling

Since no specific signaling protocol is defined, we can choose any transport (AJAX, WebSocket) and any protocol (SIP, XMPP) to exchange the signaling needed to establish the channel. For example, in the Demo I wrote, signaling is transmitted over WebSocket using Node's ws module.

We need signaling to exchange three kinds of information:
* Session information: used to initialize communication and report errors
* Network configuration: such as IP addresses and ports
* Media capabilities: which codecs and resolutions the sender's and receiver's browsers can handle

This exchange of information must be completed before the point-to-point streams can start flowing.
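
As an illustration of the signaling step only (WebRTC itself does not define this part), here is a minimal sketch of a relay server using Node.js and the ws module mentioned above. The blind forward-to-everyone-else logic is an assumption that only makes sense for a simple two-client demo; the event names match the client code shown later:

// Minimal signaling relay sketch: every message received from one client
// is forwarded unchanged to every other connected client.
var WebSocketServer = require('ws').Server;
var wss = new WebSocketServer({ port: 3000 });

wss.on('connection', function(socket) {
    socket.on('message', function(message) {
        // message is a JSON string such as
        // {"event":"__offer","data":{"sdp":...}} or
        // {"event":"__ice_candidate","data":{"candidate":...}}
        wss.clients.forEach(function(client) {
            if (client !== socket) {
                client.send(message.toString());
            }
        });
    });
});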

Establishing the channel with the help of a server

To repeat: even though WebRTC provides a point-to-point channel between browsers for data transmission, establishing that channel still involves a server. WebRTC needs the server to support four things:
1. User discovery and communication
2. Signaling
3. NAT/firewall traversal
4. Acting as a relay in case a point-to-point connection cannot be established

NAT / firewall traversal technology

A common problem when establishing a peer-to-peer channel is NAT traversal. NAT traversal techniques are needed when hosts on private TCP/IP networks behind NAT devices want to connect to each other; the problem has long been familiar in the VoIP world. There are many NAT traversal techniques today, but none of them is perfect, because NAT behavior is not standardized. Most of these techniques rely on a public server reachable at an IP address accessible from anywhere. RTCPeerConnection uses the ICE framework to achieve NAT traversal.

ICE (Interactive Connectivity Establishment) is a comprehensive NAT traversal framework that can integrate various traversal techniques such as STUN and TURN (Traversal Using Relays around NAT). ICE first uses STUN to try to establish a UDP-based connection; if that fails, it falls back to TCP (trying HTTP first, then HTTPS); if that still fails, ICE uses a TURN relay server.

We can use Google's public STUN server: stun:stun.l.google.com:19302.
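
As a sketch, an ICE configuration that also lists a TURN relay as a fallback might look like this (the TURN address and credentials are placeholders, not a real service; older implementations used the "url" key shown here, while the current spec uses "urls"):

// ICE configuration sketch: STUN first, TURN as the relay fallback
var iceServer = {
    "iceServers": [
        { "url": "stun:stun.l.google.com:19302" },
        {
            "url": "turn:turn.example.org:3478",   // hypothetical TURN server
            "username": "user",                    // placeholder credentials
            "credential": "secret"
        }
    ]
};
// This object is passed to the RTCPeerConnection constructor, as in the code below.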

Browser Compatibility

The prefix issue comes up again; handle it the same way as before:

var PeerConnection = (window.PeerConnection || window.webkitPeerConnection00 || window.webkitRTCPeerConnection || window.mozRTCPeerConnection);

Creating and Using

// Use Google's STUN server
var iceServer = {
    "iceServers": [{
        "url": "stun:stun.l.google.com:19302"
    }]
};
// Cross-browser getUserMedia
var getUserMedia = (navigator.getUserMedia ||
    navigator.webkitGetUserMedia ||
    navigator.mozGetUserMedia ||
    navigator.msGetUserMedia);
// Cross-browser PeerConnection
var PeerConnection = (window.PeerConnection ||
    window.webkitPeerConnection00 ||
    window.webkitRTCPeerConnection ||
    window.mozRTCPeerConnection);
// WebSocket connection to the backend server
var socket = __createWebSocketChannel();
// Create the PeerConnection instance
var pc = new PeerConnection(iceServer);
// Send ICE candidates to the other client
pc.onicecandidate = function(event){
    socket.send(JSON.stringify({
        "event": "__ice_candidate",
        "data": {
            "candidate": event.candidate
        }
    }));
};
// When a remote media stream arrives, bind it to a video tag for output
pc.onaddstream = function(event){
    someVideoElement.src = URL.createObjectURL(event.stream);
};
// Get the local media stream, bind it to a video tag for output, and send it to the other client
getUserMedia.call(navigator, { "audio": true, "video": true }, function(stream){
    // Functions that send the offer/answer, i.e. the local session description
    var sendOfferFn = function(desc){
        pc.setLocalDescription(desc);
        socket.send(JSON.stringify({
            "event": "__offer",
            "data": {
                "sdp": desc
            }
        }));
    },
    sendAnswerFn = function(desc){
        pc.setLocalDescription(desc);
        socket.send(JSON.stringify({
            "event": "__answer",
            "data": {
                "sdp": desc
            }
        }));
    };
    // Bind the local media stream to a video tag for output
    myselfVideoElement.src = URL.createObjectURL(stream);
    // Add the stream to be sent to the PeerConnection
    pc.addStream(stream);
    // If we are the caller, send an offer; otherwise send an answer
    if(isCaller){
        pc.createOffer(sendOfferFn);
    } else {
        pc.createAnswer(sendAnswerFn);
    }
}, function(error){
    // Handle the error when creating the media stream fails
});
// Handle incoming signaling
socket.onmessage = function(event){
    var json = JSON.parse(event.data);
    // If it is an ICE candidate, add it to the PeerConnection;
    // otherwise set the remote session description to the one passed in
    if( json.event === "__ice_candidate" ){
        pc.addIceCandidate(new RTCIceCandidate(json.data.candidate));
    } else {
        pc.setRemoteDescription(new RTCSessionDescription(json.data.sdp));
    }
};

Examples

Because it involves flexible signaling between clients, a brief inline example is not practical; see the complete Demo at the end of this article.

RTCDataChannel

Since a peer-to-peer channel can carry real-time video and audio streams, why not use the same channel to transmit other data? That is exactly what the RTCDataChannel API is for: it lets us transfer arbitrary data between browsers. DataChannel is built on top of PeerConnection and cannot be used on its own.

Use DataChannel

We can call channel = pc.createDataChannel("someLabel"); to create a Data Channel on a PeerConnection instance and give it a label. A usage sketch follows the lists below.

DataChannel is used almost exactly like a WebSocket and has several events:
* onopen
* onclose
* onmessage
* onerror

It also has several states, which can be read through readyState:
* connecting: the browsers are trying to establish the channel
* open: established successfully; data can be sent with the send method
* closing: the browser is closing the channel
* closed: the channel has been closed

Two methods are exposed:
* close(): closes the channel
* send(): sends data to the other side through the channel
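
A minimal usage sketch, assuming a PeerConnection pc has already been created on each side (the variable names and the "chat" label are illustrative, not part of the API):

// Caller side: create the channel on an existing PeerConnection
var channel = pc.createDataChannel("chat");
channel.onopen = function() {
    // readyState is now "open", so send() can be used
    channel.send("hello over the data channel");
};
channel.onmessage = function(event) {
    console.log("received:", event.data);
};
channel.onclose = function() {
    console.log("channel closed");
};
channel.onerror = function(err) {
    console.log("channel error:", err);
};

// Callee side: the channel created by the other peer arrives through
// the ondatachannel event of its own PeerConnection
pc.ondatachannel = function(event) {
    var remoteChannel = event.channel;
    remoteChannel.onmessage = function(e) {
        console.log("received:", e.data);
    };
};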

The general idea of sending files over a Data Channel

The JavaScript File API already lets us extract files from an input[type='file'] element and convert a file into a Data URL with FileReader. This means we can split the Data URL into multiple chunks and transfer the file through the channel.
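
A sketch of that idea follows; the chunk size and the JSON message format are arbitrary choices for illustration, and reassembly on the receiving side (plus flow control via bufferedAmount for large files) is left out for brevity:

// Read a file chosen in <input type="file">, convert it to a Data URL,
// and push it through the data channel in fixed-size chunks.
var CHUNK_SIZE = 16 * 1024; // 16 KB per message, an arbitrary choice

function sendFile(file, channel) {
    var reader = new FileReader();
    reader.onload = function() {
        var dataURL = reader.result;
        for (var offset = 0; offset < dataURL.length; offset += CHUNK_SIZE) {
            channel.send(JSON.stringify({
                name: file.name,
                total: dataURL.length,
                offset: offset,
                chunk: dataURL.slice(offset, offset + CHUNK_SIZE)
            }));
        }
    };
    reader.readAsDataURL(file);
}

// Usage: sendFile(fileInput.files[0], channel);
// The receiver concatenates the chunks in order; once it has total characters,
// the reassembled Data URL can be used, e.g. as the href of a download link.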

A comprehensive Demo

Demo-SkyRTC is a Demo I wrote. It sets up a video chat room and can broadcast files; one-to-one file transfer is also supported. It is written rather roughly and will be improved over time.

Usage

  1. Download and unzip it, then cd into the directory
  2. Run npm install to install the dependencies (express, ws, node-uuid)
  3. Run node server.js, open localhost:3000, and allow camera access
  4. On another computer, open {server IP}:3000 in a browser (Chrome and Opera; Firefox is not yet compatible) and allow camera and microphone access
  5. Broadcast a file: choose a file in the lower left corner and click the "Send File" button
  6. Broadcast a message: type into the input box in the lower left corner and click send
  7. Errors may occur; keep an eye on the F12 console. A refresh (F5) usually fixes them

Features

Video and audio chat (a camera and microphone must be connected, or at least a camera), file broadcasting (one-to-one transfer is also supported and exposed through the API; broadcasting is implemented on top of the one-to-one transfer, so it works better for several small files, while large files will eat up memory), and broadcasting chat messages.


Origin: www.cnblogs.com/markiki/p/11443230.html