Backend services required to build WebRTC applications - STUN, TURN, and signaling

As an introduction: this article is translated from https://www.html5rocks.com/en/tutorials/webrtc/infrastructure/ . It is an introduction to WebRTC theory that explains WebRTC clearly, and it should make it easier to learn the WebRTC SDK later when porting it to a particular operating system.

WebRTC supports peer-to-peer communication, but it still requires servers so that clients can exchange metadata to coordinate communication (a process called signaling) and can deal with network address translators (NATs) and firewalls. This article shows you how to build a signaling service and how to deal with the quirks of real-world connectivity using STUN and TURN servers. It also explains how WebRTC applications can handle multi-party calls and interact with services such as VoIP and PSTN (that is, telephony). If you are not familiar with the basics of WebRTC, please read my translation of Get started with WebRTC (original English link: Get started with WebRTC) before reading this article.

What is signaling?

Signaling is the process of coordinating communications. In order for a WebRTC application to establish a call, its client needs to exchange the following information:

  • Session control messages used to open or close communication
  • Error messages
  • Media metadata such as codecs, codec settings, bandwidth and media type
  • Key data used to establish a secure connection
  • Network data, such as the IP address and port of the host seen by the outside world

This signaling process requires a convenient way to pass messages back and forth between clients. The WebRTC API does not implement this mechanism. You need to build it yourself. Later in this article, you'll learn how to build a signaling service. But first, you need some context as a prerequisite.

Why doesn't WebRTC define signaling?

To avoid redundancy and maximize compatibility with existing technologies, the WebRTC standard does not specify signaling methods and protocols. The JavaScript Session Establishment Protocol (JSEP) outlines the signaling protocols and methods.

The idea behind WebRTC call setup is to fully specify and control the media plane, but leave the signaling plane as much as possible to the application. The rationale is that different applications may prefer to use a different protocol, such as the existing SIP or Jingle call signaling protocols, or something customized for a specific application, perhaps for a novel use case. In this approach, the key information that needs to be exchanged is the multimedia session description, which specifies the necessary transport and media configuration information required to establish the media plane.

JSEP's architecture also avoids the browser having to save state, that is, to act as a signaling state machine. This would be problematic if, for example, signaling data were lost each time a page was reloaded. Instead, signaling state can be saved on a server.

JSEP requires the exchange of the media metadata mentioned above between peers in the form of an offer and an answer. Offers and answers are communicated in the Session Description Protocol (SDP) format, which looks like this:

v=0
o=- 7614219274584779017 2 IN IP4 127.0.0.1
s=-
t=0 0
a=group:BUNDLE audio video
a=msid-semantic: WMS
m=audio 1 RTP/SAVPF 111 103 104 0 8 107 106 105 13 126
c=IN IP4 0.0.0.0
a=rtcp:1 IN IP4 0.0.0.0
a=ice-ufrag:W2TGCZw2NZHuwlnf
a=ice-pwd:xdQEccP40E+P0L5qTyzDgfmW
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
a=mid:audio
a=rtcp-mux
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:9c1AHz27dZ9xPI91YNfSlI67/EMkjHHIHORiClQe
a=rtpmap:111 opus/48000/2

Ever wonder what all this SDP gobbledygook actually means? Take a look at the Internet Engineering Task Force (IETF) examples.

Keep in mind that WebRTC is designed so that the offer or answer can be tweaked by editing the values in the SDP text before it is set as the local or remote description. For example, the preferredAudioCodec function in appr.tc can be used to set the default codec and bitrate. Parsing and manipulating SDP from JavaScript is a bit of a pain, and there is ongoing discussion about whether future versions of WebRTC should use JSON instead, but sticking with SDP has some advantages of its own.
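To make this concrete, here is a minimal, hedged sketch of that kind of SDP editing. The preferOpus() helper is hypothetical and deliberately simplified (real SDP manipulation needs to handle more edge cases); it only illustrates the general pattern of editing the SDP text before calling setLocalDescription().

// A minimal sketch of editing an offer's SDP before applying it.
// preferOpus() is a hypothetical helper for illustration only.
function preferOpus(sdp) {
  // Find the payload type that rtpmap assigns to Opus.
  const match = sdp.match(/a=rtpmap:(\d+) opus\/48000/);
  if (!match) return sdp; // Opus not offered; leave the SDP untouched.
  const pt = match[1];
  // Move that payload type to the front of the m=audio line so that
  // Opus becomes the preferred audio codec.
  return sdp.replace(/(m=audio \d+ [A-Z\/]+) (.*)/, (full, head, payloads) => {
    const rest = payloads.split(' ').filter((p) => p !== pt);
    return head + ' ' + pt + ' ' + rest.join(' ');
  });
}

async function sendOffer(pc, signaling) {
  const offer = await pc.createOffer();
  // Edit the SDP text, then apply the modified description.
  await pc.setLocalDescription({type: offer.type, sdp: preferOpus(offer.sdp)});
  signaling.send({desc: pc.localDescription});
}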

 

RTCPeerConnection API and signaling: Offer, answer, and candidate

RTCPeerConnection is the API used by WebRTC applications to create connections between peers and to communicate audio and video. To initialize this process, RTCPeerConnection has two tasks:

  • Determine the local media conditions, such as resolution and codec capabilities. This is the metadata used for the offer-and-answer mechanism.
  • Obtain potential network addresses for the application's host, known as candidates.

Once this local data has been determined, it must be exchanged with the remote peer through a signaling mechanism. Imagine Alice trying to call Eve. Here is the complete offer/answer mechanism in all its detail:

  1. Alice creates an RTCPeerConnection object.
  2. Alice creates an offer (an SDP session description) with the RTCPeerConnection createOffer() method.
  3. Alice calls setLocalDescription() with her offer.
  4. Alice stringifies the offer and uses a signaling mechanism to send it to Eve.

  5. Eve calls setRemoteDescription() with Alice's offer, so that her own RTCPeerConnection knows about Alice's setup.
  6. Eve calls createAnswer(), and its success callback is passed a local session description - Eve's answer.
  7. Eve sets her answer as the local description by calling setLocalDescription().
  8. Eve then uses the signaling mechanism to send her stringified answer to Alice.
  9. Alice sets Eve's answer as the remote session description using setRemoteDescription().

Alice and Eve also need to exchange network information. The expression "finding candidates" refers to the process of finding network interfaces and ports using the ICE framework.

  1. Alice creates an RTCPeerConnection object with an onicecandidate handler attached.
  2. The onicecandidate handler is called whenever a network candidate becomes available.
  3. In the handler, Alice sends the stringified candidate data to Eve through their signaling channel.
  4. When Eve receives a candidate message from Alice, she calls addIceCandidate() to add the candidate to the remote peer description.

JSEP supports ICE candidate trickling, which allows the caller to provide candidates to the callee incrementally after the initial offer, and allows the callee to begin acting on the call and setting up a connection without waiting for all candidates to arrive.

Code WebRTC for signaling 

The following code snippet is a W3C code example that outlines the complete signaling process. The code assumes the existence of some signaling mechanism, SignalingChannel. Signaling is discussed in greater detail later.

// handles JSON.stringify/parse
const signaling = new SignalingChannel();
const constraints = {audio: true, video: true};
const configuration = {iceServers: [{urls: 'stuns:stun.example.org'}]};
const pc = new RTCPeerConnection(configuration);

// Send any ice candidates to the other peer.
pc.onicecandidate = ({candidate}) => signaling.send({candidate});

// Let the "negotiationneeded" event trigger offer generation.
pc.onnegotiationneeded = async () => {
  try {
    await pc.setLocalDescription(await pc.createOffer());
    // send the offer to the other peer
    signaling.send({desc: pc.localDescription});
  } catch (err) {
    console.error(err);
  }
};

// After remote track media arrives, show it in remote video element.
pc.ontrack = (event) => {
  // Don't set srcObject again if it is already set.
  if (remoteView.srcObject) return;
  remoteView.srcObject = event.streams[0];
};

// Call start() to initiate.
async function start() {
  try {
    // Get local stream, show it in self-view, and add it to be sent.
    const stream =
      await navigator.mediaDevices.getUserMedia(constraints);
    stream.getTracks().forEach((track) =>
      pc.addTrack(track, stream));
    selfView.srcObject = stream;
  } catch (err) {
    console.error(err);
  }
}

signaling.onmessage = async ({desc, candidate}) => {
  try {
    if (desc) {
      // If you get an offer, you need to reply with an answer.
      if (desc.type === 'offer') {
        await pc.setRemoteDescription(desc);
        const stream =
          await navigator.mediaDevices.getUserMedia(constraints);
        stream.getTracks().forEach((track) =>
          pc.addTrack(track, stream));
        await pc.setLocalDescription(await pc.createAnswer());
        signaling.send({desc: pc.localDescription});
      } else if (desc.type === 'answer') {
        await pc.setRemoteDescription(desc);
      } else {
        console.log('Unsupported SDP type.');
      }
    } else if (candidate) {
      await pc.addIceCandidate(candidate);
    }
  } catch (err) {
    console.error(err);
  }
};

To see the offer/answer and candidate exchange process in action, see the console log of the single-page video chat example at simpl.info RTCPeerConnection. If you want more, download a complete dump of WebRTC signaling and statistics from the chrome://webrtc-internals page in Google Chrome or the opera://webrtc-internals page in Opera.

Peer discovery

This is a fancy way of asking: "How do I find someone to talk to?"

For phone calls, you have telephone numbers and directories. For online video chat and messaging, you need identity and presence management systems, as well as a way for users to initiate sessions. WebRTC applications need a way for clients to signal to each other that they want to start or join a call. Peer discovery mechanisms are not defined by WebRTC and are not covered here. The process can be as simple as emailing or messaging a URL. For example, with Talky, tawk.to, and Browser Meeting, you can invite others to join a call by sharing a custom link. Developer Chris Ball built an intriguing serverless-webrtc experiment that allows WebRTC call participants to exchange metadata via any messaging service they like, such as IM, email, or homing pigeon.

 

How can you build a signaling service?

To reiterate, the WebRTC standard does not define signaling protocols and mechanisms. Whatever you choose, you need an intermediary server to exchange signaling messages and application data between clients. Sadly, a web app cannot simply shout into the internet, "Connect me to my friend!" Thankfully, signaling messages are small and mostly exchanged at the start of a call. While testing a video chat session with appr.tc, the signaling service processed around 30-45 messages, with a total size for all messages of around 10KB. As well as being relatively undemanding in terms of bandwidth, WebRTC signaling services don't consume much processing or memory, because they only need to relay messages and retain a small amount of session state data (such as which clients are connected).

TIP: The signaling mechanism used to exchange session metadata can also be used to communicate application data. It's just a messaging service!

Push messages from server to client

Message services used for signaling must be bidirectional: client to server and server to client. Bidirectional communication goes against the HTTP client/server request/response model, but various hacks, such as long polling, have been developed over many years in order to push data from a service running on a web server to a web app running in a browser.

More recently, the EventSource API has been widely implemented. This enables server-sent events: data sent from an HTTP server to a browser client. EventSource is designed for one-way messaging, but it can be used in combination with XHR to build a service for exchanging signaling messages. A signaling service passes a message from a caller, delivered by XHR request, by pushing it through EventSource to the callee.
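As an illustration, here is a minimal sketch of that pattern. The /msg and /events endpoints and the room parameter are assumptions for illustration, not part of the original article; any backend that relays JSON between the two parties would do.

// Minimal sketch of XHR-up, EventSource-down signaling.
// The /msg and /events endpoints are hypothetical.
function createChannel(roomId, onMessage) {
  // Downstream: the server pushes the other peer's messages to us.
  const events = new EventSource('/events?room=' + roomId);
  events.onmessage = (e) => onMessage(JSON.parse(e.data));

  // Upstream: we deliver our own messages to the server with a request.
  function send(message) {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', '/msg?room=' + roomId);
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.send(JSON.stringify(message));
  }

  return {send};
}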

WebSocket is a more natural solution, designed for full-duplex client-server communication, with messages that can flow in both directions at the same time. One advantage of a signaling service built with pure WebSocket or server-sent events (EventSource) is that the backend for these APIs can be implemented on a variety of web frameworks common to most web hosting packages, for languages such as PHP, Python, and Ruby. All modern browsers except Opera Mini support WebSocket and, more importantly, all browsers that support WebRTC also support WebSocket, both on desktop and mobile. Even after a session has been established, peers need to poll for signaling messages in case another peer changes or terminates the session. The WebRTC Book app sample takes this option, with some optimizations for polling frequency.
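For example, the SignalingChannel assumed by the W3C example earlier could be sketched over a plain WebSocket like this. The wss:// URL and the JSON framing are assumptions for illustration; the only contract is the interface the W3C code relies on: a send() method and an onmessage callback.

// A minimal WebSocket-based SignalingChannel sketch, matching the
// interface assumed by the W3C example (send() plus onmessage).
class SignalingChannel {
  constructor(url = 'wss://signaling.example.org/ws') {
    this.onmessage = null;
    this.queue = [];
    this.ws = new WebSocket(url);
    // Flush anything queued before the socket finished connecting.
    this.ws.onopen = () => {
      this.queue.forEach((msg) => this.ws.send(msg));
      this.queue = [];
    };
    this.ws.onmessage = (e) => {
      if (this.onmessage) this.onmessage(JSON.parse(e.data));
    };
  }

  send(message) {
    const data = JSON.stringify(message);
    if (this.ws.readyState === WebSocket.OPEN) {
      this.ws.send(data);
    } else {
      this.queue.push(data); // not connected yet; hold the message
    }
  }
}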

Scale signaling

Although a signaling server consumes relatively little bandwidth and CPU per client, the signaling servers of a popular application may have to handle a large number of messages from different locations with high levels of concurrency. WebRTC applications that get a lot of traffic need signaling servers able to handle considerable load. Without going into detail here, there are many options for high-volume, high-performance messaging, including:

  • eXtensible Messaging and Presence Protocol (XMPP), originally known as Jabber - a protocol developed for instant messaging that can be used for signaling. (Server implementations include ejabberd and Openfire. JavaScript clients, such as Strophe.js, use BOSH to emulate bidirectional streaming, but for various reasons BOSH may not be as efficient as WebSocket, and may not scale well for the same reasons.) (Going in the other direction, Jingle is an XMPP extension for enabling voice and video. The WebRTC project uses network and transport components from the libjingle library, a C++ implementation of Jingle.)
  • Open source libraries such as ZeroMQ and OpenMQ (NullMQ applies ZeroMQ concepts to the web platform, using the STOMP protocol over WebSocket)
  • Commercial cloud messaging platforms that use WebSocket (though they may fall back to long polling), such as Pusher, Kaazing, and PubNub (PubNub also has an API for WebRTC)
  • Commercial WebRTC platforms such as  vLine

(Developer Phil Leggetter's Real-Time Web Technologies Guide provides a complete list of messaging services and libraries.)

 

Building signaling services using Socket.io on Node

Below is the code for a simple web application that uses a signaling service built with Socket.io on Node. Socket.io is designed to make it simple to build services for exchanging messages, and it is particularly suitable for WebRTC signaling because of its built-in concept of rooms. This example is not designed to scale as a production-grade signaling service, but it is easy to understand and works for a relatively small number of users. Socket.io uses WebSocket with fallbacks: AJAX long polling, AJAX multipart streaming, and JSONP polling. It has been ported to various backends, but is perhaps best known for its Node version, which is used in this example.

There is no WebRTC in this example. It is designed only to show how to build signaling into a web app. Check the console log to see what happens as clients join a room and exchange messages. The WebRTC codelab gives step-by-step instructions for how to integrate this into a complete WebRTC video chat application.

This is the code for the client index.html page:

<!DOCTYPE html>
<html>
  <head>
    <title>WebRTC client</title>
  </head>
  <body>
    <script src='/socket.io/socket.io.js'></script>
    <script src='js/main.js'></script>
  </body>
</html>

This is the JavaScript file referenced in the client, main.js:

let isInitiator; // set to true when this client is the first in the room
const room = prompt('Enter room name:');
const socket = io.connect();
if (room !== '') {
  console.log('Joining room ' + room);
  socket.emit('create or join', room);
}

socket.on('full', (room) => {
  console.log('Room ' + room + ' is full');
});
socket.on('empty', (room) => {
  isInitiator = true;
  console.log('Room ' + room + ' is empty');
});
socket.on('join', (room) => {
  console.log('Making request to join room ' + room);
  console.log('You are the initiator!');
});
socket.on('log', (array) => {
  console.log.apply(console, array);
});

Here is the complete server application:

const static = require('node-static');
const http = require('http');
const file = new(static.Server)();
const app = http.createServer(function (req, res) {
  file.serve(req, res);
}).listen(2013);

const io = require('socket.io').listen(app);

io.sockets.on('connection', (socket) => {

  // Convenience function to log server messages to the client
  function log() {
    const array = ['>>> Message from server: '];
    for (let i = 0; i < arguments.length; i++) {
      array.push(arguments[i]);
    }
    socket.emit('log', array);
  }

  socket.on('message', (message) => {
    log('Got message:', message);
    // For a real app, would be room only (not broadcast)
    socket.broadcast.emit('message', message);
  });

  socket.on('create or join', (room) => {
    const numClients = io.sockets.clients(room).length;

    log('Room ' + room + ' has ' + numClients + ' client(s)');
    log('Request to create or join room ' + room);

    if (numClients === 0){
      socket.join(room);
      socket.emit('created', room);
    } else if (numClients === 1) {
      io.sockets.in(room).emit('join', room);
      socket.join(room);
      socket.emit('joined', room);
    } else { // max two clients
      socket.emit('full', room);
    }
    socket.emit('emit(): client ' + socket.id +
      ' joined room ' + room);
    socket.broadcast.emit('broadcast(): client ' + socket.id +
      ' joined room ' + room);
  });

});

(You don't need to understand node-static for this example; it just happens to be used here.) To run this application on your local machine, you need Node, Socket.IO, and node-static installed. Node can be downloaded from Node.js (installation is quick and easy). To install Socket.IO and node-static, run the Node Package Manager from a terminal in your application directory:

npm install socket.io
npm install node-static

To start the server, run the following command from a terminal in your app directory:

node server.js

In a browser, open localhost:2013. Then open a new tab or window in any browser and open localhost:2013 again. To see what's happening, check the console. In Chrome and Opera, you can access the console via the developer tools with Ctrl+Shift+J (or Command+Option+J on Mac). Whichever signaling method you choose, your backend and client app will, at a minimum, need to provide services similar to this example.

Signaling gotchas

  • RTCPeerConnection does not start collecting candidates until setLocalDescription() is called. This is  specified in the JSEP IETF draft .
  • Take advantage of Trickle ICE. Call addIceCandidate() as soon as candidates arrive. (A minimal sketch covering both of these gotchas follows this list.)
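Here is one possible way to handle both points, as a hedged sketch: candidates that trickle in from the remote peer before setRemoteDescription() has run are buffered and applied afterwards. The helper names are illustrative, not from the original article.

// Sketch: apply remote candidates as they trickle in, buffering any
// that arrive before setRemoteDescription() has been called.
const pendingCandidates = [];

async function onRemoteCandidate(pc, candidate) {
  if (pc.remoteDescription) {
    await pc.addIceCandidate(candidate);
  } else {
    pendingCandidates.push(candidate); // too early; hold on to it
  }
}

async function onRemoteDescription(pc, desc) {
  await pc.setRemoteDescription(desc);
  // Drain the candidates that arrived before the description did.
  for (const c of pendingCandidates) {
    await pc.addIceCandidate(c);
  }
  pendingCandidates.length = 0;
}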

Readymade signaling servers

If you don't want to do it yourself, here are several ready-made WebRTC signaling servers that use Socket.IO like the example above and integrate with the WebRTC client JavaScript library:

  • webRTC.io  is one of the first abstraction libraries for WebRTC.
  • Signalmaster  is a signaling server created for use with the SimpleWebRTC JavaScript client library.

If you don't want to write any code at all, complete commercial WebRTC platforms are available from companies such as vLine, OpenTok, and Asterisk.

For the record, Ericsson built a signaling server on Apache in the early days of WebRTC. It is somewhat outdated now, but if you're considering something similar, it's worth taking a look at the code.

 

After signaling: Using ICE to deal with NAT and firewalls

WebRTC applications use an intermediary server for metadata signaling, but for the actual media and data streaming once a session is established, RTCPeerConnection attempts to connect clients directly, peer to peer. In a simpler world, every WebRTC endpoint would have a unique address that it could exchange with other peers in order to communicate directly.

In reality, however, most devices live behind one or more layers of NAT, some have antivirus software that blocks certain ports and protocols, and many are behind proxies and corporate firewalls. A firewall and NAT may in fact be implemented by the same device, such as a home Wi-Fi router.

WebRTC applications can use the ICE framework to overcome the complexities of real-world networking. To enable this, your app must pass ICE server URLs to RTCPeerConnection, as described in this article. ICE tries to find the best path to connect peers. It tries all possibilities in parallel and chooses the most efficient option that works. ICE first tries to make a connection using the host address obtained from the device's operating system and network card. If that fails (which it will for devices behind NATs), ICE obtains an external address using a STUN server and, if that fails too, traffic is routed through a TURN relay server. In other words, if a direct (peer-to-peer) connection fails, a STUN server is used to get an external network address and a TURN server is used to relay the traffic.

Every TURN server supports STUN. A TURN server can be thought of as a STUN server with additional built-in relaying capability. ICE also copes with the complexities of NAT setups; in reality, NAT hole-punching may require more than just a public IP:port address, which is why the STUN/TURN server configuration described below is needed.

The URL of the STUN and/or TURN server is (optionally) specified by the WebRTC application in the iceServers configuration object, which is the first parameter of the RTCPeerConnection constructor. For appr.tc, the value looks like this:

{
  'iceServers': [
    {
      'urls': 'stun:stun.l.google.com:19302'
    },
    {
      'urls': 'turn:192.158.29.39:3478?transport=udp',
      'credential': 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
      'username': '28224511:1379330808'
    },
    {
      'urls': 'turn:192.158.29.39:3478?transport=tcp',
      'credential': 'JZEOEt2V3Qb0y27GRntt2u2PAYA=',
      'username': '28224511:1379330808'
    }
  ]
}

Note: The TURN credentials in this example are time-limited and expired in September 2013. TURN servers are expensive to run, so you need to pay for your own server or find a service provider. To test credentials, you can use the candidate gathering sample and check whether you get a candidate of type relay. (A minimal sketch of that check follows.)
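The following is a hedged sketch of such a check. The TURN URL and credentials below are placeholders, not working values; the data channel exists only to trigger ICE gathering.

// Sketch: verify that TURN credentials work by looking for a candidate
// of type 'relay'. Server URL and credentials are placeholders.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.org:3478?transport=udp',
    username: 'someUser',
    credential: 'someSecret',
  }],
});
pc.createDataChannel('probe'); // ICE gathering needs at least one m-line
pc.onicecandidate = ({candidate}) => {
  if (candidate && candidate.candidate.indexOf(' typ relay ') !== -1) {
    console.log('Got a relay candidate - TURN is working:', candidate.candidate);
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));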

Once the RTCPeerConnection has that information, the ICE framework automatically starts the magic. RTCPeerConnection uses the ICE framework to find the best path between peers and uses STUN and TURN servers when necessary.

STUN

NAT gives a device an IP address for use within a private local network, but that address cannot be used externally. Without a public address, there is no way for WebRTC peers to communicate. To get around this problem, WebRTC uses STUN servers.

A STUN server lives on the public internet and has one simple task: check the IP:port address of an incoming request (from an application running behind a NAT) and send that address back as the response. In other words, the application uses a STUN server to discover its IP:port from a public perspective. This process enables a WebRTC peer to get a publicly accessible address for itself, which it then passes to another peer via a signaling mechanism in order to set up a direct link. (In practice, different NATs work in different ways, and there may be multiple NAT layers, but the principle is still the same.) A STUN server doesn't have to do much or remember much, so even a relatively low-spec STUN server can handle a lot of requests.
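To see this discovery in action, here is a minimal sketch that reads the server-reflexive ('srflx') candidates that STUN produces. The Google STUN server mentioned later in this article is used purely for illustration, and the data channel exists only to kick off ICE gathering.

// Sketch: discover the public IP:port that a STUN server sees for this
// host by inspecting server-reflexive ('srflx') candidates.
const pc = new RTCPeerConnection({
  iceServers: [{urls: 'stun:stun.l.google.com:19302'}],
});
pc.createDataChannel('probe'); // gives ICE something to gather for
pc.onicecandidate = ({candidate}) => {
  if (candidate && candidate.candidate.indexOf(' typ srflx ') !== -1) {
    console.log('Public address seen by STUN:', candidate.candidate);
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));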

TURN

RTCPeerConnection tries to set up direct communication between peers over UDP. If that fails, RTCPeerConnection resorts to TCP. If that fails too, a TURN server can be used as a fallback, relaying data between the endpoints. Just to reiterate: TURN is used to relay audio, video, and data streaming between peers, not signaling data!

TURN servers have public addresses, so peers can contact them even from behind a firewall or proxy. TURN servers have a conceptually simple task (to relay a stream), but unlike STUN servers, they inherently consume a lot of bandwidth. In other words, a TURN server needs substantially more network resources.
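During development it can be useful to force the relay path, so you know the TURN server is actually being exercised. This sketch uses the standard iceTransportPolicy option; the server details are placeholders.

// Sketch: force media through TURN only, so host and server-reflexive
// candidates are never used. Useful for testing the relay path.
const pc = new RTCPeerConnection({
  iceServers: [{
    urls: 'turn:turn.example.org:3478',
    username: 'someUser',
    credential: 'someSecret',
  }],
  iceTransportPolicy: 'relay', // only gather relay candidates
});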

This diagram shows TURN in action. Pure STUN failed, so each peer uses a TURN server.

 

Deploy STUN and TURN servers

For testing, Google runs a public STUN server, stun.l.google.com:19302, which is used by appr.tc. For production STUN/TURN service, use rfc5766-turn-server. Source code for the STUN and TURN server is available on GitHub, where you can also find links to several sources of information about server installation. A VM image for Amazon Web Services is also available.

An alternative TURN server is restund, available as source code and also as an AWS VM image. Below are instructions for setting up restund on Compute Engine.

  1. Open the firewall as needed for tcp=443 and udp/tcp=3478.
  2. Create four instances, one for each public IP, using the standard Ubuntu 12.06 image.
  3. Set the local firewall config (allow ANY in, allow ANY out).
  4. Install the build tools:
    sudo apt-get install make
    sudo apt-get install gcc
  5. Install libre from creytiv.com/re.html.
  6. Fetch restund from creytiv.com/restund.html and unpack it.
  7. wget hancke.name/restund-auth.patch and apply it with patch -p1 < restund-auth.patch.
  8. Run make and sudo make install for libre and restund.
  9. Adapt restund.conf to your needs (replace the IP addresses and make sure it contains the same shared secret) and copy it to /etc.
  10. Copy restund/etc/restund to /etc/init.d/.
  11. Configure restund:
    a. Set LD_LIBRARY_PATH.
    b. Copy restund.conf to /etc/restund.conf.
    c. Set restund.conf to use the right 10. IP address.
  12. Run restund.
  13. Test using the stund client from a remote machine: ./client IP:port

 

You may also want to take a look at Justin Uberti's proposed IETF standard for a REST API for access to TURN services.

Beyond one-to-one: Multiparty WebRTC

It is easy to imagine media streaming use cases that go beyond a simple one-to-one call: for example, a video conference for a group of colleagues, or a public event with one speaker and millions of viewers.

WebRTC applications can use multiple RTCPeerConnections so that every endpoint connects to every other endpoint in a mesh configuration. This is the approach taken by apps such as talky.io, and it works remarkably well for a small handful of peers; beyond that, processing and bandwidth consumption become excessive, especially for mobile clients. (A sketch of the mesh approach appears below.) Alternatively, a WebRTC application can choose one endpoint to distribute streams to all the others in a star configuration. It is also possible to run a WebRTC endpoint on a server and build your own redistribution mechanism (webrtc.org provides a sample client application).
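Here is a minimal sketch of the mesh approach, reusing configuration, localStream, and signaling in the style of the earlier examples. The peer-ID field in the messages and the showRemoteVideo() helper are hypothetical, introduced only for illustration.

// Sketch of a full mesh: one RTCPeerConnection per remote peer, keyed
// by a peer ID handed out by the signaling service (assumed here).
const peers = new Map();

function connectTo(peerId) {
  const pc = new RTCPeerConnection(configuration);
  peers.set(peerId, pc);
  // Send our local tracks to this peer, and route its candidates back
  // through signaling, tagged with the peer's ID.
  localStream.getTracks().forEach((track) => pc.addTrack(track, localStream));
  pc.onicecandidate = ({candidate}) =>
    signaling.send({to: peerId, candidate});
  pc.ontrack = ({streams}) => showRemoteVideo(peerId, streams[0]);
  return pc;
}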

Starting with Chrome 31 and Opera 18, a MediaStream received on one RTCPeerConnection can be used as the input to another. This enables more flexible architectures, because a web application can handle call routing by choosing which other peer to connect to. To see it in action, check out the WebRTC samples Peer connection relay and Multiple peer connections.
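For instance, a middle peer can forward what it receives on one connection into another, which is roughly what the Peer connection relay sample demonstrates. In this sketch, pcFromAlice and pcToBob are assumed to be negotiated separately over their own signaling.

// Sketch: relay media at one endpoint by feeding tracks received on
// one RTCPeerConnection into a second one. Signaling for both
// connections is assumed to happen elsewhere.
const pcFromAlice = new RTCPeerConnection(configuration);
const pcToBob = new RTCPeerConnection(configuration);

pcFromAlice.ontrack = ({track, streams}) => {
  // Forward Alice's incoming track (and its stream) on to Bob.
  pcToBob.addTrack(track, streams[0]);
};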

Multipoint Control Unit

For a large number of endpoints, a better option is a multipoint control unit (MCU). This is a server that works as a bridge to distribute media among a large number of participants. An MCU can cope with different resolutions, codecs, and frame rates within a video conference; handle transcoding; do selective stream forwarding; and mix or record audio and video. For multi-party calls, there is a lot to consider, particularly how to display multiple video inputs and mix audio from multiple sources. Cloud platforms such as vLine also attempt to optimize traffic routing.

You can buy a complete MCU hardware kit or build your own. Several open source MCU software options are available. For example, Licode (formerly Lynckia) produces an open source MCU for WebRTC. 

Beyond the browser: VoIP, calling and messaging

The standardized nature of WebRTC makes it possible to establish communication between a WebRTC application running in a browser and a device or service running on another communication platform, such as a phone or a video conferencing system. SIP is the signaling protocol used by VoIP and video conferencing systems. To enable communication between a WebRTC web application and a SIP client (such as a video conferencing system), WebRTC needs a proxy server to mediate signaling. Signaling must flow through the gateway but, once communication has been established, SRTP traffic (video and audio) can flow directly, peer to peer.

The Public Switched Telephone Network (PSTN) is the circuit-switched network of all older analog desktop telephones. For calls between WebRTC web applications and telephones, traffic must go through a PSTN gateway. Likewise, WebRTC web applications need an intermediary XMPP server to communicate with Jingle endpoints such as IM clients. Developed by Google as an extension to XMPP, Jingle enables voice and video for messaging services. Current WebRTC implementations are based on the C++ libjingle library, an implementation of Jingle initially developed for Google Talk.

Many applications, libraries and platforms take advantage of WebRTC's ability to communicate with the outside world:

  • sipML5: an open source JavaScript SIP client
  • jsSIP: a JavaScript library for SIP
  • Phono: an open source JavaScript telephony API built as a plugin
  • Zingaya: an embeddable phone widget
  • Twilio: voice and messaging
  • Uberconference: conferencing

The sipML5 developers also built the webrtc2sip gateway. Tethr and Tropo have demonstrated a framework for disaster communications "in a briefcase", using an OpenBTS cell to enable communication between feature phones and computers via WebRTC. That's telephone communication without a carrier!

Find out more

The WebRTC codelab provides step-by-step instructions for how to build a video and text chat app using a Socket.io signaling service running on Node.

Google I/O WebRTC presentation from 2013 with WebRTC tech lead, Justin Uberti

Chris Wilson's SFHTML5 presentation—Introduction to WebRTC Apps

The 350-page book WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web provides a lot of detail about data and signaling pathways, and includes a number of detailed network topology diagrams.

WebRTC and Signaling: What Two Years Has Taught Us—TokBox blog post about why leaving signaling out of the spec was a good idea

Ben Strong's A Practical Guide to Building WebRTC Apps provides a lot of information about WebRTC topologies and infrastructure.

The WebRTC chapter in Ilya Grigorik's High Performance Browser Networking goes deep into WebRTC architecture, use cases, and performance.
