(Repost) Introduction to Android WebRTC

Source: "Introduction to Android WebRTC", CharonChui's Blog (CSDN)



Introduction to WebRTC

The name WebRTC is short for Web Real-Time Communication, a technology that lets web browsers hold real-time voice or video conversations. Google open-sourced the project on June 3, 2011, aiming to make it a standard for client-side video calls. Before Google open-sourced WebRTC, communication products from Microsoft and Apple (such as Skype) already held a large share of the market, and Google open-sourced it partly to expand its presence quickly. It has since been widely supported and adopted in the industry and has become the standard for the next generation of video calls. More details can be found on the official website.

WebRTC is hailed as a new milestone for the open web and one of the most important innovations in web development in recent years. It allows web developers to add video chat or peer-to-peer data transfer to their web applications without complex code or expensive infrastructure. It currently works in Chrome, Firefox, and Opera, with more browsers to come, and has the potential to reach billions of devices.

However, WebRTC is often misunderstood as being only for browsers. In fact, one of its most important features is interoperability between native and web applications, and this capability is rarely exploited.

This article mainly takes the open source project AndroidRTC as an example.
After downloading, open it with Android Studio. It may report an error saying that org.json:json:20090211 cannot be found. In that case, change the dependencies block in AndroidRTC/webrtc-client/build.gradle from:

dependencies {
    compile ('com.github.nkzawa:socket.io-client:0.4.1')
    compile 'io.pristine:libjingle:8871@aar'
}

to:

dependencies {
    compile ('com.github.nkzawa:socket.io-client:0.4.1') { // WebSocket-related
        exclude group: 'org.json', module: 'json' // newly added; without it the build fails with "org.json:json:20090211 not found"
    }

    compile 'io.pristine:libjingle:8871@aar' // the official WebRTC aar package
}

and the project will build.

After running it you will see a black screen on entry, because the app depends on a Node.js signaling service.

The WebRTC live-streaming setup consists of three parts:

Node.js server
Desktop client
Android client
If Node.js is not installed on your computer, install it first (download it from the official Node.js website), since the signaling server depends on it.

Then download the ProjectRTC project locally:
- git clone https://github.com/pchab/ProjectRTC.git
- cd ProjectRTC/
- npm install
- npm start
The service runs on port 3000 by default; you can then open localhost:3000 in your browser.

If the page opens, the service is running successfully.

Next, open the Android client; you will find that the screen is still black.
This is because we need to change strings.xml in the project so that the host points to the LAN IP address of the computer and the port points to the port the Node server is running on, as follows:

<resources>
    <string name="app_name">AndroidRTC</string>
    <string name="host">172.16.55.27</string>
    <string name="port">3000</string>
    <string name="action_settings ">Options</string>
</resources>

Okay, now run the Android project again, then open the browser, go to localhost:3000, and click Start on the page; you will see the picture captured by the computer's camera, as shown below:

Then click Call on the left to request a connection to the mobile client; after that you can see the picture in both the browser and on the phone, as shown below:

Screen on mobile phone:

Now that it is connected, let's look at the code in detail. There are relatively few source files, mainly three classes.

Among them, WebRtcClient implements the core logic, so we will start with it.

The Android-related APIs include VideoCapturerAndroid, VideoRenderer, MediaStream, PeerConnection,
and PeerConnectionFactory. Below we will explain them one by one.

Before starting, you need to create a PeerConnectionFactory, which is the core API for using WebRTC on Android.

PeerConnectionFactory
This is the core class of Android WebRTC. Understanding this class and how it creates everything else is the key to understanding WebRTC on Android.
It does not quite work the way you might expect, so let's dig into it.

We first go to the WebRtcClient class to find where PeerConnectionFactory is initialized, via PeerConnectionFactory.initializeAndroidGlobals (a sketch of that call follows the parameter list below).

The parameters of initializeAndroidGlobals are:
- context: the application context.
- initializeAudio: Boolean, whether to initialize the audio part.
- initializeVideo: Boolean, whether to initialize the video part. Skipping either of these lets you avoid requesting the corresponding permissions, for example in data-channel-only applications.
- videoCodecHwAcceleration: Boolean, whether to allow hardware acceleration.
- renderEGLContext: provides support for hardware video decoding by allowing a shared EGL context to be created on the video-decoding thread. It can be null; in that case hardware video decoding produces yuv420 frames instead of texture frames.
- initializeAndroidGlobals also returns a Boolean: true means everything is OK, false means something failed.
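With those parameters in mind, here is a minimal sketch of the initialization (variable names such as appContext and eglContext are placeholders, assuming the io.pristine libjingle API this project depends on):

// Must be called once before any other WebRTC API is used
boolean ok = PeerConnectionFactory.initializeAndroidGlobals(
        appContext,   // application context
        true,         // initializeAudio
        true,         // initializeVideo
        true,         // videoCodecHwAcceleration
        eglContext);  // renderEGLContext, may be null
if (!ok) {
    throw new RuntimeException("initializeAndroidGlobals failed");
}
// Everything else (sources, tracks, peer connections) is created from this factory
PeerConnectionFactory factory = new PeerConnectionFactory();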

With a peerConnectionFactory instance, video and audio can be obtained from the user device and finally rendered to the screen. In Android, we need to understand VideoCapturerAndroid, VideoSource, VideoTrack and VideoRenderer.
Start with VideoCapturerAndroid:

VideoCapturerAndroid
VideoCapturerAndroid is essentially a wrapper around a series of Camera APIs that makes it convenient to access the video stream of a camera device. It lets you enumerate the camera devices on the phone, including the front and rear cameras.
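For example, a minimal sketch of enumerating cameras (assuming the static helpers exposed by the VideoCapturerAndroid class in the libjingle build used here):

int deviceCount = VideoCapturerAndroid.getDeviceCount();
String frontCamera = VideoCapturerAndroid.getNameOfFrontFacingDevice();
String backCamera = VideoCapturerAndroid.getNameOfBackFacingDevice();
// Create a capturer for whichever camera you want to stream from
VideoCapturer capturer = VideoCapturerAndroid.create(frontCamera);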

The code in the WebRtcClient class is:

private void setCamera(){
    localMS = factory.createLocalMediaStream("ARDAMS");
    if(pcParams.videoCallEnabled){
        MediaConstraints videoConstraints = new MediaConstraints();
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(pcParams.videoHeight)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(pcParams.videoWidth)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(pcParams.videoFps)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(pcParams.videoFps)));

        videoSource = factory.createVideoSource(getVideoCapturer(), videoConstraints);
        localMS.addTrack(factory.createVideoTrack("ARDAMSv0", videoSource));
    }

    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));

    mListener.onLocalStream(localMS);
}

private VideoCapturer getVideoCapturer() {
    // Get the name of the front camera
    String frontCameraDeviceName = VideoCapturerAndroid.getNameOfFrontFacingDevice();
    // Create a capturer for the front camera
    return VideoCapturerAndroid.create(frontCameraDeviceName);
}

With this, a MediaStream containing the video captured from the local device can be created and sent to the other end. But before doing that, let's first look at how to display our own video in the application.

VideoSource/VideoTrack
To get useful information out of a VideoCapturer instance, whether the final goal is to obtain a suitable media stream for the connection or simply to render it to the user, we need to understand the VideoSource and VideoTrack classes.

VideoSource provides methods to start and stop capturing from the device. This is useful for preserving battery life when video capture is not needed.
VideoTrack is simply a wrapper used to add a VideoSource to a MediaStream object.
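A typical pattern is to pause capture when the activity pauses and resume it later; a minimal sketch (assuming the stop()/restart() methods of the VideoSource in this libjingle build):

// Call from the activity's onPause()/onResume() to save battery
public void pauseVideoCapture() {
    if (videoSource != null) {
        videoSource.stop();
    }
}

public void resumeVideoCapture() {
    if (videoSource != null) {
        videoSource.restart();
    }
}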

Let's see how they work together in code. capturer is an instance of VideoCapturer, and videoConstraints is an instance of MediaConstraints.
The code is still in the setCamera() method posted above:

private void setCamera(){
    localMS = factory.createLocalMediaStream("ARDAMS");
    if(pcParams.videoCallEnabled){
        MediaConstraints videoConstraints = new MediaConstraints();
        ......
        // Create the VideoSource
        videoSource = factory.createVideoSource(getVideoCapturer(), videoConstraints);
        // Create the VideoTrack and add it to the stream
        localMS.addTrack(factory.createVideoTrack("ARDAMSv0", videoSource));
    }
    // Create the AudioSource
    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    // Create the AudioTrack and add it to the stream
    localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));

    mListener.onLocalStream(localMS);
}

AudioSource/AudioTrack
AudioSource and AudioTrack are similar to VideoSource and VideoTrack, except that no AudioCapturer is needed to access the microphone; the code is shown above.

VideoRenderer
Moving on to VideoRenderer: the WebRTC library lets you implement your own rendering via VideoRenderer.Callbacks. It also provides a very convenient default, VideoRendererGui. In short, VideoRendererGui is a GLSurfaceView on which you can draw your own video stream. Let's look at how it works in code and how to add a renderer to a VideoTrack.
Because this is rendering code, it lives in RtcActivity rather than WebRtcClient:

// Local preview screen position before call is connected.
private static final int LOCAL_X_CONNECTING = 0;
private static final int LOCAL_Y_CONNECTING = 0;
private static final int LOCAL_WIDTH_CONNECTING = 100;
private static final int LOCAL_HEIGHT_CONNECTING = 100;
// Local preview screen position after call is connected.
private static final int LOCAL_X_CONNECTED = 72;
private static final int LOCAL_Y_CONNECTED = 72;
private static final int LOCAL_WIDTH_CONNECTED = 25;
private static final int LOCAL_HEIGHT_CONNECTED = 25;
// Remote video screen position
private static final int REMOTE_X = 0;
private static final int REMOTE_Y = 0;
private static final int REMOTE_WIDTH = 100;
private static final int REMOTE_HEIGHT = 100;

private VideoRendererGui.ScalingType scalingType = VideoRendererGui.ScalingType.SCALE_ASPECT_FILL;
private GLSurfaceView vsv;
private VideoRenderer.Callbacks localRender;
private VideoRenderer.Callbacks remoteRender;
private WebRtcClient client;
private String mSocketAddress;
private String callerId;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    requestWindowFeature(Window.FEATURE_NO_TITLE);
    getWindow().addFlags(
            LayoutParams.FLAG_FULLSCREEN
                    | LayoutParams.FLAG_KEEP_SCREEN_ON
                    | LayoutParams.FLAG_DISMISS_KEYGUARD
                    | LayoutParams.FLAG_SHOW_WHEN_LOCKED
                    | LayoutParams.FLAG_TURN_SCREEN_ON);
    setContentView(R.layout.main);
    mSocketAddress = "http://" + getResources().getString(R.string.host);
    mSocketAddress += (":" + getResources().getString(R.string.port) + "/");

    // Get the GLSurfaceView
    vsv = (GLSurfaceView) findViewById(R.id.glview_call);
    vsv.setPreserveEGLContextOnPause(true);
    vsv.setKeepScreenOn(true);
    // Hand it to VideoRendererGui
    VideoRendererGui.setView(vsv, new Runnable() {
        @Override
        public void run() {
            init();
        }
    });

    // Create the local and remote renderers
    remoteRender = VideoRendererGui.create(
            REMOTE_X, REMOTE_Y,
            REMOTE_WIDTH, REMOTE_HEIGHT, scalingType, false);
    localRender = VideoRendererGui.create(
            LOCAL_X_CONNECTING, LOCAL_Y_CONNECTING,
            LOCAL_WIDTH_CONNECTING, LOCAL_HEIGHT_CONNECTING, scalingType, true);

    final Intent intent = getIntent();
    final String action = intent.getAction();

    if (Intent.ACTION_VIEW.equals(action)) {
        final List<String> segments = intent.getData().getPathSegments();
        callerId = segments.get(0);
    }
}

......


public void onLocalStream(MediaStream localStream) {
    // Add the renderer to the VideoTrack
    localStream.videoTracks.get(0).addRenderer(new VideoRenderer(localRender));
    VideoRendererGui.update(localRender,
            LOCAL_X_CONNECTING, LOCAL_Y_CONNECTING,
            LOCAL_WIDTH_CONNECTING, LOCAL_HEIGHT_CONNECTING,
            scalingType);
}

@Override
public void onAddRemoteStream(MediaStream remoteStream, int endPoint) {
    remoteStream.videoTracks.get(0).addRenderer(new VideoRenderer(remoteRender));
    VideoRendererGui.update(remoteRender,
            REMOTE_X, REMOTE_Y,
            REMOTE_WIDTH, REMOTE_HEIGHT, scalingType);
    VideoRendererGui.update(localRender,
            LOCAL_X_CONNECTED, LOCAL_Y_CONNECTED,
            LOCAL_WIDTH_CONNECTED, LOCAL_HEIGHT_CONNECTED,
            scalingType);
}

@Override
public void onRemoveRemoteStream(int endPoint) {
    VideoRendererGui.update(localRender,
            LOCAL_X_CONNECTING, LOCAL_Y_CONNECTING,
            LOCAL_WIDTH_CONNECTING, LOCAL_HEIGHT_CONNECTING,
            scalingType);
}

MediaConstraints
MediaConstraints is the WebRTC library's way of expressing the different constraints that can be applied to the audio and video tracks of a MediaStream. For most methods that require MediaConstraints, a simple MediaConstraints instance will do:
private MediaConstraints pcConstraints = new MediaConstraints();
......
pcConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveAudio", "true"));
pcConstraints.mandatory.add(new MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"));
pcConstraints.optional.add(new MediaConstraints.KeyValuePair("DtlsSrtpKeyAgreement", "true"));

......
this.pc = factory.createPeerConnection(iceServers, pcConstraints, this);

MediaStream
After seeing our own video locally, we need a way to let the other party see it. We have to create the MediaStream ourselves. Next, let's look at how the local VideoTrack and AudioTrack are added to create a suitable MediaStream.
The code is still in the setCamera() method analyzed earlier:

private MediaStream localMS;

private void setCamera(){
    // Create the MediaStream
    localMS = factory.createLocalMediaStream("ARDAMS");
    if(pcParams.videoCallEnabled){
        MediaConstraints videoConstraints = new MediaConstraints();
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxHeight", Integer.toString(pcParams.videoHeight)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxWidth", Integer.toString(pcParams.videoWidth)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("maxFrameRate", Integer.toString(pcParams.videoFps)));
        videoConstraints.mandatory.add(new MediaConstraints.KeyValuePair("minFrameRate", Integer.toString(pcParams.videoFps)));

        videoSource = factory.createVideoSource(getVideoCapturer(), videoConstraints);
        // Add the VideoTrack to the stream
        localMS.addTrack(factory.createVideoTrack("ARDAMSv0", videoSource));
    }

    AudioSource audioSource = factory.createAudioSource(new MediaConstraints());
    // Add the AudioTrack to the stream
    localMS.addTrack(factory.createAudioTrack("ARDAMSa0", audioSource));

    mListener.onLocalStream(localMS);
}

We now have a MediaStream instance containing a video stream and an audio stream, and our preview screen is displayed on the screen. The next thing to do is to send this information to the other party.
This article will not cover how to build your own signaling; we will go straight to the corresponding API methods and how they relate to the web side. For reference, AppRTC uses autobahn to make WebSocket connections to its signaling server.

PeerConnection
Now that we have our own MediaStream, we can start connecting to the remote end. Creating a PeerConnection is simple; it only requires the assistance of the PeerConnectionFactory.

private PeerConnection pc;
private PeerConnectionFactory factory;
private LinkedList<PeerConnection.IceServer> iceServers = new LinkedList<>();


iceServers.add(new PeerConnection.IceServer("stun:23.21.150.121"));
iceServers.add(new PeerConnection.IceServer("stun:stun.l.google.com:19302"));

this.pc = factory.createPeerConnection(iceServers, pcConstraints, this);

The functions of the parameters are as follows:

- iceServers: required when you connect to an external device or network. Adding STUN and TURN servers here allows connections to be made even under poor network conditions.
- constraints: the MediaConstraints instance introduced earlier; it should include OfferToReceiveAudio and OfferToReceiveVideo.
- observer: a PeerConnection.Observer instance.

PeerConnection is very similar to the corresponding API on the web and includes methods such as addStream, addIceCandidate, createOffer, createAnswer, getLocalDescription, and setRemoteDescription. To understand how they are coordinated to establish a communication channel between two endpoints, let's walk through the following methods:

addStream
As its name says, this adds a MediaStream to the PeerConnection. If you want the other party to see your video and hear your audio, you need this method.
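A minimal sketch, reusing the pc and localMS created earlier:

// Hand the local stream to the connection so it can be sent to the remote side
pc.addStream(localMS);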

addIceCandidate
Once the internal ICE framework finds candidates that would allow the other party to connect to you, it creates IceCandidates and delivers them through PeerConnection.Observer.onIceCandidate; you then pass them to the other party over whatever signaling channel you choose. Conversely, when you receive the other party's candidates over signaling, use addIceCandidate to add them to the PeerConnection so that it can try to establish the connection with the information available.
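A minimal sketch of the receiving side (the JSON field names "id", "label", and "candidate" are assumptions; adapt them to whatever your signaling channel actually sends):

// Called when a remote candidate arrives over the signaling channel
private void onRemoteCandidateReceived(JSONObject payload) throws JSONException {
    IceCandidate candidate = new IceCandidate(
            payload.getString("id"),         // sdpMid
            payload.getInt("label"),         // sdpMLineIndex
            payload.getString("candidate")); // the candidate string itself
    pc.addIceCandidate(candidate);
}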

createOffer/createAnswer
These two methods are used to establish the initial call. In WebRTC there are the notions of caller and callee: one side calls and the other answers. createOffer is used by the caller; it takes an SdpObserver, which lets you obtain and transmit the Session Description Protocol (SDP) to the other party, plus a MediaConstraints. Once the callee receives the offer, it creates an answer and transmits it back to the caller. SDP describes to the other party the desired data format (video, formats, codecs, encryption, resolution, size, and so on). Once the caller receives the answer, both parties have agreed on the requirements of the connection, such as video, audio, and codecs.
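A minimal sketch of the caller side (sendSdpToPeer() is a hypothetical helper standing in for your signaling channel):

pc.createOffer(new SdpObserver() {
    @Override
    public void onCreateSuccess(SessionDescription sdp) {
        // Keep our own description, then ship it to the callee over signaling
        pc.setLocalDescription(this, sdp);
        sendSdpToPeer(sdp.type.canonicalForm(), sdp.description);
    }
    @Override public void onSetSuccess() {}
    @Override public void onCreateFailure(String error) {}
    @Override public void onSetFailure(String error) {}
}, pcConstraints);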

setLocalDescription/setRemoteDescription
These set the SDP generated by createOffer and createAnswer, including SDP received from the remote end. They let the internal PeerConnection configure the link so that audio and video transfer can actually start.
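A minimal sketch of the callee side (offerSdpString is assumed to be the SDP text received over signaling, and the enclosing class is assumed to implement SdpObserver):

// Apply the caller's offer as the remote description, then create our answer
SessionDescription remoteSdp = new SessionDescription(
        SessionDescription.Type.fromCanonicalForm("offer"), offerSdpString);
pc.setRemoteDescription(this, remoteSdp);
pc.createAnswer(this, pcConstraints);
// In onCreateSuccess() the answer is set as the local description and sent
// back to the caller, which in turn calls setRemoteDescription() with it.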

PeerConnection.Observer
This interface provides a way to monitor PeerConnection events, for example when a MediaStream arrives, when IceCandidates are found, or when communication needs to be re-established. It must be implemented so that incoming events can be handled properly, for example sending newly discovered IceCandidates to the other party.
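A minimal sketch of the two callbacks this flow relies on most (sendCandidateToPeer(), mListener, and endPoint are placeholders for the signaling helper, the UI listener, and the peer index):

@Override
public void onIceCandidate(IceCandidate candidate) {
    // A new local candidate was found; ship it to the other side over signaling
    sendCandidateToPeer(candidate.sdpMid, candidate.sdpMLineIndex, candidate.sdp);
}

@Override
public void onAddStream(MediaStream mediaStream) {
    // The remote stream has arrived; hand it to the UI so it can be rendered
    mListener.onAddRemoteStream(mediaStream, endPoint);
}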

WebRTC opens up person-to-person communication that is free for developers and free for end users. It enables not only video chat but also other applications such as health services, low-latency file transfer, torrent downloading, and even games.
To see a real WebRTC application in action, try appear.in. It works flawlessly between browsers and native apps, is free for up to eight people in the same room, and requires no installation or registration.

Finally, to summarize:

Mainstream protocols for live streaming:

RTMP (Real Time Messaging Protocol): an open protocol developed by Adobe Systems for transmitting audio, video, and data between Flash players and servers. At present, about 95% of mainstream live-streaming platforms are based on this protocol.

Advantages: mature, stable, good quality, etc.
Disadvantages: higher latency, poor real-time interaction, etc.
WebRTC: short for Web Real-Time Communication, a technology that enables web browsers to hold real-time voice or video conversations. Google acquired the underlying technology in 2010 when it bought Global IP Solutions for $68.2 million and open-sourced the project in May 2011; it has since been widely supported and adopted in the industry and has become the standard for the next generation of video calls.

Advantages: real-time interaction, relatively low development effort, etc.
Disadvantages: because it insists on real-time delivery, video quality degrades badly when the network is poor and the experience suffers; it also puts heavy demands on the server, etc.
Since WebRTC is so good, why not just use it directly for live streaming? The result is usually not as perfect as expected.

WebRTC as a whole is not well suited to live streaming. It was designed to let two browsers or native apps connect directly and exchange media streams or data, the so-called peer-to-peer communication, which in most cases does not need to be relayed by a server; the communication model is therefore essentially one-to-one. Most live-streaming services today, however, are one-to-many: a single host may have thousands of viewers, which is impossible to serve with plain P2P. Current live-streaming solutions therefore put a streaming server in the middle for central management: the host's data is first sent to the server, and to support large audiences the server also uses CDN edge nodes for regional acceleration, so viewers do not connect to the host directly but receive the data from the server.

WebRTC is only suitable for small-scale (up to about eight people) audio and video conferences, not for live streaming:
1. Video: the VPx encoders are weak, and H.264 cannot be used for patent reasons; to do it well you have to replace the codec with your own H.264/H.265 implementation.
2. Audio: the audio path is tuned for speech; the results on music and other non-voice audio are poor.
3. Network: it adapts poorly to the wide variety of real-world (especially domestic Chinese) networks; with a bad network or too many participants it stutters badly.
4. Signal processing: having compared GIPS and WebRTC side by side, the author is certain that the open-source code is a cut-down version of GIPS.
5. Scale: usable with up to about ten people; with more than that it falls over.

For a live-streaming server today, use nginx with the rtmp-module, and after setting it up test pushing the camera with the ffmpeg command line. The host client should push the stream to rtmp-module over RTMP; viewers watch over RTMP or HTTP-FLV, PC web viewers watch with Flash NetStream, and mobile web viewers watch with HLS/m3u8. If the test succeeds, you are about to go live, and you expect load, split nginx into layers (an access layer plus an exchange layer) and change a couple of lines of code. If you do not have the budget to deploy servers nationwide, replace nginx-rtmp-module with a CDN, or simply use a CDN's live-streaming service from the start, for example Wangsu (Douyu's live-streaming provider). This is the proven path; don't take a detour.

If WebRTC is not suitable for live streaming, why write so much about it? Because the technical modules inside WebRTC are very well suited to solving individual problems in the live-streaming pipeline, and most live-streaming frameworks already use parts of it, for example audio/video sending and receiving, echo cancellation, and noise suppression. In summary: you can use the modules inside WebRTC to solve specific technical problems in live streaming, but WebRTC itself is not suitable as the overall framework of a live-streaming system.

References:

Introduction to WebRTC on Android
Android WebRTC Compile
Android WebRTC open source project
ProjectRTC
Getting Started with WebRTC
WebRTC Github
WebRTC GoogleSource
Android WebRTC Samples
WebRTC API
Is WebRTC suitable for live broadcast?
————————————————
Copyright statement: This article is an original article by the CSDN blogger "CharonChui", licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this statement when reprinting.
Original link: https://blog.csdn.net/Charon_Chui/article/details/80510945
