WebRTC Android Source Code Analysis --- The Full Path from Camera Preview Data to On-Screen Display

In this part we aim to explain one thing thoroughly:

the business logic and processing Android goes through from capturing camera data to displaying it on screen. Later we will also write a standalone demo that reproduces the same flow.

Along the way we will use Visio UML class diagrams to tie the pieces together.

We mainly analyze the business logic of the Java layer, so please study the class relationship diagrams carefully.

[UML class relationship diagrams]

CallActivity is the screen shown during a call. Camera operations only begin after we have successfully connected to the room server:

  private void onConnectedToRoomInternal(final SignalingParameters params) {
    final long delta = System.currentTimeMillis() - callStartedTimeMs;

    signalingParameters = params;
    logAndToast("Creating peer connection, delay=" + delta + "ms");
    VideoCapturer videoCapturer = null;
    if (peerConnectionParameters.videoCallEnabled) {
      videoCapturer = createVideoCapturer();
    }
    peerConnectionClient.createPeerConnection(
        localProxyVideoSink, remoteSinks, videoCapturer, signalingParameters);

localProxyVideoSink and remoteSinks are merely proxies for VideoSink; they delegate to the final, real consumers of the data.
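Such a proxy is trivial; a minimal sketch (modeled on the ProxyVideoSink used in the demo) looks roughly like this:

import org.webrtc.VideoFrame;
import org.webrtc.VideoSink;

// Forwards frames to a target sink that can be swapped at runtime.
class ProxyVideoSink implements VideoSink {
  private VideoSink target;

  @Override
  public synchronized void onFrame(VideoFrame frame) {
    if (target == null) {
      // No consumer attached yet; drop the frame.
      return;
    }
    target.onFrame(frame);
  }

  public synchronized void setTarget(VideoSink target) {
    this.target = target;
  }
}

CallActivity can hand this proxy to the peer connection early and attach the real renderer (e.g. a SurfaceViewRenderer) later via setTarget().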

Because there are now two ways to open the camera (the Camera1 and Camera2 APIs), CameraEnumerator encapsulates that choice. Here we only cover the first path, i.e. Camera1Capturer.
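As a rough sketch of what createVideoCapturer() boils down to (the helper name and the front-facing preference are simplifications, not the exact demo code), the enumerator is used like this:

import org.webrtc.Camera1Enumerator;
import org.webrtc.CameraEnumerator;
import org.webrtc.VideoCapturer;

// Pick the first front-facing camera reported by the enumerator and build a capturer for it.
static VideoCapturer createCameraCapturer(CameraEnumerator enumerator) {
  for (String deviceName : enumerator.getDeviceNames()) {
    if (enumerator.isFrontFacing(deviceName)) {
      VideoCapturer capturer = enumerator.createCapturer(deviceName, null /* eventsHandler */);
      if (capturer != null) {
        return capturer;
      }
    }
  }
  return null;
}

// 'true' asks Camera1 to capture into a SurfaceTexture instead of NV21 byte buffers.
VideoCapturer videoCapturer = createCameraCapturer(new Camera1Enumerator(true));

Swapping in new Camera2Enumerator(context) gives the Camera2 path without changing any downstream code.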

If we implemented the capture and rendering ourselves, a few steps would be unavoidable (see the sketch below):

1. Start the camera.
2. Bind the camera output directly to a texture object, and get notified of texture updates via setOnFrameAvailableListener.
3. When the texture update notification arrives, draw the data onto a Surface with an OpenGL shader.

WebRTC is, in the end, a wrapper around these same three steps.
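Outside of WebRTC, the three steps map onto plain Android APIs roughly as follows (a minimal sketch; it assumes an EGL context is current on the thread handling the frames, and drawOesTextureToSurface() is a hypothetical helper that runs the OES shader):

import android.graphics.SurfaceTexture;
import android.hardware.Camera;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;

import java.io.IOException;

public class RawCameraPreview {
  private Camera camera;
  private SurfaceTexture surfaceTexture;
  private int oesTextureId;

  public void start() throws IOException {
    // Step 1: open the camera.
    camera = Camera.open();

    // Step 2: create an OES texture and feed the camera preview into it via a SurfaceTexture.
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    oesTextureId = textures[0];
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
    surfaceTexture = new SurfaceTexture(oesTextureId);

    // Step 3: on every new frame, latch the texture and draw it with an OES shader.
    // In a real app updateTexImage() must run on the thread that owns the GL context.
    surfaceTexture.setOnFrameAvailableListener(st -> {
      surfaceTexture.updateTexImage();
      // drawOesTextureToSurface(oesTextureId);  // hypothetical shader helper
    });

    camera.setPreviewTexture(surfaceTexture);
    camera.startPreview();
  }
}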

SurfaceTextureHelper is essentially a handler thread; take a look at its member fields:

  private final Handler handler;
  private final EglBase eglBase;
  private final SurfaceTexture surfaceTexture;
  private final int oesTextureId;
  private final YuvConverter yuvConverter = new YuvConverter();

Internally it maintains a handler thread and watches for texture updates; when the texture is updated, it wraps the texture data into a VideoFrame and delivers it through a callback.

    oesTextureId = GlUtil.generateTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES);
    surfaceTexture = new SurfaceTexture(oesTextureId);
    setOnFrameAvailableListener(surfaceTexture, (SurfaceTexture st) -> {
      hasPendingTexture = true;
      tryDeliverTextureFrame();
    }, handler);

The VideoFrame is delivered to a listener as the callback argument, and we can register that listener from outside:

 /**
   * Start to stream textures to the given |listener|. If you need to change listener, you need to
   * call stopListening() first.
   */
  public void startListening(final VideoSink listener) {
    startListeningInternal(listener);
  }
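As a hedged usage sketch (eglBase is assumed to be an existing EglBase instance), a capturer typically wires itself up like this:

SurfaceTextureHelper surfaceTextureHelper =
    SurfaceTextureHelper.create("CaptureThread", eglBase.getEglBaseContext());

surfaceTextureHelper.startListening((VideoFrame frame) -> {
  // Each callback delivers a VideoFrame whose buffer is a TextureBuffer
  // backed by the OES texture owned by the helper.
  Logging.d("Demo", "Captured " + frame.getBuffer().getWidth() + "x"
      + frame.getBuffer().getHeight());
});

// When the consumer changes or capture stops:
// surfaceTextureHelper.stopListening();
// surfaceTextureHelper.dispose();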

The data actually comes back into Camera1Session; we can see the corresponding code in Camera1Session's initialization:

 private void listenForTextureFrames() {

    // The VideoFrame here is constructed when the callback fires.
    surfaceTextureHelper.startListening((VideoFrame frame) -> {
      checkIsOnCameraThread();

      if (state != SessionState.RUNNING) {
        Logging.d(TAG, "Texture frame captured but camera is no longer running.");
        return;
      }

The callback here is written with lambda syntax.

CameraSession is mainly responsible for opening/closing the camera and starting the preview; it then passes events back to CameraCapturer via callbacks.

CameraSession defines two callback interfaces for these events:

  // Callbacks are fired on the camera thread.
  interface CreateSessionCallback {
    void onDone(CameraSession session);
    void onFailure(FailureType failureType, String error);
  }

  // Events are fired on the camera thread.
  interface Events {
    void onCameraOpening();
    void onCameraError(CameraSession session, String error);
    void onCameraDisconnected(CameraSession session);
    void onCameraClosed(CameraSession session);
    void onFrameCaptured(CameraSession session, VideoFrame frame);
  }

Both are implemented in CameraCapturer:

  private final CameraSession.CreateSessionCallback createSessionCallback =
      new CameraSession.CreateSessionCallback() {
        // ...
      };

  private final CameraSession.Events cameraSessionEventsHandler = new CameraSession.Events() {
    @Override
    public void onCameraOpening() {
      // ...
So the final VideoFrame ends up being caught by CameraCapturer, in its Events.onFrameCaptured implementation:

  public void onFrameCaptured(CameraSession session, VideoFrame frame) {
      checkIsOnCameraThread();
      synchronized (stateLock) {
        if (session != currentSession) {
          Logging.w(TAG, "onFrameCaptured from another session.");
          return;
        }
        if (!firstFrameObserved) {
          eventsHandler.onFirstFrameAvailable();
          firstFrameObserved = true;
        }
        cameraStatistics.addFrame();
        capturerObserver.onFrameCaptured(frame);
      }
    }
  };

This capturerObserver is actually a NativeCapturerObserver, which is a member held by CameraCapturer.

NativeCapturerObserver then hands the data straight down to the NDK (native) layer:

  public void onFrameCaptured(VideoFrame frame) {
    int width = frame.getBuffer().getWidth();
    int height = frame.getBuffer().getHeight();
    int rotation = frame.getRotation();
    long timeNs = frame.getTimestampNs();
    VideoFrame.Buffer buffer = frame.getBuffer();

    nativeOnFrameCaptured(nativeSource, width, height, rotation, timeNs, buffer);
  }

After the native layer has processed the data, it calls onFrame on the upper-layer VideoSink.

The whole pipeline is processed exactly in the sequence described above.

So it is easy to see that SurfaceViewRenderer is passed down to the native layer and handed to the VideoBroadcaster; the VideoBroadcaster notifies all registered sinks, in broadcast fashion, to process the VideoFrame, and SurfaceViewRenderer does so in its onFrame implementation.

SurfaceViewRenderer itself extends SurfaceView; tracing the code shows that it eventually calls into the OpenGL module.
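As a hedged usage sketch (eglBase and localProxyVideoSink refer to the demo-style code above; the layout id is a hypothetical assumption), this is how the renderer ends up as the final sink:

SurfaceViewRenderer pipRenderer = findViewById(R.id.pip_video_view);  // hypothetical view id
pipRenderer.init(eglBase.getEglBaseContext(), null /* rendererEvents */);
pipRenderer.setScalingType(RendererCommon.ScalingType.SCALE_ASPECT_FIT);

// Every VideoFrame delivered to the proxy is now forwarded to the renderer,
// which draws it with OpenGL onto its SurfaceView surface.
localProxyVideoSink.setTarget(pipRenderer);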

Finally, GlRectDrawer draws the external (OES) texture onto the Surface:

  public void drawOes(int oesTextureId, float[] texMatrix, int frameWidth, int frameHeight,
      int viewportX, int viewportY, int viewportWidth, int viewportHeight) {
    prepareShader(
        ShaderType.OES, texMatrix, frameWidth, frameHeight, viewportWidth, viewportHeight);
    // Bind the texture.
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, oesTextureId);
    // Draw the texture.
    GLES20.glViewport(viewportX, viewportY, viewportWidth, viewportHeight);
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
    // Unbind the texture as a precaution.
    GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, 0);
  }

Summary:

This post only roughly sorts out the relationships and dependencies between the classes, much like the blueprint of a building; in later posts we will analyze the details.

Try to work out why the authors designed it this way, and what goals the design serves.

Reposted from blog.csdn.net/zhangkai19890929/article/details/82186081