How Android Camera Frame Data Is Displayed on a SurfaceView

This article takes a first look at how Android Camera frame data is displayed on a SurfaceView.

The title is the informal way of putting it; a more precise description is:
how Android Camera, through the Surface inside a SurfaceView, returns the GraphicBuffer carrying the frame data to SurfaceFlinger, which finally composites and displays it.

1. CameraService returns the GraphicBuffer carrying frame data to SurfaceFlinger

As described in the earlier article on how GraphicBuffers are allocated, passed around, and returned among CameraService, CameraProvider, and the Camera HAL, once the camera subsystem has finished producing into a GraphicBuffer, it is ultimately returned to SurfaceFlinger through
Surface::queueBuffer(...). The code is as follows:

//frameworks\av\services\camera\libcameraservice\device3\Camera3OutputStream.cpp
status_t Camera3OutputStream::queueBufferToConsumer(
	sp<ANativeWindow>& consumer,
	ANativeWindowBuffer* buffer, 
	int anwReleaseFence) 
{
    //consumer is the app-supplied Surface, seen here as its base class ANativeWindow
    //buffer is the GraphicBuffer carrying the camera frame data
    //anwReleaseFence is the fence fd associated with the GraphicBuffer
    return consumer->queueBuffer(consumer.get(), buffer, anwReleaseFence);
}

Surface::queueBuffer(...) looks like this:

int Surface::queueBuffer(android_native_buffer_t* buffer, int fenceFd) {
    ...
    //Find which entry of mSlots this buffer corresponds to
    int i = getSlotFromBufferLocked(buffer);
    ...
    // Make sure the crop rectangle is entirely inside the buffer.
    Rect crop(Rect::EMPTY_RECT);
    mCrop.intersect(Rect(buffer->width, buffer->height), &crop);
    sp<Fence> fence(fenceFd >= 0 ? new Fence(fenceFd) : Fence::NO_FENCE);
    IGraphicBufferProducer::QueueBufferOutput output;
    IGraphicBufferProducer::QueueBufferInput input(timestamp, isAutoTimestamp,
            mDataSpace, crop, mScalingMode, mTransform ^ mStickyTransform,
            fence, mStickyTransform, mEnableFrameTimestamps);
    ...
    //mGraphicBufferProducer is a BpGraphicBufferProducer proxy object,
    //defined in frameworks\native\libs\gui\IGraphicBufferProducer.cpp.
    //It is created when SurfaceFlinger creates the Layer.
    //Return the GraphicBuffer to SurfaceFlinger
    status_t err = mGraphicBufferProducer->queueBuffer(i, input, &output);
    ...
    return err;
}

mGraphicBufferProducer is a BpGraphicBufferProducer proxy object, created when SurfaceFlinger creates the Layer. When SurfaceFlinger creates a Layer, it creates a producer/consumer pair; mGraphicBufferProducer is the proxy for that producer.

At this point CameraService has returned the GraphicBuffer carrying the frame data to SurfaceFlinger.

Next, let's analyze how SurfaceFlinger composites and displays the frame data handed over by CameraService.

2. SurfaceFlinger composites and displays the frame data from CameraService

At this point the camera has finished producing the frame data, and CameraService submits the GraphicBuffer carrying it to SurfaceFlinger, which performs the final composition and display. A simplified flow diagram:

[Figure: producer/consumer flow of a GraphicBuffer through the BufferQueue; image from the article "Android中的GraphicBuffer同步机制-Fence"]

Section 1 noted that mGraphicBufferProducer is the proxy for the producer that SurfaceFlinger creates along with the Layer, so let's first look at how the GraphicBufferProducer is created.

2.1 Creating the GraphicBufferProducer

From the analysis of how an Android Activity creates its Surface, we know SurfaceFlinger creates the Layer as follows:

//frameworks\native\services\surfaceflinger\Layer.cpp
void Layer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    //Create the producer, consumer, and BufferQueue objects
    BufferQueue::createBufferQueue(&producer, &consumer, true);
    //Wrap the producer in a MonitoredProducer
    mProducer = new MonitoredProducer(producer, mFlinger, this);
    //Wrap the consumer in a SurfaceFlingerConsumer
    mSurfaceFlingerConsumer = new SurfaceFlingerConsumer(consumer, mTextureName, this);
    mSurfaceFlingerConsumer->setConsumerUsageBits(getEffectiveUsage(0));
    //Register the frame callback on mSurfaceFlingerConsumer:
    //when frame data arrives, onFrameAvailable will be triggered
    mSurfaceFlingerConsumer->setContentsChangedListener(this);
    ....
}

As the code shows, creating a Layer really means creating a producer/consumer pair, mProducer and mSurfaceFlingerConsumer, plus a BufferQueue. The Surface held by CameraService is in fact a proxy for mProducer.

2.2 SurfaceFlinger receives the GraphicBuffer frame data returned by CameraService

The entry point where SurfaceFlinger receives the GraphicBuffer returned by CameraService is BnGraphicBufferProducer::onTransact:

status_t BnGraphicBufferProducer::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        .....
        case QUEUE_BUFFER: {
            ....
            int buf = data.readInt32();
            QueueBufferInput input(data);
            QueueBufferOutput output;
            status_t result = queueBuffer(buf, input, &output);
            reply->write(output);
            reply->writeInt32(result);
            return NO_ERROR;
        }
        case CANCEL_BUFFER: {
            .....
        }
    }
    return BBinder::onTransact(code, data, reply, flags);
}

which calls into BufferQueueProducer::queueBuffer(...):

//frameworks\native\libs\gui\BufferQueueProducer.cpp
status_t BufferQueueProducer::queueBuffer(int slot,
        const QueueBufferInput &input, QueueBufferOutput *output) {
    .....

    sp<IConsumerListener> frameAvailableListener;
    sp<IConsumerListener> frameReplacedListener;
    int callbackTicket = 0;
    uint64_t currentFrameNumber = 0;
    //Create an empty item
    BufferItem item;
    { // Autolock scope
        Mutex::Autolock lock(mCore->mMutex);
        //Validity checks before queuing the buffer
        ....
        //This log is printed once the checks pass
        BQ_LOGV("queueBuffer: slot=%d/%" PRIu64 " time=%" PRIu64 " dataSpace=%d"
                " crop=[%d,%d,%d,%d] transform=%#x scale=%s",
                slot, mCore->mFrameCounter + 1, requestedPresentTimestamp,
                dataSpace, crop.left, crop.top, crop.right, crop.bottom,
                transform,
                BufferItem::scalingModeName(static_cast<uint32_t>(scalingMode)));

        const sp<GraphicBuffer>& graphicBuffer(mSlots[slot].mGraphicBuffer);
        .....
        //Store the acquire fence in mSlots
        mSlots[slot].mFence = acquireFence;
        //Mark mSlots[slot] as QUEUED
        mSlots[slot].mBufferState.queue();

        // Increment the frame counter and store a local version of it
        // for use outside the lock on mCore->mMutex.
        ++mCore->mFrameCounter;
        currentFrameNumber = mCore->mFrameCounter;
        mSlots[slot].mFrameNumber = currentFrameNumber;
        //Copy the relevant fields of mSlots[slot] into item
        item.mAcquireCalled = mSlots[slot].mAcquireCalled;
        item.mGraphicBuffer = mSlots[slot].mGraphicBuffer;
        item.mDataSpace = dataSpace;
        item.mFrameNumber = currentFrameNumber;
        item.mSlot = slot;
        item.mFence = acquireFence;
        item.mFenceTime = acquireFenceTime;
        ....
        //Insert item into mCore->mQueue,
        //which is a FIFO queue
        if (mCore->mQueue.empty()) {
            // When the queue is empty, we can ignore mDequeueBufferCannotBlock
            // and simply queue this buffer
            mCore->mQueue.push_back(item);
            //Save mCore->mConsumerListener into frameAvailableListener
            frameAvailableListener = mCore->mConsumerListener;
        } else {
            // When the queue is not empty, we need to look at the last buffer
            // in the queue to see if we need to replace it
            const BufferItem& last = mCore->mQueue.itemAt(
                    mCore->mQueue.size() - 1);
            //Check whether the last BufferItem in mCore->mQueue can be replaced
            if (last.mIsDroppable) {
            ....
            } else {
               //Insert item into mCore->mQueue
                mCore->mQueue.push_back(item);
                //Save mCore->mConsumerListener into frameAvailableListener
                frameAvailableListener = mCore->mConsumerListener;
            }
        }
        mCore->mBufferHasBeenQueued = true;
        mCore->mDequeueCondition.broadcast();
        mCore->mLastQueuedSlot = slot;
        //Update output
        output->width = mCore->mDefaultWidth;
        output->height = mCore->mDefaultHeight;
        output->transformHint = mCore->mTransformHint;
        output->numPendingBuffers = static_cast<uint32_t>(mCore->mQueue.size());
        output->nextFrameNumber = mCore->mFrameCounter + 1;
        ....
    } // Autolock scope
    .....
    int connectedApi;
    sp<Fence> lastQueuedFence;

    { // scope for the lock
        .....
        //Trigger the onFrameAvailable callback
        if (frameAvailableListener != NULL) {
            frameAvailableListener->onFrameAvailable(item);
        } else if (frameReplacedListener != NULL) {
            frameReplacedListener->onFrameReplaced(item);
        }
        ....
        ++mCurrentCallbackTicket;
        mCallbackCondition.broadcast();
    }
    ....
    return NO_ERROR;
}

From the analysis above: when SurfaceFlinger receives the GraphicBuffer returned by CameraService, it inserts it into mCore->mQueue and then triggers the mCore->mConsumerListener->onFrameAvailable(item) callback.

2.3 The onFrameAvailable frame callback

When the Layer was created, mSurfaceFlingerConsumer->setContentsChangedListener(this) registered the onFrameAvailable frame callback on the consumer. Its code is as follows:

void Layer::onFrameAvailable(const BufferItem& item) {
    // Add this buffer from our internal queue tracker
    { // Autolock scope
        ...
        // Ensure that callbacks are handled in order
        ...
        //Enqueue one frame into mQueueItems
        mQueueItems.push_back(item);
        android_atomic_inc(&mQueuedFrames);
        // Wake up any pending callbacks
        mLastFrameNumberReceived = item.mFrameNumber;
        mQueueItemCondition.broadcast();
    }
    //Post a MessageQueue::INVALIDATE message to tell SurfaceFlinger that
    //this Layer has a newly queued frame (the camera preview data) and
    //ask it to perform the final composition and display
    mFlinger->signalLayerUpdate();
}

To summarize briefly:

  1. SurfaceFlinger inserts the GraphicBuffer returned by CameraService into mCore->mQueue.
  2. This triggers the Layer's onFrameAvailable method, which inserts the GraphicBuffer into the Layer's mQueueItems queue and notifies SurfaceFlinger that a Layer has been updated.

After receiving new frame data, SurfaceFlinger does not composite and display it right away; it waits for the next VSync before doing the actual composition.
The following is a brief walkthrough of SurfaceFlinger's composition and display flow.

2.5 SurfaceFlinger composition and display flow

For background on VSync, see the two articles below; this article will not analyze it in detail.
For how VSync is generated and delivered to SurfaceFlinger before Android O, see
Android垂直同步信号VSync的产生及传播结构详解.
For how VSync is generated and delivered to SurfaceFlinger on Android O, see
AndroidO Vsync的产生及分发给surfaceFlinger流程学习.

When a VSync arrives, SurfaceFlinger first receives and handles the MessageQueue::INVALIDATE message in SurfaceFlinger::onMessageReceived:

//frameworks\native\services\surfaceflinger\SurfaceFlinger.cpp
void SurfaceFlinger::onMessageReceived(int32_t what) {
   switch (what) {
       case MessageQueue::INVALIDATE: {
           ...
           bool refreshNeeded = handleMessageTransaction();
           //On MessageQueue::INVALIDATE:
           //collect every Layer that has an update and call its latchBuffer method,
           //which uses acquireBuffer to fetch the GraphicBuffer this Layer needs
           //to composite, creates an EGLImageKHR image from that GraphicBuffer,
           //and binds it to a 2D texture
           refreshNeeded |= handleMessageInvalidate();
           refreshNeeded |= mRepaintEverything;
           if (refreshNeeded) {
               //Updates done: post a MessageQueue::REFRESH message
               //to tell SurfaceFlinger to composite and display
               signalRefresh();
           }
           break;
       }
       case MessageQueue::REFRESH: {
           //On MessageQueue::REFRESH:
           //composite and display
           handleMessageRefresh();
           break;
       }
   }
}

This process is very involved; we will only walk through the two parts we care about:

  1. handleMessageInvalidate
  2. handleMessageRefresh

2.5.1 The handleMessageInvalidate flow

On receiving MessageQueue::INVALIDATE, SurfaceFlinger uses handleMessageInvalidate to collect every Layer that has an update, acquires the GraphicBuffer each Layer needs to composite via acquireBuffer, creates an EGLImageKHR image from that GraphicBuffer, and binds it to a 2D texture:

 //frameworks\native\services\surfaceflinger\SurfaceFlinger.cpp
 bool SurfaceFlinger::handleMessageInvalidate() {
    return handlePageFlip();
}

Next, handlePageFlip:

//frameworks\native\services\surfaceflinger\SurfaceFlinger.cpp
bool SurfaceFlinger::handlePageFlip()
{
  ...
  //SurfaceFlinger keeps two states:
  //mCurrentState holds the data being prepared (what apps hand over),
  //mDrawingState holds the data currently being composited.

  //Walk every Layer in mDrawingState:
  //if a Layer has a newly queued buffer that should be presented now,
  //insert it into the mLayersWithQueuedFrames list
  mDrawingState.traverseInZOrder([&](Layer* layer) {
      if (layer->hasQueuedFrame()) {
          frameQueued = true;
          if (layer->shouldPresentNow(mPrimaryDispSync)) {
             //Insert the layer into mLayersWithQueuedFrames
              mLayersWithQueuedFrames.push_back(layer);
          } 
          ...
      } 
  });
  
  //For every Layer in mLayersWithQueuedFrames, call its latchBuffer method,
  //which pulls the GraphicBuffer to be composited from mCore->mQueue
  //and binds it to a 2D texture
  for (auto& layer : mLayersWithQueuedFrames) {
      const Region dirty(layer->latchBuffer(visibleRegions, latchTime));
  }
  ...
  if (frameQueued..) {
     //Frames are still queued but were not latched this round,
     //so request another invalidate pass via signalLayerUpdate
     signalLayerUpdate();
  }
  ...
}

Now let's look at layer->latchBuffer(visibleRegions, latchTime):

//frameworks\native\services\surfaceflinger\Layer.cpp
Region Layer::latchBuffer(bool& recomputeVisibleRegions, nsecs_t latchTime)
{
	...
	//The current frame becomes the old frame
	sp<GraphicBuffer> oldActiveBuffer = mActiveBuffer;
	...
	//Update the Layer's consumer, mSurfaceFlingerConsumer:
	//it uses acquireBuffer to fetch from mCore->mQueue the GraphicBuffer
	//to be composited -- the very GraphicBuffer CameraService returned,
	//which as analyzed above was inserted into mCore->mQueue.
	//After fetching it, an EGLImageKHR image is created from the GraphicBuffer
	//and bound to a 2D texture via glEGLImageTargetTexture2DOES
	status_t updateResult = mSurfaceFlingerConsumer->updateTexImage(&r,
		mFlinger->mPrimaryDispSync, &mAutoRefresh, &queuedBuffer,
		mLastFrameNumberReceived);
	...
	//Fetch the GraphicBuffer the consumer now holds for composition
	//and store it in the Layer's mActiveBuffer member
	//update the active buffer
	mActiveBuffer = mSurfaceFlingerConsumer->getCurrentBuffer(
		&mActiveBufferSlot);
	...
	//Return the Layer's dirty region
	return outDirtyRegion;
}

Next, how mSurfaceFlingerConsumer->updateTexImage(...) fetches the current GraphicBuffer and binds it to a texture:

status_t SurfaceFlingerConsumer::updateTexImage(BufferRejecter* rejecter,
	const DispSync& dispSync, bool* autoRefresh, bool* queuedBuffer,
	uint64_t maxFrameNumber)
{
	...
	// Make sure the EGL state is the same as in previous calls.
	status_t err = checkAndUpdateEglStateLocked();
	...
	BufferItem item;
	...
	//acquireBuffer fetches from mCore->mQueue the GraphicBuffer
	//to be composited; correspondingly there must be a releaseBuffer
	//later that returns the GraphicBuffer to mCore->mQueue
	err = acquireBufferLocked(&item, computeExpectedPresent(dispSync),
			maxFrameNumber);
	...
	//Update the SurfaceFlingerConsumer members,
	//e.g. mCurrentTextureImage and mCurrentFence
	//(getCurrentBuffer returns mCurrentTextureImage).
	//The other important job here is to releaseBuffer the old GraphicBuffer
	err = updateAndReleaseLocked(item, &mPendingRelease);
	...
	if (!SyncFeatures::getInstance().useNativeFenceSync()) {
		//Bind the acquired GraphicBuffer to a 2D texture
		err = bindTextureImageLocked();
	}
    ...
}

updateTexImage mainly does the following:

  1. Uses acquireBufferLocked to take the first element from the BufferQueue (a FIFO).
  2. Inserts a Fence for mCurrentTextureImage; when that fence fires, the consumer has finished using mCurrentTextureImage.
  3. Returns mCurrentTextureImage via releaseBufferLocked.
  4. Updates mCurrentTextureImage from the BufferItem obtained by acquireBufferLocked.

Let's look at updateAndReleaseLocked:

status_t GLConsumer::updateAndReleaseLocked(const BufferItem& item,
        PendingRelease* pendingRelease)
{
    status_t err = NO_ERROR;
    int slot = item.mSlot;
    ...
    // Confirm state.
    err = checkAndUpdateEglStateLocked();
    ...
    err = mEglSlots[slot].mEglImage->createIfNeeded(mEglDisplay, item.mCrop);
    ...
    //The newly acquired slot differs from the previous mCurrentTexture,
    //i.e. we got a new frame, so we need to insert a fence for the
    //GraphicBuffer behind mCurrentTextureImage
    // Do whatever sync ops we need to do before releasing the old slot.
    if (slot != mCurrentTexture) {
        err = syncForReleaseLocked(mEglDisplay);
        ...
    }
    ...
    sp<EglImage> nextTextureImage = mEglSlots[slot].mEglImage;

    // release old buffer
    if (mCurrentTexture != BufferQueue::INVALID_BUFFER_SLOT) {
        if (pendingRelease == nullptr) {
            //Return the GraphicBuffer behind mCurrentTextureImage
            //together with its Fence
            status_t status = releaseBufferLocked(
                    mCurrentTexture, mCurrentTextureImage->graphicBuffer(),
                    mEglDisplay, mEglSlots[mCurrentTexture].mEglFence);

        }
        ...
    }

    // Update the GLConsumer state.
    mCurrentTexture = slot;
    mCurrentTextureImage = nextTextureImage;
    mCurrentCrop = item.mCrop;
    mCurrentTransform = item.mTransform;
    mCurrentScalingMode = item.mScalingMode;
    mCurrentTimestamp = item.mTimestamp;
    mCurrentDataSpace = item.mDataSpace;
    //the acquire fence
    mCurrentFence = item.mFence;
    mCurrentFenceTime = item.mFenceTime;
    mCurrentFrameNumber = item.mFrameNumber;
    ...
    return err;
}

Next, the syncForReleaseLocked flow:

status_t GLConsumer::syncForReleaseLocked(EGLDisplay dpy) {
    if (mCurrentTexture != BufferQueue::INVALID_BUFFER_SLOT) {
 		if (mUseFenceSync && SyncFeatures::getInstance().useFenceSync()) {
            EGLSyncKHR fence = mEglSlots[mCurrentTexture].mEglFence;
            if (fence != EGL_NO_SYNC_KHR) {
                // There is already a fence for the current slot.  We need to
                // wait on that before replacing it with another fence to
                // ensure that all outstanding buffer accesses have completed
                // before the producer accesses it.
                //If mCurrentTexture already has a fence,
                //block the CPU until that fence is signaled
                EGLint result = eglClientWaitSyncKHR(dpy, fence, 0, 1000000000);
                ...
                eglDestroySyncKHR(dpy, fence);
            }

            // Create a fence for the outstanding accesses in the current
            // OpenGL ES context.
            //Insert the next fence; when it is signaled, the consumer
            //has finished using this GraphicBuffer
            fence = eglCreateSyncKHR(dpy, EGL_SYNC_FENCE_KHR, NULL);
            ...
            //Submit the pending GL commands to the GPU immediately
            glFlush();
            mEglSlots[mCurrentTexture].mEglFence = fence;
        }
    }

    return OK;
}

Next, bindTextureImageLocked:

 status_t GLConsumer::bindTextureImageLocked() {
    //Bind the texture
    glBindTexture(mTexTarget, mTexName);
   //Create an EGLImageKHR object via eglCreateImageKHR:
   //EGLImageKHR image = eglCreateImageKHR(
   //    dpy, EGL_NO_CONTEXT,
   //    EGL_NATIVE_BUFFER_ANDROID,
   //    cbuf,
   //    attrs);
    status_t err = mCurrentTextureImage->createIfNeeded(mEglDisplay,
                                                        mCurrentCrop);
   //Bind the EGLImageKHR to the mTexName texture; the OpenGL command is
   //glEGLImageTargetTexture2DOES(texTarget, static_cast<GLeglImageOES>(mEglImage));
    mCurrentTextureImage->bindToTextureTarget(mTexTarget);
    .....
    //Insert a sync object
    // Wait for the new buffer to be ready.
    //The actual EGL commands are:
    //EGLSyncKHR sync = eglCreateSyncKHR(dpy,
    //    EGL_SYNC_NATIVE_FENCE_ANDROID, attribs);
    //eglWaitSyncKHR(dpy, sync, 0);
    return doGLFenceWaitLocked();
}

That completes the analysis of handleMessageInvalidate. To summarize:

  1. acquireBufferLocked fetches from mCore->mQueue the GraphicBuffer that needs refreshing.
  2. eglCreateImageKHR creates an EGLImageKHR image from that GraphicBuffer.
  3. glEGLImageTargetTexture2DOES binds that EGLImageKHR image to the mTexTarget texture.
  4. A Fence sync object is inserted and waited on.

2.5.2 handleMessageRefresh

SurfaceFlinger finishes the remaining composition and display work by posting a MessageQueue::REFRESH message via signalRefresh; the handler is handleMessageRefresh():

 //frameworks\native\services\surfaceflinger\SurfaceFlinger.cpp
 void SurfaceFlinger::handleMessageRefresh() {
    ..
    //Pre-composition work
    preComposition(refreshStartTime);
    //Compute and store each Layer's dirty region
    rebuildLayerStacks();
    //Build the work list for HWC hardware composition
    setUpHWComposer();
    doDebugFlashRegions();
    //First do GL composition, placing the composited image at the end of
    //the HWC work list; the HWC then composites and outputs to the screen
    doComposition();
    //Post-composition cleanup
    postComposition(refreshStartTime);
   ....
}

This process is also quite involved; we will focus on where the Layers are composited and displayed. When compositing, SurfaceFlinger first asks HWComposer, via setUpHWComposer, to create a WorkList holding everything that needs updating in this composition pass; doComposition then submits that WorkList to the HWC for the final composition and display.

Let's focus on setUpHWComposer first:

void SurfaceFlinger::setUpHWComposer() {
    ....
    HWComposer& hwc(getHwComposer());
    if (hwc.initCheck() == NO_ERROR) {
        // build the h/w work list
        if (CC_UNLIKELY(mHwWorkListDirty)) {
            mHwWorkListDirty = false;
            for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
                sp<const DisplayDevice> hw(mDisplays[dpy]);
                const int32_t id = hw->getHwcDisplayId();
                if (id >= 0) {
                   //Get the list of Layers this display needs to show,
                   //sorted by Z order
                    const Vector< sp<Layer> >& currentLayers(
                        hw->getVisibleLayersSortedByZ());
                    const size_t count = currentLayers.size();
                    //Call HWComposer::createWorkList to create the work list
                    //for this display id with the given Layer count
                    if (hwc.createWorkList(id, count) == NO_ERROR) {
                        HWComposer::LayerListIterator cur = hwc.begin(id);
                        const HWComposer::LayerListIterator end = hwc.end(id);
                        for (size_t i=0 ; cur!=end && i<count ; ++i, ++cur) {
                            const sp<Layer>& layer(currentLayers[i]);
                            //Use the Layer to set the HWC layer's blend mode,
                            //crop region, vertex arrays, and other geometry
                            layer->setGeometry(hw, *cur);
                            ....
                        }
                    }
                }
            }
        }

        // set the per-frame data
        for (size_t dpy=0 ; dpy<mDisplays.size() ; dpy++) {
            sp<const DisplayDevice> hw(mDisplays[dpy]);
            const int32_t id = hw->getHwcDisplayId();
            if (id >= 0) {
                const Vector< sp<Layer> >& currentLayers(
                    hw->getVisibleLayersSortedByZ());
                const size_t count = currentLayers.size();
                HWComposer::LayerListIterator cur = hwc.begin(id);
                const HWComposer::LayerListIterator end = hwc.end(id);
                for (size_t i=0 ; cur!=end && i<count ; ++i, ++cur) {
                    /*
                     * update the per-frame h/w composer data for each layer
                     * and build the transparent region of the FB
                     */
                    const sp<Layer>& layer(currentLayers[i]);
                    //Layer::setPerFrameData sets the HWC layer's image data,
                    //handing the buffer over to the HWC layer
                    layer->setPerFrameData(hw, *cur);
                }
            }
        }
        // If possible, attempt to use the cursor overlay on each display.
        ...
        //Call HWC prepare to get ready for hardware composition
        status_t err = hwc.prepare();
        ...
    }
}

setUpHWComposer's main work:

  1. Call HWComposer::createWorkList to create the composition work list.
  2. Walk all Layers and fill in the HWC layers created by createWorkList with their image data, geometry, blend mode, and so on.
  3. Call HWComposer::prepare to get ready for composition.

After setUpHWComposer, the image data, geometry, blend mode, etc. of every Layer to be composited have been set on the WorkList created by createWorkList.
If the WorkList contains Layers that need GPU composition, they are composited in doComposition.

GPU composition happens in doComposition, which ultimately delegates the actual compositing to doDisplayComposition:

void SurfaceFlinger::doDisplayComposition(
        const sp<const DisplayDevice>& displayDevice,
        const Region& inDirtyRegion)
{
    ...
    //Walk all Layers that need updating and composite them on the GPU.
    //The EGLSurface currently bound to the EGLContext is the FramebufferSurface
    if (!doComposeSurfaces(displayDevice, dirtyRegion)) return;
    ...
    //After composition, eglSwapBuffers swaps the front and back GraphicBuffers:
    //the composited GraphicBuffer is queued to the FramebufferSurface,
    //and a fresh GraphicBuffer is dequeued from it
    displayDevice->swapBuffers(getHwComposer());
    ...
}

Let's look at doComposeSurfaces:

//SurfaceFlinger.cpp
bool SurfaceFlinger::doComposeSurfaces(
        const sp<const DisplayDevice>& displayDevice, const Region& dirty)
{
    ...
    const auto hwcId = displayDevice->getHwcDisplayId();
    .....
    bool hasClientComposition = mHwc->hasClientComposition(hwcId);
    if (hasClientComposition) {
        ...
        //The OpenGL commands issued here are
        //eglMakeCurrent and
        //glViewport(0, 0, vpw, vph);
        //this also sets up the projection matrix mProjectionMatrix
        if (!displayDevice->makeCurrent(mEGLDisplay, mEGLContext)) {
           ....
        }

        // Never touch the framebuffer if we don't have any framebuffer layers
        //The OpenGL commands issued here are
        //glClearColor(0.f, 0.f, 0.f, .0) and
        //glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        const bool hasDeviceComposition = mHwc->hasDeviceComposition(hwcId);
        if (hasDeviceComposition) {
            // when using overlays, we assume a fully transparent framebuffer
            // NOTE: we could reduce how much we need to clear, for instance
            // remove where there are opaque FB layers. however, on some
            // GPUs doing a "clean slate" clear might be more efficient.
            // We'll revisit later if needed.
            mRenderEngine->clearWithColor(0, 0, 0, 0);
        } else {
            ....
            // screen is already cleared here
            if (!region.isEmpty()) {
                // can happen with SurfaceView
                drawWormhole(displayDevice, region);
            }
        }
        //A non-primary display additionally needs glScissor
        if (displayDevice->getDisplayType() != DisplayDevice::DISPLAY_PRIMARY) {
			....
			//The OpenGL command issued here is glScissor
			mRenderEngine->setScissor(scissor.left, height - scissor.bottom,
			       scissor.getWidth(), scissor.getHeight());
        }
    }

    /*
     * and then, render the layers targeted at the framebuffer
     */
    ...
    if (hwcId >= 0) {
        // we're using h/w composer
        bool firstLayer = true;
        //Walk all Layers that need updating:
        //a Layer composited by the HWC needs no handling here;
        //a Layer composited by the GPU is drawn with OpenGL draw calls
        //(glDrawElements, ...)
        for (auto& layer : displayDevice->getVisibleLayersSortedByZ()) {
                ...
                switch (layer->getCompositionType(hwcId)) {
                    case HWC2::Composition::Cursor:
                    case HWC2::Composition::Device:
                    case HWC2::Composition::Sideband:
                    case HWC2::Composition::SolidColor: {
                        ...
						//This Layer is not client-composited, but other
						//Layers are, so its region in the FBTarget must be
						//cleared with clearWithOpenGL; the cleared region
						//is composited by the HWC
                        layer->clearWithOpenGL(displayDevice);
                        break;
                    }
                    case HWC2::Composition::Client: {
                         //GPU (a.k.a. client) composition;
                         //the OpenGL command issued is glDrawElements
                        layer->draw(displayDevice, clip);
                        break;
                    }
                    ....
                }
            firstLayer = false;
        }
    }
    // disable scissor at the end of the frame
    mRenderEngine->disableScissor();
    return true;
}

After doDisplayComposition, displayDevice->swapBuffers(getHwComposer()) submits the GPU-composited frame to the FramebufferSurface; on receiving it, the FramebufferSurface uses setFramebufferTarget to install the GPU-composited frame into the last layer of the work list.
The code:

void FramebufferSurface::onFrameAvailable(const BufferItem& /* item */) {
    sp<GraphicBuffer> buf;
    sp<Fence> acquireFence;
    //Fetch the new buffer from the BufferQueue (not analyzed further here)
    status_t err = nextBuffer(buf, acquireFence);
    //Call the HWC's fbPost to output the buffer
    err = mHwc.fbPost(mDisplayType, acquireFence, buf);
}
int HWComposer::fbPost(int32_t id,
        const sp<Fence>& acquireFence, const sp<GraphicBuffer>& buffer) {
    //With an HWC device present, setFramebufferTarget puts the GL-composited
    //image into the last layer of the work list
    if (mHwc && hwcHasApiVersion(mHwc, HWC_DEVICE_API_VERSION_1_1)) {
        return setFramebufferTarget(id, acquireFence, buffer);
    } else {
    //Without HWC hardware, call the Gralloc module's mFbDev post function
    //directly to put the image into FrameBuffer memory for display
        acquireFence->waitForever("HWComposer::fbPost");
        return mFbDev->post(mFbDev, buffer->handle);
    }
}
status_t HWComposer::setFramebufferTarget(int32_t id,
        const sp<Fence>& acquireFence, const sp<GraphicBuffer>& buf) {
    ....
    int acquireFenceFd = -1;
    if (acquireFence->isValid()) {
        acquireFenceFd = acquireFence->dup();
    }

    //Store the GraphicBuffer handle in fbTargetHandle
    disp.fbTargetHandle = buf->handle;
    //Store the GraphicBuffer handle in the framebufferTarget handle
    disp.framebufferTarget->handle = disp.fbTargetHandle;
    //Store acquireFenceFd in the framebufferTarget acquireFenceFd
    disp.framebufferTarget->acquireFenceFd = acquireFenceFd;
    return NO_ERROR;
}

Once the GPU-composited data has been installed into the framebufferTarget, postFramebuffer submits the WorkList to the HWC for the final composition and display:

void SurfaceFlinger::postFramebuffer()
{
    ...
    if (hwc.initCheck() == NO_ERROR) {
        ....
        hwc.commit();
    }
    ....
}
status_t HWComposer::commit() {
    int err = NO_ERROR;
    if (mHwc) {
        ...
        //mLists is the hwc_display_contents_1* work list
        err = mHwc->set(mHwc, mNumDisplays, mLists);
        ...
    }
    ...
    return (status_t)err;
}

The overall flow is illustrated below:

[Figure: overall flow diagram of camera frame composition and display]
At this point the analysis of how Android Camera frame data is composited and displayed is essentially complete. How the HWC itself implements composition and display will be studied next; to be continued.

The main references for this article:

  1. SurfaceFlinger图像合成[2]
  2. Android P 图形显示系统(八) SurfaceFlinger合成流程(三)

Reprinted from blog.csdn.net/u010116586/article/details/100114257