Virtual Cameras, Part 3: Refactoring the Android 8.1 v4l2_camera_HAL to Support a Virtual Camera

Preface

Next we will look at how the Android HAL talks to the camera device. Each hardware vendor's camera HAL typically contains a v4l2_camera_hal.cpp file that exposes the HAL's external interface to the frameworks. This file uses HAL_MODULE_INFO_SYM to define a camera_module_t structure; the camera provider service locates camera_module_t through the HAL_MODULE_INFO_SYM symbol and, through it, drives the camera HAL and ultimately the camera device. The next article, "Who Calls the v4l2_camera_HAL Camera Driver", will describe how the Android frameworks load the camera HAL module, and walk through how the system provides camera functionality to users via a client/server model.

This article focuses only on the camera HAL framework itself. Since I need a virtual camera to provide camera functionality to Android users, I will later refactor part of the camera HAL source; the analysis below therefore uses the sample camera HAL shipped in the Android source tree.

Where is the entry point of the camera-hal module?

The source lives under @hardware/libhardware/modules/camera/3_4/; v4l2_camera_hal.cpp contains the entry point:

static int open_dev(const hw_module_t* module,
                    const char* name,
                    hw_device_t** device) {
  return gCameraHAL.openDevice(module, name, device);
}

}  // namespace v4l2_camera_hal

static hw_module_methods_t v4l2_module_methods = {
    .open = v4l2_camera_hal::open_dev};

camera_module_t HAL_MODULE_INFO_SYM __attribute__((visibility("default"))) = {
    .common =
        {
            .tag = HARDWARE_MODULE_TAG,
            .module_api_version = CAMERA_MODULE_API_VERSION_2_4,
            .hal_api_version = HARDWARE_HAL_API_VERSION,
            .id = CAMERA_HARDWARE_MODULE_ID,
            .name = "V4L2 Camera HAL v3",
            .author = "The Android Open Source Project",
            .methods = &v4l2_module_methods,
            .dso = nullptr,
            .reserved = {0},
        },
    .get_number_of_cameras = v4l2_camera_hal::get_number_of_cameras,
    .get_camera_info = v4l2_camera_hal::get_camera_info,
    .set_callbacks = v4l2_camera_hal::set_callbacks,
    .get_vendor_tag_ops = v4l2_camera_hal::get_vendor_tag_ops,
    .open_legacy = v4l2_camera_hal::open_legacy,
    .set_torch_mode = v4l2_camera_hal::set_torch_mode,
    .init = nullptr,
    .reserved = {nullptr, nullptr, nullptr, nullptr, nullptr}};

gCameraHAL is a static global variable declared in the same file; its declaration and the V4L2CameraHAL constructor are as follows:

namespace v4l2_camera_hal {

// Default global camera hal.
static V4L2CameraHAL gCameraHAL;

V4L2CameraHAL::V4L2CameraHAL() : mCameras(), mCallbacks(NULL) {
  HAL_LOG_ENTER();
  // Adds all available V4L2 devices.
  // List /dev nodes.
  DIR* dir = opendir("/dev");
  if (dir == NULL) {
    HAL_LOGE("Failed to open /dev");
    return;
  }
  // Find /dev/video* nodes.
  dirent* ent;
  std::vector<std::string> nodes;
  while ((ent = readdir(dir))) {
    std::string desired = "video";
    size_t len = desired.size();
    if (strncmp(desired.c_str(), ent->d_name, len) == 0) {
      if (strlen(ent->d_name) > len && isdigit(ent->d_name[len])) {
        // ent is a numbered video node.
        nodes.push_back(std::string("/dev/") + ent->d_name);
        HAL_LOGV("Found video node %s.", nodes.back().c_str());
      }
    }
  }
  // Test each for V4L2 support and uniqueness.
  std::unordered_set<std::string> buses;
  std::string bus;
  v4l2_capability cap;
  int fd;
  int id = 0;
  for (const auto& node : nodes) {
    // Open the node.
    fd = TEMP_FAILURE_RETRY(open(node.c_str(), O_RDWR));
    if (fd < 0) {
      HAL_LOGE("failed to open %s (%s).", node.c_str(), strerror(errno));
      continue;
    }
    // Read V4L2 capabilities.
    if (TEMP_FAILURE_RETRY(ioctl(fd, VIDIOC_QUERYCAP, &cap)) != 0) {
      HAL_LOGE(
          "VIDIOC_QUERYCAP on %s fail: %s.", node.c_str(), strerror(errno));
    } else if (!(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)) {
      HAL_LOGE("%s is not a V4L2 video capture device.", node.c_str());
    } else {
      // If the node is unique, add a camera for it.
      bus = reinterpret_cast<char*>(cap.bus_info);
      if (buses.insert(bus).second) {
        HAL_LOGV("Found unique bus at %s.", node.c_str());
        std::unique_ptr<V4L2Camera> cam(V4L2Camera::NewV4L2Camera(id++, node));
        if (cam) {
          mCameras.push_back(std::move(cam));
        } else {
          HAL_LOGE("Failed to initialize camera at %s.", node.c_str());
        }
      }
    }
    TEMP_FAILURE_RETRY(close(fd));
  }
}
}  // namespace v4l2_camera_hal

When the camera.v4l2.so library is loaded, the V4L2CameraHAL constructor runs. It probes the /dev/ directory for video nodes that support V4L2 video capture and stores the results in std::vector<std::unique_ptr<default_camera_hal::Camera>> mCameras;, a private member of the V4L2CameraHAL class. The objects held in this container are instances of v4l2_camera_hal::V4L2Camera, which derives from default_camera_hal::Camera; methods invoked on the mCameras entries come from these two classes.
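The two filters the constructor applies, name-based selection of /dev/video* nodes and bus_info-based deduplication, can be exercised in isolation. This is a minimal re-implementation for illustration; the function names are mine, not the HAL's:

```cpp
#include <cassert>
#include <cctype>
#include <cstring>
#include <string>
#include <unordered_set>

// Mirrors the d_name check in the constructor: accept "videoN" only.
bool IsNumberedVideoNode(const char* d_name) {
  const std::string desired = "video";
  const size_t len = desired.size();
  if (strncmp(desired.c_str(), d_name, len) != 0) return false;
  return strlen(d_name) > len && isdigit(d_name[len]);
}

// Mirrors buses.insert(bus).second: true only the first time a bus_info is
// seen, so two nodes exposed by the same physical device yield one camera.
bool IsNewBus(std::unordered_set<std::string>& buses, const std::string& bus) {
  return buses.insert(bus).second;
}
```

This is why a single USB camera that exposes two /dev/video nodes still shows up as one HAL camera: both nodes report the same cap.bus_info.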

How the camera objects are constructed

When the system starts the camera-provider-2-4 service, it loads camera.v4l2.so, which runs the V4L2CameraHAL constructor; that constructor calls V4L2Camera::NewV4L2Camera(id++, node) to construct each camera object. First, the relevant class definitions:
@hardware/libhardware/modules/camera/3_4/v4l2_camera.h

namespace v4l2_camera_hal {

// V4L2Camera is a specific V4L2-supported camera device. The Camera object
// contains all logic common between all cameras (e.g. front and back cameras),
// while a specific camera device (e.g. V4L2Camera) holds all specific
// metadata and logic about that device.
class V4L2Camera : public default_camera_hal::Camera {
 public:
  // Use this method to create V4L2Camera objects. Functionally equivalent
  // to "new V4L2Camera", except that it may return nullptr in case of failure.
  static V4L2Camera* NewV4L2Camera(int id, const std::string path);
  ~V4L2Camera();

 private:
  // Constructor private to allow failing on bad input.
  // Use NewV4L2Camera instead.
  V4L2Camera(int id,
             std::shared_ptr<V4L2Wrapper> v4l2_wrapper,
             std::unique_ptr<Metadata> metadata);

  int enqueueRequest(
      std::shared_ptr<default_camera_hal::CaptureRequest> request) override;

  // Async request processing helpers.
  // Dequeue a request from the waiting queue.
  // Blocks until a request is available.
  std::shared_ptr<default_camera_hal::CaptureRequest> dequeueRequest();

  std::unique_ptr<Metadata> metadata_;
  std::mutex request_queue_lock_;
  std::queue<std::shared_ptr<default_camera_hal::CaptureRequest>>
      request_queue_;
  std::mutex in_flight_lock_;
  // Maps buffer index : request.
  std::map<uint32_t, std::shared_ptr<default_camera_hal::CaptureRequest>>
      in_flight_;
  // Threads require holding an Android strong pointer.
  android::sp<android::Thread> buffer_enqueuer_;
  android::sp<android::Thread> buffer_dequeuer_;
  std::condition_variable requests_available_;
  std::condition_variable buffers_in_flight_;

  int32_t max_input_streams_;
  std::array<int, 3> max_output_streams_;  // {raw, non-stalling, stalling}.
};

@hardware/libhardware/modules/camera/3_4/camera.h

namespace default_camera_hal {

// Camera represents a physical camera on a device.
// This is constructed when the HAL module is loaded, one per physical camera.
// TODO(b/29185945): Support hotplugging.
// It is opened by the framework, and must be closed before it can be opened
// again.
// This is an abstract class, containing all logic and data shared between all
// camera devices (front, back, etc) and common to the ISP.
class Camera {
    public:
        // id is used to distinguish cameras. 0 <= id < NUM_CAMERAS.
        // module is a handle to the HAL module, used when the device is opened.
        Camera(int id);
        virtual ~Camera();

        // Common Camera Device Operations (see <hardware/camera_common.h>)
        int openDevice(const hw_module_t *module, hw_device_t **device);
        int getInfo(struct camera_info *info);
        int close();

        // Camera v3 Device Operations (see <hardware/camera3.h>)
        int initialize(const camera3_callback_ops_t *callback_ops);
        int configureStreams(camera3_stream_configuration_t *stream_list);
        const camera_metadata_t *constructDefaultRequestSettings(int type);
        int processCaptureRequest(camera3_capture_request_t *temp_request);
        void dump(int fd);
        int flush();

    protected:
        ...  //> pure virtual method declarations omitted

        // Callback for when the device has filled in the requested data.
        // Fills in the result struct, validates the data, sends appropriate
        // notifications, and returns the result to the framework.
        void completeRequest(
            std::shared_ptr<CaptureRequest> request, int err);
        // Prettyprint template names
        const char* templateToString(int type);

    private:
        // Camera device handle returned to framework for use
        camera3_device_t mDevice;
        // Get static info from the device and store it in mStaticInfo.
        int loadStaticInfo();
        // Confirm that a stream configuration is valid.
        int validateStreamConfiguration(
            const camera3_stream_configuration_t* stream_config);
        // Verify settings are valid for reprocessing an input buffer
        bool isValidReprocessSettings(const camera_metadata_t *settings);
        // Pre-process an output buffer
        int preprocessCaptureBuffer(camera3_stream_buffer_t *buffer);
        // Send a shutter notify message with start of exposure time
        void notifyShutter(uint32_t frame_number, uint64_t timestamp);
        // Send an error message and return the errored out result.
        void completeRequestWithError(std::shared_ptr<CaptureRequest> request);
        // Send a capture result for a request.
        void sendResult(std::shared_ptr<CaptureRequest> request);
        // Is type a valid template type (and valid index into mTemplates)
        bool isValidTemplateType(int type);

        // Identifier used by framework to distinguish cameras
        const int mId;
        // CameraMetadata containing static characteristics
        std::unique_ptr<StaticProperties> mStaticInfo;
        // Flag indicating if settings have been set since
        // the last configure_streams() call.
        bool mSettingsSet;
        // Busy flag indicates camera is in use
        bool mBusy;
        // Camera device operations handle shared by all devices
        const static camera3_device_ops_t sOps;
        // Methods used to call back into the framework
        const camera3_callback_ops_t *mCallbackOps;
        // Lock protecting the Camera object for modifications
        android::Mutex mDeviceLock;
        // Lock protecting only static camera characteristics, which may
        // be accessed without the camera device open
        android::Mutex mStaticInfoLock;
        android::Mutex mFlushLock;
        // Standard camera settings templates
        std::unique_ptr<const android::CameraMetadata> mTemplates[CAMERA3_TEMPLATE_COUNT];
        // Track in flight requests.
        std::unique_ptr<RequestTracker> mInFlightTracker;
};
}  // namespace default_camera_hal

These two headers cover essentially all the member variables and methods of the V4L2 camera object. The static factory NewV4L2Camera() returns a V4L2Camera pointer, which is stored into the container. Let's briefly analyze the camera construction source:
@hardware/libhardware/modules/camera/3_4/v4l2_camera.cpp

V4L2Camera* V4L2Camera::NewV4L2Camera(int id, const std::string path) {
  HAL_LOG_ENTER();
  //> 1. Construct the v4l2_wrapper object.
  std::shared_ptr<V4L2Wrapper> v4l2_wrapper(V4L2Wrapper::NewV4L2Wrapper(path));
  if (!v4l2_wrapper) {
    HAL_LOGE("Failed to initialize V4L2 wrapper.");
    return nullptr;
  }

  //> 2. Fetch this camera's configuration parameters.
  std::unique_ptr<Metadata> metadata;
  int res = GetV4L2Metadata(v4l2_wrapper, &metadata);
  if (res) {
    HAL_LOGE("Failed to initialize V4L2 metadata: %d", res);
    return nullptr;
  }
  //> 3. Construct the V4L2Camera object.
  return new V4L2Camera(id, std::move(v4l2_wrapper), std::move(metadata));
}

Creating a V4L2Camera object thus takes three steps:

  1. construct the v4l2_wrapper object;
  2. fetch the camera's configuration parameters;
  3. construct the V4L2Camera object.

Let's walk through each step.

Step 1: constructing the v4l2_wrapper object

path holds '/dev/videoX', where X is the number of the video device created by the kernel driver; in this example, loading v4l2loopback.ko into the kernel produced the camera node /dev/video4. Constructing v4l2_wrapper(V4L2Wrapper::NewV4L2Wrapper(path)) executes the following:

V4L2Wrapper* V4L2Wrapper::NewV4L2Wrapper(const std::string device_path) {
  std::unique_ptr<V4L2Gralloc> gralloc(V4L2Gralloc::NewV4L2Gralloc());
  if (!gralloc) {
    HAL_LOGE("Failed to initialize gralloc helper.");
    return nullptr;
  }

  return new V4L2Wrapper(device_path, std::move(gralloc));
}

V4L2Wrapper::V4L2Wrapper(const std::string device_path,
                         std::unique_ptr<V4L2Gralloc> gralloc)
    : device_path_(std::move(device_path)),
      gralloc_(std::move(gralloc)),
      connection_count_(0) {
  HAL_LOG_ENTER();
}

Constructing the v4l2_wrapper object also creates a V4L2Gralloc object, declared in the V4L2Wrapper class as:

  // The underlying gralloc module.
  std::unique_ptr<V4L2Gralloc> gralloc_;

We will not expand on the V4L2Gralloc class for now; it belongs to the graphics (gralloc) side and will come up in later hands-on articles.

Step 2: fetching the camera's configuration parameters

GetV4L2Metadata(v4l2_wrapper, &metadata) uses the v4l2_wrapper object built in step 1 to fetch the camera's configuration parameters and stores them in the std::unique_ptr<Metadata> metadata object. Source:
@hardware/libhardware/modules/camera/3_4/v4l2_metadata_factory.cpp

int GetV4L2Metadata(std::shared_ptr<V4L2Wrapper> device,
                    std::unique_ptr<Metadata>* result) {
  //> 1. Connect to the camera via V4L2Wrapper::Connection(device).
  V4L2Wrapper::Connection temp_connection = V4L2Wrapper::Connection(device);

  // Create the PartialMetadataSet object.
  PartialMetadataSet components;

  //> 2. Query the camera's controls and configure default values.
  components.insert(NoEffectMenuControl<uint8_t>( ...... ));

  // TODO(b/30510395): subcomponents of 3A.
  // In general, default to ON/AUTO since they imply pretty much nothing,
  // while OFF implies guarantees about not hindering performance.
  components.insert(std::unique_ptr<PartialMetadataInterface>( ...... ));

  // TODO(b/30921166): V4L2_CID_AUTO_EXPOSURE_BIAS is an int menu, so
  // this will be falling back to NoEffect until int menu support is added.
  components.insert(V4L2ControlOrDefault<int32_t>( ...... ));

  components.insert(std::unique_ptr<PartialMetadataInterface>( ...... ));

  // TODO(b/31021522): Autofocus subcomponent.
  components.insert( NoEffectMenuControl<uint8_t>( ...... ));

  //> some code omitted
  ......

  // TODO(b/30510395): subcomponents of scene modes
  // (may itself be a subcomponent of 3A).
  // Modes from each API that don't match up:
  // Android: FACE_PRIORITY, ACTION, NIGHT_PORTRAIT, THEATRE, STEADYPHOTO,
  // BARCODE, HIGH_SPEED_VIDEO.
  // V4L2: BACKLIGHT, DAWN_DUSK, FALL_COLORS, TEXT.

  //> 3. This invokes the EnumConverter constructor (new EnumConverter()).
  components.insert(V4L2ControlOrDefault<uint8_t>(
      ControlType::kMenu,
      ANDROID_CONTROL_SCENE_MODE,
      ANDROID_CONTROL_AVAILABLE_SCENE_MODES,
      device,
      V4L2_CID_SCENE_MODE,
      std::shared_ptr<ConverterInterface<uint8_t, int32_t>>(new EnumConverter( ...... )),
      ANDROID_CONTROL_SCENE_MODE_DISABLED));

  //> some code omitted
  ......

  // "LIMITED devices are strongly encouraged to use a non-negative value.
  // If UNKNOWN is used here then app developers do not have a way to know
  // when sensor settings have been applied." - Unfortunately, V4L2 doesn't
  // really help here either. Could even be that adjusting settings mid-stream
  // blocks in V4L2, and should be avoided.
  components.insert(
      std::unique_ptr<PartialMetadataInterface>(new Property<int32_t>(
          ANDROID_SYNC_MAX_LATENCY, ANDROID_SYNC_MAX_LATENCY_UNKNOWN)));
  // Never know when controls are synced.
  components.insert(FixedState<int64_t>(ANDROID_SYNC_FRAME_NUMBER,
                                        ANDROID_SYNC_FRAME_NUMBER_UNKNOWN));

  // Metadata is returned in a single result; not multiple pieces.
  components.insert(std::make_unique<Property<int32_t>>(
      ANDROID_REQUEST_PARTIAL_RESULT_COUNT, 1));

  //> 4. Populate the device's format-related properties.
  int res =
      AddFormatComponents(device, std::inserter(components, components.end()));
  if (res) {
    HAL_LOGE("Failed to initialize format components.");
    return res;
  }

  *result = std::make_unique<Metadata>(std::move(components));
  return 0;
}

Four places are marked in this function.

Part 1: connecting to the camera

This calls the V4L2Wrapper::Connection constructor:

// Helper class to ensure all opened connections are closed.
  class Connection {
   public:
    Connection(std::shared_ptr<V4L2Wrapper> device)
        : device_(std::move(device)), connect_result_(device_->Connect()) {}

    ~Connection() {
      if (connect_result_ == 0) {
        device_->Disconnect();
      }
    }
    // Check whether the connection succeeded or not.
    inline int status() const { return connect_result_; }

   private:
    std::shared_ptr<V4L2Wrapper> device_;
    const int connect_result_;
  };

The Connection constructor calls device_->Connect(), which is V4L2Wrapper's Connect method:

int V4L2Wrapper::Connect() {
  HAL_LOG_ENTER();
  std::lock_guard<std::mutex> lock(connection_lock_);

  if (connected()) {
    HAL_LOGV("Camera device %s is already connected.", device_path_.c_str());
    ++connection_count_;
    return 0;
  }

  // Open in nonblocking mode (DQBUF may return EAGAIN).
  int fd = TEMP_FAILURE_RETRY(open(device_path_.c_str(), O_RDWR | O_NONBLOCK));
  if (fd < 0) {
    HAL_LOGE("failed to open %s (%s)", device_path_.c_str(), strerror(errno));
    return -ENODEV;
  }
  device_fd_.reset(fd);
  ++connection_count_;

  // Check if this connection has the extended control query capability.
  v4l2_query_ext_ctrl query;
  query.id = V4L2_CTRL_FLAG_NEXT_CTRL | V4L2_CTRL_FLAG_NEXT_COMPOUND;
  extended_query_supported_ = (IoctlLocked(VIDIOC_QUERY_EXT_CTRL, &query) == 0);

  // TODO(b/29185945): confirm this is a supported device.
  // This is checked by the HAL, but the device at device_path_ may
  // not be the same one that was there when the HAL was loaded.
  // (Alternatively, better hotplugging support may make this unecessary
  // by disabling cameras that get disconnected and checking newly connected
  // cameras, so Connect() is never called on an unsupported camera)
  return 0;
}

Besides opening the node and bumping the connection count, Connect checks whether the camera supports the VIDIOC_QUERY_EXT_CTRL extended-control query.
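The pairing that the Connection helper shown earlier guarantees (connect on construction, disconnect on destruction only if the connect succeeded) can be modeled with a small reference-counting stub. The names and the counting model here are illustrative stand-ins, not the real V4L2Wrapper:

```cpp
#include <cassert>

// Illustrative stand-in for V4L2Wrapper: Connect() bumps a connection count,
// Disconnect() drops it; the device counts as open while the count is > 0.
struct FakeWrapper {
  int connection_count = 0;
  int Connect() { ++connection_count; return 0; }
  void Disconnect() { --connection_count; }
  bool connected() const { return connection_count > 0; }
};

// RAII helper in the spirit of V4L2Wrapper::Connection: connect on
// construction, disconnect on destruction if (and only if) connect succeeded.
class ScopedConnection {
 public:
  explicit ScopedConnection(FakeWrapper& dev)
      : device_(dev), connect_result_(device_.Connect()) {}
  ~ScopedConnection() {
    if (connect_result_ == 0) device_.Disconnect();
  }
  int status() const { return connect_result_; }

 private:
  FakeWrapper& device_;
  const int connect_result_;
};
```

This is why GetV4L2Metadata can simply declare temp_connection at the top: when the function returns, the destructor closes the device automatically on every exit path.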

Part 2: querying camera controls and configuring default values

@hardware/libhardware/modules/camera/3_4/metadata/partial_metadata_factory.h

template <typename T>
std::unique_ptr<Control<T>> V4L2Control(
    ControlType type,
    int32_t delegate_tag,
    int32_t options_tag,
    std::shared_ptr<V4L2Wrapper> device,
    int control_id,
    std::shared_ptr<ConverterInterface<T, int32_t>> converter,
    std::map<int, T> default_values) {
  HAL_LOG_ENTER();

  //> Query the device's control parameters.
  v4l2_query_ext_ctrl control_query;
  int res = device->QueryControl(control_id, &control_query);
  if (res) {
    HAL_LOGE("Failed to query control %d.", control_id);
    return nullptr;
  }

  int32_t control_min = static_cast<int32_t>(control_query.minimum);
  int32_t control_max = static_cast<int32_t>(control_query.maximum);
  int32_t control_step = static_cast<int32_t>(control_query.step);
  if (control_min > control_max) {
    HAL_LOGE("No acceptable values (min %d is greater than max %d).",
             control_min,
             control_max);
    return nullptr;
  }

  // Variables needed by the various switch statements.
  std::vector<T> options;
  T metadata_val;
  T metadata_min;
  T metadata_max;
  // Set up the result converter and result options based on type.
  std::shared_ptr<ConverterInterface<T, int32_t>> result_converter(converter);
  std::unique_ptr<ControlOptionsInterface<T>> result_options;
  switch (control_query.type) {
    case V4L2_CTRL_TYPE_BOOLEAN:
      if (type != ControlType::kMenu) {
        HAL_LOGE(
            "V4L2 control %d is of type %d, which isn't compatible with "
            "desired metadata control type %d",
            control_id,
            control_query.type,
            type);
        return nullptr;
      }

      // Convert each available option,
      // ignoring ones without a known conversion.
      for (int32_t i = control_min; i <= control_max; i += control_step) {
        res = converter->V4L2ToMetadata(i, &metadata_val);
        if (res == -EINVAL) {
          HAL_LOGV("V4L2 value %d for control %d has no metadata equivalent.",
                   i,
                   control_id);
          continue;
        } else if (res) {
          HAL_LOGE("Error converting value %d for control %d.", i, control_id);
          return nullptr;
        }
        options.push_back(metadata_val);
      }
      // Check to make sure there's at least one option.
      if (options.empty()) {
        HAL_LOGE("No valid options for control %d.", control_id);
        return nullptr;
      }

      result_options.reset(new MenuControlOptions<T>(options, default_values));
      // No converter changes necessary.
      break;
    case V4L2_CTRL_TYPE_INTEGER:
      if (type != ControlType::kSlider) {
        HAL_LOGE(
            "V4L2 control %d is of type %d, which isn't compatible with "
            "desired metadata control type %d",
            control_id,
            control_query.type,
            type);
        return nullptr;
      }

      // Upgrade to a range/step-clamping converter.
      result_converter.reset(new RangedConverter<T, int32_t>(
          converter, control_min, control_max, control_step));

      // Convert the min and max.
      res = result_converter->V4L2ToMetadata(control_min, &metadata_min);
      if (res) {
        HAL_LOGE(
            "Failed to convert V4L2 min value %d for control %d to metadata.",
            control_min,
            control_id);
        return nullptr;
      }
      res = result_converter->V4L2ToMetadata(control_max, &metadata_max);
      if (res) {
        HAL_LOGE(
            "Failed to convert V4L2 max value %d for control %d to metadata.",
            control_max,
            control_id);
        return nullptr;
      }
      result_options.reset(new SliderControlOptions<T>(
          metadata_min, metadata_max, default_values));
      break;
    default:
      HAL_LOGE("Control %d (%s) is of unsupported type %d",
               control_id,
               control_query.name,
               control_query.type);
      return nullptr;
  }

  // Construct the control.
  return std::make_unique<Control<T>>(
      std::make_unique<TaggedControlDelegate<T>>(
          delegate_tag,
          std::make_unique<V4L2ControlDelegate<T>>(
              device, control_id, result_converter)),
      std::make_unique<TaggedControlOptions<T>>(options_tag,
                                                std::move(result_options)));
}

This template performs the metadata conversion; there is no need to trace it much further here.
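The "range/step-clamping" upgrade for integer controls normalizes a requested value onto the V4L2 control's [min, max] range and step grid before it is sent to the driver. A standalone sketch of that clamping (the helper name and the round-down-to-step behavior are my own illustration, not necessarily RangedConverter's exact rounding):

```cpp
#include <cassert>
#include <cstdint>

// Clamp |value| into [min, max] and snap it down onto the step grid anchored
// at |min| - the kind of normalization a range/step-clamping converter must
// do before handing a metadata value to VIDIOC_S_CTRL. (Rounding direction
// is an assumption for illustration.)
int32_t ClampToControlRange(int32_t value, int32_t min, int32_t max,
                            int32_t step) {
  if (value < min) return min;
  if (value > max) return max;
  return min + ((value - min) / step) * step;
}
```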

Part 4: configuring the device parameters

@hardware/libhardware/modules/camera/3_4/format_metadata_factory.cpp

int AddFormatComponents(
    std::shared_ptr<V4L2Wrapper> device,
    std::insert_iterator<PartialMetadataSet> insertion_point) {
  HAL_LOG_ENTER();

  //> 1. Get all supported formats from the hardware camera.
  std::set<int32_t> hal_formats;
  int res = GetHalFormats(device, &hal_formats);

  // Requirements check: need to support YCbCr_420_888, JPEG, ...
  ......  //> local declarations and the loop over hal_formats omitted

    //> 2. Get the available sizes for this format.
    std::set<std::array<int32_t, 2>> frame_sizes;
    res = device->GetFormatFrameSizes(v4l2_format, &frame_sizes);

    //> 3. For every frame_size, collect the supported frame durations (fps).
    for (const auto& frame_size : frame_sizes) {
      // Note the format and size combination in stream configs.
      stream_configs.push_back(
          {{hal_format,
            frame_size[0],
            frame_size[1],
            ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_OUTPUT}});

      //> Find the duration range for this format and size.
      std::array<int64_t, 2> duration_range;
      res = device->GetFormatFrameDurationRange(
          v4l2_format, frame_size, &duration_range);
      if (res) {
        HAL_LOGE(
            "Failed to get frame duration range for format %d, "
            "size %u x %u",
            v4l2_format,
            frame_size[0],
            frame_size[1]);
        return res;
      }
      int64_t size_min_frame_duration = duration_range[0];
      int64_t size_max_frame_duration = duration_range[1];
      min_frame_durations.push_back({{hal_format,
                                      frame_size[0],
                                      frame_size[1],
                                      size_min_frame_duration}});

      // Note the stall duration for this format and size.
      // Usually 0 for non-jpeg, non-zero for JPEG.
      // Randomly choosing absurd 1 sec for JPEG. Unsure what this breaks.
      int64_t stall_duration = 0;
      if (hal_format == HAL_PIXEL_FORMAT_BLOB) {
        stall_duration = 1000000000;
      }
      stall_durations.push_back(
          {{hal_format, frame_size[0], frame_size[1], stall_duration}});

      // Update our search for general min & max frame durations.
      // In theory max frame duration (min frame rate) should be consistent
      // between all formats, but we check and only advertise the smallest
      // available max duration just in case.
      if (size_max_frame_duration < min_max_frame_duration) {
        min_max_frame_duration = size_max_frame_duration;
      }
      // We only care about the largest min frame duration
      // (smallest max frame rate) for YUV sizes.
      if (hal_format == HAL_PIXEL_FORMAT_YCbCr_420_888 &&
          size_min_frame_duration > max_min_frame_duration_yuv) {
        max_min_frame_duration_yuv = size_min_frame_duration;
      }
    }
  }

  // Convert from frame durations measured in ns.
  // Min fps supported by all formats.
  int32_t min_fps = 1000000000 / min_max_frame_duration;
  if (min_fps > 15) {
    HAL_LOGE("Minimum FPS %d is larger than HAL max allowable value of 15",
             min_fps);
    return -ENODEV;
  }
  // Max fps supported by all YUV formats.
  int32_t max_yuv_fps = 1000000000 / max_min_frame_duration_yuv;
  // ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES should be at minimum
  // {mi, ma}, {ma, ma} where mi and ma are min and max frame rates for
  // YUV_420_888. Min should be at most 15.
  std::vector<std::array<int32_t, 2>> fps_ranges;
  fps_ranges.push_back({{min_fps, max_yuv_fps}});

  std::array<int32_t, 2> video_fps_range;
  int32_t video_fps = 30;
  if (video_fps >= max_yuv_fps) {
    video_fps_range = {{max_yuv_fps, max_yuv_fps}};
  } else {
    video_fps_range = {{video_fps, video_fps}};
  }
  fps_ranges.push_back(video_fps_range);

  // Construct the metadata components.
  insertion_point = std::make_unique<Property<ArrayVector<int32_t, 4>>>(
      ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS,
      std::move(stream_configs));
  insertion_point = std::make_unique<Property<ArrayVector<int64_t, 4>>>(
      ANDROID_SCALER_AVAILABLE_MIN_FRAME_DURATIONS,
      std::move(min_frame_durations));
  insertion_point = std::make_unique<Property<ArrayVector<int64_t, 4>>>(
      ANDROID_SCALER_AVAILABLE_STALL_DURATIONS, std::move(stall_durations));
  insertion_point = std::make_unique<Property<int64_t>>(
      ANDROID_SENSOR_INFO_MAX_FRAME_DURATION, min_max_frame_duration);
  // TODO(b/31019725): This should probably not be a NoEffect control.
  insertion_point = NoEffectMenuControl<std::array<int32_t, 2>>(
      ANDROID_CONTROL_AE_TARGET_FPS_RANGE,
      ANDROID_CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES,
      fps_ranges,
      {{CAMERA3_TEMPLATE_VIDEO_RECORD, video_fps_range},
       {OTHER_TEMPLATES, fps_ranges[0]}});

  return 0;
}
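The tail of this function derives the advertised FPS ranges from the collected frame durations. That derivation is pure arithmetic and can be checked with concrete numbers; the helper below is a standalone re-implementation for illustration, with the 30 fps video target mirroring the code above:

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <vector>

// Derive the AE target FPS ranges the same way AddFormatComponents does:
// a wide {min_fps, max_yuv_fps} range, plus a fixed-rate video range that
// targets 30 fps but never exceeds what YUV formats can deliver.
std::vector<std::array<int32_t, 2>> DeriveFpsRanges(
    int64_t min_max_frame_duration_ns,
    int64_t max_min_frame_duration_yuv_ns) {
  int32_t min_fps =
      static_cast<int32_t>(1000000000 / min_max_frame_duration_ns);
  int32_t max_yuv_fps =
      static_cast<int32_t>(1000000000 / max_min_frame_duration_yuv_ns);
  std::vector<std::array<int32_t, 2>> fps_ranges;
  fps_ranges.push_back({{min_fps, max_yuv_fps}});
  const int32_t video_fps = 30;
  if (video_fps >= max_yuv_fps) {
    fps_ranges.push_back({{max_yuv_fps, max_yuv_fps}});
  } else {
    fps_ranges.push_back({{video_fps, video_fps}});
  }
  return fps_ranges;
}
```

For a virtual camera this matters directly: the frame intervals your v4l2loopback-backed device reports determine the FPS ranges advertised to apps, and a minimum FPS above 15 makes the HAL reject the device outright.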

Three places are marked in this function as well.

Part 4.1: getting the hardware camera formats

Call chain:
==> GetHalFormats(device, &hal_formats)
==> device->GetFormats(&v4l2_formats)
==> V4L2Wrapper::GetFormats(std::set<uint32_t>* v4l2_formats)
This ultimately lands in V4L2Wrapper's GetFormats() method:

int V4L2Wrapper::GetFormats(std::set<uint32_t>* v4l2_formats) {
  HAL_LOG_ENTER();

  v4l2_fmtdesc format_query;
  memset(&format_query, 0, sizeof(format_query));
  // TODO(b/30000211): multiplanar support.
  format_query.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
  //> Enumerate formats with VIDIOC_ENUM_FMT.
  while (IoctlLocked(VIDIOC_ENUM_FMT, &format_query) >= 0) {
    v4l2_formats->insert(format_query.pixelformat);
    ++format_query.index;
  }

  if (errno != EINVAL) {
    HAL_LOGE(
        "ENUM_FMT fails at index %d: %s", format_query.index, strerror(errno));
    return -ENODEV;
  }
  return 0;
}
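GetFormats() shows the standard V4L2 enumeration idiom: bump .index until the ioctl fails, then treat errno == EINVAL as the normal end-of-list and anything else as a real error. The control flow can be exercised against a stubbed ioctl; the stub below stands in for IoctlLocked and touches no real device:

```cpp
#include <cassert>
#include <cerrno>
#include <cstdint>
#include <functional>
#include <set>
#include <vector>

// Enumerate items by index until the source reports EINVAL, the way
// GetFormats() walks VIDIOC_ENUM_FMT. |query| returns >= 0 on success and -1
// with errno set otherwise. Returns 0 on a clean end-of-list, -ENODEV else.
int EnumerateAll(const std::function<int(uint32_t, uint32_t*)>& query,
                 std::set<uint32_t>* out) {
  uint32_t index = 0;
  uint32_t value = 0;
  while (query(index, &value) >= 0) {
    out->insert(value);
    ++index;
  }
  return (errno == EINVAL) ? 0 : -ENODEV;
}

// Stub "driver" exposing a fixed format list, mimicking a V4L2 ENUM ioctl.
int FakeEnumIoctl(const std::vector<uint32_t>& formats, uint32_t index,
                  uint32_t* value) {
  if (index >= formats.size()) {
    errno = EINVAL;  // normal end of enumeration
    return -1;
  }
  *value = formats[index];
  return 0;
}
```

For a v4l2loopback-backed camera this is exactly the path that decides which pixel formats the HAL sees, so the loopback driver's vidioc_enum_fmt handler must enumerate cleanly and end with EINVAL.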

Part 4.2: getting the frame sizes

Call chain:
==> device->GetFormatFrameSizes(v4l2_format, &frame_sizes);
==> V4L2Wrapper::GetFormatFrameSizes()
Source:

int V4L2Wrapper::GetFormatFrameSizes(uint32_t v4l2_format,
                                     std::set<std::array<int32_t, 2>>* sizes) {
  v4l2_frmsizeenum size_query;
  memset(&size_query, 0, sizeof(size_query));

  size_query.pixel_format = v4l2_format;
  //> Enumerate sizes with VIDIOC_ENUM_FRAMESIZES.
  if (IoctlLocked(VIDIOC_ENUM_FRAMESIZES, &size_query) < 0) {
    HAL_LOGE("ENUM_FRAMESIZES failed: %s", strerror(errno));
    return -ENODEV;
  }

  if (size_query.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
    // Discrete: enumerate all sizes using VIDIOC_ENUM_FRAMESIZES.
    // Assuming that a driver with discrete frame sizes has a reasonable number
    // of them.
    do {
      sizes->insert({{{static_cast<int32_t>(size_query.discrete.width),
                       static_cast<int32_t>(size_query.discrete.height)}}});
      ++size_query.index;
    } while (IoctlLocked(VIDIOC_ENUM_FRAMESIZES, &size_query) >= 0);
    if (errno != EINVAL) {
      HAL_LOGE("ENUM_FRAMESIZES fails at index %d: %s",
               size_query.index,
               strerror(errno));
      return -ENODEV;
    }
  } else {
    // Continuous/Step-wise: based on the stepwise struct returned by the query.
    // Fully listing all possible sizes, with large enough range/small enough
    // step size, may produce far too many potential sizes. Instead, find the
    // closest to a set of standard sizes.
    for (const auto size : kStandardSizes) {
      // Find the closest size, rounding up.
      uint32_t desired_width = size[0];
      uint32_t desired_height = size[1];
      if (desired_width < size_query.stepwise.min_width ||
          desired_height < size_query.stepwise.min_height) {
        HAL_LOGV("Standard size %u x %u is too small for format %d",
                 desired_width,
                 desired_height,
                 v4l2_format);
        continue;
      } else if (desired_width > size_query.stepwise.max_width &&
                 desired_height > size_query.stepwise.max_height) {
        HAL_LOGV("Standard size %u x %u is too big for format %d",
                 desired_width,
                 desired_height,
                 v4l2_format);
        continue;
      }

      // Round up.
      uint32_t width_steps = (desired_width - size_query.stepwise.min_width +
                              size_query.stepwise.step_width - 1) /
                             size_query.stepwise.step_width;
      uint32_t height_steps = (desired_height - size_query.stepwise.min_height +
                               size_query.stepwise.step_height - 1) /
                              size_query.stepwise.step_height;
      sizes->insert(
          {{{static_cast<int32_t>(size_query.stepwise.min_width +
                                  width_steps * size_query.stepwise.step_width),
             static_cast<int32_t>(size_query.stepwise.min_height +
                                  height_steps *
                                      size_query.stepwise.step_height)}}});
    }
  }
  return 0;
}

Part 4.3: getting the fps values supported by each frame_size

Call chain:
==> device->GetFormatFrameDurationRange(v4l2_format, frame_size, &duration_range);
==> V4L2Wrapper::GetFormatFrameDurationRange(uint32_t v4l2_format, const std::array<int32_t, 2>& size, std::array<int64_t, 2>* duration_range)
Source:

int V4L2Wrapper::GetFormatFrameDurationRange(
    uint32_t v4l2_format,
    const std::array<int32_t, 2>& size,
    std::array<int64_t, 2>* duration_range) {
  // Potentially called so many times logging entry is a bad idea.

  v4l2_frmivalenum duration_query;
  memset(&duration_query, 0, sizeof(duration_query));
  duration_query.pixel_format = v4l2_format;
  duration_query.width = size[0];
  duration_query.height = size[1];
  //> Enumerate intervals with VIDIOC_ENUM_FRAMEINTERVALS.
  if (IoctlLocked(VIDIOC_ENUM_FRAMEINTERVALS, &duration_query) < 0) {
    HAL_LOGE("ENUM_FRAMEINTERVALS failed: %s", strerror(errno));
    return -ENODEV;
  }

  int64_t min = std::numeric_limits<int64_t>::max();
  int64_t max = std::numeric_limits<int64_t>::min();
  if (duration_query.type == V4L2_FRMSIZE_TYPE_DISCRETE) {
    // Discrete: enumerate all durations using VIDIOC_ENUM_FRAMEINTERVALS.
    do {
      min = std::min(min, FractToNs(duration_query.discrete));
      max = std::max(max, FractToNs(duration_query.discrete));
      ++duration_query.index;
    } while (IoctlLocked(VIDIOC_ENUM_FRAMEINTERVALS, &duration_query) >= 0);
    if (errno != EINVAL) {
      HAL_LOGE("ENUM_FRAMEINTERVALS fails at index %d: %s",
               duration_query.index,
               strerror(errno));
      return -ENODEV;
    }
  } else {
    // Continuous/Step-wise: simply convert the given min and max.
    min = FractToNs(duration_query.stepwise.min);
    max = FractToNs(duration_query.stepwise.max);
  }
  (*duration_range)[0] = min;
  (*duration_range)[1] = max;
  return 0;
}

When parameters are configured for the camera, this code queries them through the v4l2 ioctl interface. We can use this as a checklist against v4l2loopback's struct v4l2_ioctl_ops, verifying that the corresponding handler functions are present.

Step 3: Constructing the V4L2Camera object

The V4L2Camera object is built from the v4l2_wrapper object created in step 1 and the metadata object created in step 2:

new V4L2Camera(id, std::move(v4l2_wrapper), std::move(metadata))

The constructor is defined in @hardware/libhardware/modules/camera/3_4/v4l2_camera.cpp:

V4L2Camera::V4L2Camera(int id,
                       std::shared_ptr<V4L2Wrapper> v4l2_wrapper,
                       std::unique_ptr<Metadata> metadata)
    : default_camera_hal::Camera(id),
      device_(std::move(v4l2_wrapper)),
      metadata_(std::move(metadata)),
      max_input_streams_(0),
      max_output_streams_({{0, 0, 0}}),
      buffer_enqueuer_(new FunctionThread(
          std::bind(&V4L2Camera::enqueueRequestBuffers, this))),
      buffer_dequeuer_(new FunctionThread(
          std::bind(&V4L2Camera::dequeueRequestBuffers, this))) {
  HAL_LOG_ENTER();
}

The camera object is constructed from the id, v4l2_wrapper and metadata. The id is passed to the default_camera_hal::Camera base constructor, and the resulting object is stored in the std::vector<std::unique_ptr<default_camera_hal::Camera>> mCameras container. The Camera constructor is:

Camera::Camera(int id)
  : mId(id),
    mSettingsSet(false),
    mBusy(false),
    mCallbackOps(NULL),
    mInFlightTracker(new RequestTracker)
{
    memset(&mTemplates, 0, sizeof(mTemplates));
    memset(&mDevice, 0, sizeof(mDevice));
    mDevice.common.tag    = HARDWARE_DEVICE_TAG;
    mDevice.common.version = CAMERA_DEVICE_API_VERSION_3_4;
    mDevice.common.close  = close_device;
    mDevice.ops           = const_cast<camera3_device_ops_t*>(&sOps);
    mDevice.priv          = this;
}

The constructor holds the v4l2_wrapper and metadata objects through smart pointers, and starts the buffer_enqueuer_ and buffer_dequeuer_ threads that service the frame queues.

This completes the walkthrough of Camera object construction, i.e. the device-initialization part. Once initialization succeeds, the user can take photos and record video with the camera; the capture flow is traced below.

For background on camera parameter configuration, see:
https://www.cnblogs.com/sky-heaven/p/9548494.html
https://blog.csdn.net/xxxxxlllllxl/article/details/22074151

The open Camera flow

The camera device is opened through the .methods entry of the HAL_MODULE_INFO_SYM module; the flow is:

//> 1 HAL_MODULE_INFO_SYM
static int open_dev(const hw_module_t* module,
                    const char* name,
                    hw_device_t** device) {
  return gCameraHAL.openDevice(module, name, device);
}
//> 2
int V4L2CameraHAL::openDevice(const hw_module_t* module,
                              const char* name,
                              hw_device_t** device) {
  HAL_LOG_ENTER();

  if (module != &HAL_MODULE_INFO_SYM.common) {
    HAL_LOGE(
        "Invalid module %p expected %p", module, &HAL_MODULE_INFO_SYM.common);
    return -EINVAL;
  }

  int id;
  if (!android::base::ParseInt(name, &id, 0, getNumberOfCameras() - 1)) {
    return -EINVAL;
  }
  // TODO(b/29185945): Hotplugging: return -EINVAL if unplugged.
  return mCameras[id]->openDevice(module, device);
}

//> 3. The function that does the actual work
int Camera::openDevice(const hw_module_t *module, hw_device_t **device)
{
    ALOGI("%s:%d: Opening camera device", __func__, mId);
    ATRACE_CALL();
    android::Mutex::Autolock al(mDeviceLock);

    if (mBusy) {
        ALOGE("%s:%d: Error! Camera device already opened", __func__, mId);
        return -EBUSY;
    }

    int connectResult = connect();
    if (connectResult != 0) {
      return connectResult;
    }
    mBusy = true;
    mDevice.common.module = const_cast<hw_module_t*>(module);
    *device = &mDevice.common;
    return 0;
}

Once the camera is connected, the device counts as open. We will not analyze the connection details for now; the calling code is:

int V4L2Camera::connect() {
  HAL_LOG_ENTER();

  if (connection_) {
    HAL_LOGE("Already connected. Please disconnect and try again.");
    return -EIO;
  }

  connection_.reset(new V4L2Wrapper::Connection(device_));
  if (connection_->status()) {
    HAL_LOGE("Failed to connect to device.");
    return connection_->status();
  }

  // TODO(b/29185945): confirm this is a supported device.
  // This is checked by the HAL, but the device at |device_|'s path may
  // not be the same one that was there when the HAL was loaded.
  // (Alternatively, better hotplugging support may make this unecessary
  // by disabling cameras that get disconnected and checking newly connected
  // cameras, so connect() is never called on an unsupported camera)

  // TODO(b/29158098): Inform service of any flashes that are no longer
  // available because this camera is in use.
  return 0;
}

Inside the Connection, open(device_path_.c_str(), O_RDWR | O_NONBLOCK) opens the camera device in non-blocking mode, and the driver is queried for VIDIOC_QUERY_EXT_CTRL support. With the device open, we can move on to capture.

The Camera Capture flow

Taking a photo or recording video essentially means sending the camera output_buffer contents up to the frameworks layer; when that layer receives the completion notification, it invokes the user-space callback. We will take this as the working model for now; I will verify it in the follow-up post 《谁在调用 v4l2_camera_HAL 摄像头驱动》. The HAL-side source involved is:

static int process_capture_request(const camera3_device_t *dev,
        camera3_capture_request_t *request)
{
    return camdev_to_camera(dev)->processCaptureRequest(request);
}

//> CaptureRequest
int Camera::processCaptureRequest(camera3_capture_request_t *temp_request)
{
    int res;
    // TODO(b/32917568): A capture request submitted or ongoing during a flush
    // should be returned with an error; for now they are mutually exclusive.
    android::Mutex::Autolock al(mFlushLock);

    ATRACE_CALL();

    if (temp_request == NULL) {
        ALOGE("%s:%d: NULL request received", __func__, mId);
        return -EINVAL;
    }

    // Make a persistent copy of request, since otherwise it won't live
    // past the end of this method.
    std::shared_ptr<CaptureRequest> request = std::make_shared<CaptureRequest>(temp_request);

    ALOGV("%s:%d: frame: %d", __func__, mId, request->frame_number);

    if (!mInFlightTracker->CanAddRequest(*request)) {
        // Streams are full or frame number is not unique.
        ALOGE("%s:%d: Can not add request.", __func__, mId);
        return -EINVAL;
    }

    // Null/Empty indicates use last settings
    if (request->settings.isEmpty() && !mSettingsSet) {
        ALOGE("%s:%d: NULL settings without previous set Frame:%d",
              __func__, mId, request->frame_number);
        return -EINVAL;
    }

    if (request->input_buffer != NULL) {
        ALOGV("%s:%d: Reprocessing input buffer %p", __func__, mId,
              request->input_buffer.get());
    } else {
        ALOGV("%s:%d: Capturing new frame.", __func__, mId);
    }

    if (!isValidRequestSettings(request->settings)) {
        ALOGE("%s:%d: Invalid request settings.", __func__, mId);
        return -EINVAL;
    }

    // Pre-process output buffers.
    if (request->output_buffers.size() <= 0) {
        ALOGE("%s:%d: Invalid number of output buffers: %d", __func__, mId,
              request->output_buffers.size());
        return -EINVAL;
    }
    for (auto& output_buffer : request->output_buffers) {
        res = preprocessCaptureBuffer(&output_buffer);       //> synchronize on the output_buffer's fence
        if (res)
            return -ENODEV;
    }

    // Add the request to tracking.
    if (!mInFlightTracker->Add(request)) {
        ALOGE("%s:%d: Failed to track request for frame %d.",
              __func__, mId, request->frame_number);
        return -ENODEV;
    }

    // Valid settings have been provided (mSettingsSet is a misnomer;
    // all that matters is that a previous request with valid settings
    // has been passed to the device, not that they've been set).
    mSettingsSet = true;

    // Send the request off to the device for completion.
    enqueueRequest(request);

    // Request is now in flight. The device will call completeRequest
    // asynchronously when it is done filling buffers and metadata.
    return 0;
}

After the output_buffer has been pre-processed, the request is pushed into the queue:

int V4L2Camera::enqueueRequest(
    std::shared_ptr<default_camera_hal::CaptureRequest> request) {
  HAL_LOG_ENTER();

  // Assume request validated before calling this function.
  // (For now, always exactly 1 output buffer, no inputs).
  {
    std::lock_guard<std::mutex> guard(request_queue_lock_);
    request_queue_.push(request);
    requests_available_.notify_one();
  }

  return 0;
}

After enqueueRequest() pushes the request onto the queue, it signals the requests_available_ condition variable; the thread waiting on it runs the following code:

bool V4L2Camera::enqueueRequestBuffers() {
  // Get a request from the queue (blocks this thread until one is available).
  std::shared_ptr<default_camera_hal::CaptureRequest> request =
      dequeueRequest();

  // Assume request validated before being added to the queue
  // (For now, always exactly 1 output buffer, no inputs).

  // Setting and getting settings are best effort here,
  // since there's no way to know through V4L2 exactly what
  // settings are used for a buffer unless we were to enqueue them
  // one at a time, which would be too slow.

  // Set the requested settings
  int res = metadata_->SetRequestSettings(request->settings);
  if (res) {
    HAL_LOGE("Failed to set settings.");
    completeRequest(request, res);
    return true;
  }

  // Replace the requested settings with a snapshot of
  // the used settings/state immediately before enqueue.
  res = metadata_->FillResultMetadata(&request->settings);
  if (res) {
    // Note: since request is a shared pointer, this may happen if another
    // thread has already decided to complete the request (e.g. via flushing),
    // since that locks the metadata (in that case, this failing is fine,
    // and completeRequest will simply do nothing).
    HAL_LOGE("Failed to fill result metadata.");
    completeRequest(request, res);
    return true;
  }

  // Actually enqueue the buffer for capture.
  {
    std::lock_guard<std::mutex> guard(in_flight_lock_);

    uint32_t index;
    res = device_->EnqueueBuffer(&request->output_buffers[0], &index);
    if (res) {
      HAL_LOGE("Device failed to enqueue buffer.");
      completeRequest(request, res);
      return true;
    }

    // Make sure the stream is on (no effect if already on).
    res = device_->StreamOn();
    if (res) {
      HAL_LOGE("Device failed to turn on stream.");
      // Don't really want to send an error for only the request here,
      // since this is a full device error.
      // TODO: Should trigger full flush.
      return true;
    }

    // Note: the request should be dequeued/flushed from the device
    // before removal from in_flight_.
    in_flight_.emplace(index, request);
    buffers_in_flight_.notify_one();
  }

  return true;
}

This thread calls device_->EnqueueBuffer(&request->output_buffers[0], &index) to queue the buffer into the camera device:

int V4L2Wrapper::EnqueueBuffer(const camera3_stream_buffer_t* camera_buffer,
                               uint32_t* enqueued_index) {
  if (!format_) {
    HAL_LOGE("Stream format must be set before enqueuing buffers.");
    return -ENODEV;
  }

  // Find a free buffer index. Could use some sort of persistent hinting
  // here to improve expected efficiency, but buffers_.size() is expected
  // to be low enough (<10 experimentally) that it's not worth it.
  int index = -1;
  {
    std::lock_guard<std::mutex> guard(buffer_queue_lock_);
    for (int i = 0; i < buffers_.size(); ++i) {
      if (!buffers_[i]) {
        index = i;
        break;
      }
    }
  }
  if (index < 0) {
    // Note: The HAL should be tracking the number of buffers in flight
    // for each stream, and should never overflow the device.
    HAL_LOGE("Cannot enqueue buffer: stream is already full.");
    return -ENODEV;
  }

  // Set up a v4l2 buffer struct.
  v4l2_buffer device_buffer;
  memset(&device_buffer, 0, sizeof(device_buffer));
  device_buffer.type = format_->type();
  device_buffer.index = index;

  // Use QUERYBUF to ensure our buffer/device is in good shape,
  // and fill out remaining fields.
  if (IoctlLocked(VIDIOC_QUERYBUF, &device_buffer) < 0) {
    HAL_LOGE("QUERYBUF fails: %s", strerror(errno));
    return -ENODEV;
  }

  // Lock the buffer for writing (fills in the user pointer field).
  int res =
      gralloc_->lock(camera_buffer, format_->bytes_per_line(), &device_buffer);
  if (res) {
    HAL_LOGE("Gralloc failed to lock buffer.");
    return res;
  }
  if (IoctlLocked(VIDIOC_QBUF, &device_buffer) < 0) {
    HAL_LOGE("QBUF fails: %s", strerror(errno));
    gralloc_->unlock(&device_buffer);
    return -ENODEV;
  }

  // Mark the buffer as in flight.
  std::lock_guard<std::mutex> guard(buffer_queue_lock_);
  buffers_[index] = true;

  if (enqueued_index) {
    *enqueued_index = index;
  }
  return 0;
}

Once the buffer has been queued, the code calls buffers_in_flight_.notify_one() to wake the V4L2Camera::dequeueRequestBuffers() thread that waits for buffers to read back:

bool V4L2Camera::dequeueRequestBuffers() {
  // Dequeue a buffer.
  uint32_t result_index;
  int res = device_->DequeueBuffer(&result_index);
  if (res) {
    if (res == -EAGAIN) {
      // EAGAIN just means nothing to dequeue right now.
      // Wait until something is available before looping again.
      std::unique_lock<std::mutex> lock(in_flight_lock_);
      while (in_flight_.empty()) {
        buffers_in_flight_.wait(lock);
      }
    } else {
      HAL_LOGW("Device failed to dequeue buffer: %d", res);
    }
    return true;
  }

  // Find the associated request and complete it.
  std::lock_guard<std::mutex> guard(in_flight_lock_);
  auto index_request = in_flight_.find(result_index);
  if (index_request != in_flight_.end()) {
    completeRequest(index_request->second, 0);
    in_flight_.erase(index_request);
  } else {
    HAL_LOGW(
        "Dequeued non in-flight buffer index %d. "
        "This buffer may have been flushed from the HAL but not the device.",
        result_index);
  }
  return true;
}

This thread calls device_->DequeueBuffer(&result_index) to retrieve the filled buffer from the camera:

int V4L2Wrapper::DequeueBuffer(uint32_t* dequeued_index) {
  if (!format_) {
    HAL_LOGV(
        "Format not set, so stream can't be on, "
        "so no buffers available for dequeueing");
    return -EAGAIN;
  }

  v4l2_buffer buffer;
  memset(&buffer, 0, sizeof(buffer));
  buffer.type = format_->type();
  buffer.memory = V4L2_MEMORY_USERPTR;
  int res = IoctlLocked(VIDIOC_DQBUF, &buffer);
  if (res) {
    if (errno == EAGAIN) {
      // Expected failure.
      return -EAGAIN;
    } else {
      // Unexpected failure.
      HAL_LOGE("DQBUF fails: %s", strerror(errno));
      return -ENODEV;
    }
  }

  // Mark the buffer as no longer in flight.
  {
    std::lock_guard<std::mutex> guard(buffer_queue_lock_);
    buffers_[buffer.index] = false;
  }

  // Now that we're done painting the buffer, we can unlock it.
  res = gralloc_->unlock(&buffer);
  if (res) {
    HAL_LOGE("Gralloc failed to unlock buffer after dequeueing.");
    return res;
  }

  if (dequeued_index) {
    *dequeued_index = buffer.index;
  }
  return 0;
}

Summary:
(1). When the user triggers a capture, processCaptureRequest() handles the request and signals the requests_available_ condition variable; the enqueueRequestBuffers() thread wakes up, queues the output buffer into the camera device (VIDIOC_QBUF) and turns the stream on.

(2). Once a buffer is in flight, buffers_in_flight_ is signaled; the dequeueRequestBuffers() thread wakes up, dequeues the filled buffer (VIDIOC_DQBUF) and calls completeRequest(index_request->second, 0), which executes the callback mCallbackOps->process_capture_result(mCallbackOps, &result), handing the buffer contents to the user-space callback. This forms the data path for taking a photo.

(3). When the user program sends CaptureRequests continuously, the same path becomes the video-recording data flow. In Android user space the choice between photo and video is a mode setting: a single request corresponds to a photo, a continuous stream of requests to video recording.

The completeRequest call chain is:

--> void Camera::completeRequest(std::shared_ptr<CaptureRequest> request, int err)
----> sendResult(request);
------> void Camera::sendResult(std::shared_ptr<CaptureRequest> request)
--------> mCallbackOps->process_capture_result(mCallbackOps, &result);

camera stream on/off

The camera stream is turned on in enqueueRequestBuffers() via device_->StreamOn(), and turned off in V4L2Camera::disconnect() via device_->StreamOff().

To sum up the V4L2CameraHAL driver:
(1). v4l2_camera_hal.cpp is the entry point of the camera HAL. It mainly acts as a wrapper: after filling in the camera_module_t HAL_MODULE_INFO_SYM structure and opening the camera through gCameraHAL.openDevice(module, name, device), all remaining functionality is implemented in v4l2_camera.cpp.

(2). v4l2_camera.cpp is the concrete camera HAL implementation. It inherits the common camera attributes and methods from camera.cpp, and pulls in two classes: V4L2Wrapper, which manages the camera device interface, and CaptureRequest, which manages the photo/video interaction with the user.

(3). The data flow triggered by a user-space capture request is, for now, the deduction above; we will check it for discrepancies when analyzing the media.camera daemon. Judging from how the underlying v4l2 camera interfaces are used, this deduction should be essentially correct.

Reposted from blog.csdn.net/weixin_38387929/article/details/126164446