V4L2 video output implementation process

Function to implement

The device side captures data from the camera sensor and transmits it to the host over the UVC protocol; at the same time, the host sends control commands to the device side.

Reference source code: https://github.com/wlhe/uvc-gadget

1. Concept

UVC: the USB Video Class, a USB device class (and corresponding driver) for video devices. It is used to support USB video devices; any camera with a standard USB interface can be supported by it.

V4L2: the video capture and output framework under Linux. It unifies the driver interface and provides an API to the application layer.

The relationship between UVC and V4L2: V4L2 is used to manage UVC devices and provides video-related application programming interfaces. Many open-source programs on Linux support V4L2; common ones include FFmpeg, OpenCV, Skype, MPlayer, and so on.

2. Specific process

2.1 Open the video device

Everything in Linux is a file, so the first step is to open the device file through which the video data will be output, for example /dev/video18:

dev->fd = open("/dev/video18", O_RDWR | O_NONBLOCK);

The device file is opened in non-blocking mode. When streaming starts, the driver first sends the initialization data already in its buffers out through the device to the host, and then waits for video data to fill the buffers.
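As a minimal sketch, the open step with basic error handling might look like the following (struct uvc_device and the exact error handling are assumptions in the style of the reference code, not copied from it):

/* needs <fcntl.h>, <stdio.h>, <string.h>, <errno.h> */
static int uvc_open(struct uvc_device *dev, const char *devname)
{
    dev->fd = open(devname, O_RDWR | O_NONBLOCK);
    if (dev->fd < 0) {
        printf("Unable to open %s: %s (%d)\n", devname, strerror(errno), errno);
        return -1;
    }
    return 0;
}

Called, for example, as uvc_open(dev, "/dev/video18").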

2.2 Get the properties of the video device

struct v4l2_capability cap;

ret = ioctl(dev->fd, VIDIOC_QUERYCAP, &cap);

Use the VIDIOC_QUERYCAP command to obtain the attributes of the current device and check which functions it supports. Here we mainly care about if (cap.capabilities & V4L2_CAP_VIDEO_OUTPUT), i.e. whether the device supports video output.
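A sketch of the capability check with error handling (the additional V4L2_CAP_STREAMING test is not mentioned in the text above, but is commonly performed since streaming I/O is used later):

struct v4l2_capability cap;
memset(&cap, 0, sizeof cap);
if (ioctl(dev->fd, VIDIOC_QUERYCAP, &cap) < 0) {
    printf("VIDIOC_QUERYCAP failed: %s (%d)\n", strerror(errno), errno);
    return -1;
}
if (!(cap.capabilities & V4L2_CAP_VIDEO_OUTPUT)) {
    printf("%s is not a video output device\n", (char *)cap.card);
    return -1;
}
if (!(cap.capabilities & V4L2_CAP_STREAMING)) {
    printf("%s does not support streaming I/O\n", (char *)cap.card);
    return -1;
}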

2.3 Other configurations

2.3.1 Query the device output formats (fmt)

struct v4l2_fmtdesc fmtdesc;
fmtdesc.index = 0;  // index of the format to query
fmtdesc.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(dev->fd, VIDIOC_ENUM_FMT, &fmtdesc);

Use VIDIOC_ENUM_FMT to enumerate all image formats supported by the device.
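In practice VIDIOC_ENUM_FMT is called in a loop, incrementing index until the driver returns an error (EINVAL), so that every supported format is listed; a sketch continuing from the code above:

memset(&fmtdesc, 0, sizeof fmtdesc);
fmtdesc.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
while (ioctl(dev->fd, VIDIOC_ENUM_FMT, &fmtdesc) == 0) {
    /* description is a human-readable name, pixelformat the fourcc code */
    printf("format %u: %s (fourcc 0x%08x)\n",
           fmtdesc.index, (char *)fmtdesc.description, fmtdesc.pixelformat);
    fmtdesc.index++;
}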

2.3.2 Query the cropping capability (cropcap) and set the output image crop

Query:

struct v4l2_cropcap cropcap;
memset(&cropcap, 0, sizeof(cropcap));
cropcap.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(dev->fd, VIDIOC_CROPCAP, &cropcap);  // query the driver's cropping capability

Set:

struct v4l2_crop crop;
crop.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
crop.c.top = g_display_top;        // 0
crop.c.left = g_display_left;      // 0
crop.c.width = g_display_width;    // display width
crop.c.height = g_display_height;  // display height
ioctl(dev->fd, VIDIOC_S_CROP, &crop);

2.3.3 Set the output video format (fmt)

struct v4l2_format fmt;
memset(&fmt, 0, sizeof(fmt));
fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
fmt.fmt.pix.width = g_in_width;
fmt.fmt.pix.height = g_in_height;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_UYVY;
fmt.fmt.pix.bytesperline = g_in_width;
fmt.fmt.pix.priv = 0;
fmt.fmt.pix.sizeimage = 0;
ioctl(dev->fd, VIDIOC_S_FMT, &fmt);
ioctl(dev->fd, VIDIOC_G_FMT, &fmt);

Set the video data format of the video device, such as the width, height and pixel format of the video frames (e.g. JPEG or YUYV).

If the video device driver does not support the requested image format, it will modify the struct v4l2_format variable to a format the device does support. Therefore, after setting the video format, the program should read the struct v4l2_format variable back (VIDIOC_G_FMT) to obtain the format actually in effect.
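To verify what the driver actually accepted, the format can be read back and compared against the request; a sketch:

if (ioctl(dev->fd, VIDIOC_G_FMT, &fmt) < 0) {
    printf("VIDIOC_G_FMT failed: %s (%d)\n", strerror(errno), errno);
    return -1;
}
if (fmt.fmt.pix.width != g_in_width ||
    fmt.fmt.pix.height != g_in_height ||
    fmt.fmt.pix.pixelformat != V4L2_PIX_FMT_UYVY) {
    /* the driver adjusted the request; use the returned values from here on */
    printf("driver adjusted the format to %ux%u, pixelformat 0x%08x\n",
           fmt.fmt.pix.width, fmt.fmt.pix.height, fmt.fmt.pix.pixelformat);
}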

2.4 Initialize and subscribe to UVC events

2.4.1 Initialize the streaming control probe and commit data structures

struct uvc_streaming_control probe;
struct uvc_streaming_control commit;

Data structure uvc_streaming_control:
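The structure follows the UVC probe/commit control layout (34 bytes for UVC 1.1) and is declared in the kernel UAPI header <linux/usb/video.h>; the definition below is reproduced for reference and should be checked against your own kernel headers:

struct uvc_streaming_control {
    __u16 bmHint;
    __u8  bFormatIndex;
    __u8  bFrameIndex;
    __u32 dwFrameInterval;
    __u16 wKeyFrameRate;
    __u16 wPFrameRate;
    __u16 wCompQuality;
    __u16 wCompWindowSize;
    __u16 wDelay;
    __u32 dwMaxVideoFrameSize;
    __u32 dwMaxPayloadTransferSize;
    __u32 dwClockFrequency;
    __u8  bmFramingInfo;
    __u8  bPreferedVersion;
    __u8  bMinVersion;
    __u8  bMaxVersion;
} __attribute__((__packed__));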

2.4.2 Subscribe to events

struct v4l2_event_subscription sub;

memset(&sub, 0, sizeof sub);
sub.type = UVC_EVENT_CONNECT;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

sub.type = UVC_EVENT_DISCONNECT;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

sub.type = UVC_EVENT_SETUP;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

sub.type = UVC_EVENT_DATA;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

sub.type = UVC_EVENT_STREAMON;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

sub.type = UVC_EVENT_STREAMOFF;
ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub);

The UVC events connect, disconnect, setup, data, streamon and streamoff are registered (subscribed) with the driver through VIDIOC_SUBSCRIBE_EVENT; the host then interacts with the V4L2 application through these UVC events.
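The six subscriptions above can also be written as a loop with error checking; a sketch (the helper name is illustrative, and the UVC_EVENT_* constants come from the UVC gadget header used by the reference code):

static const unsigned int uvc_event_types[] = {
    UVC_EVENT_CONNECT, UVC_EVENT_DISCONNECT, UVC_EVENT_SETUP,
    UVC_EVENT_DATA, UVC_EVENT_STREAMON, UVC_EVENT_STREAMOFF,
};

static int uvc_events_subscribe(struct uvc_device *dev)
{
    struct v4l2_event_subscription sub;
    unsigned int i;

    for (i = 0; i < sizeof(uvc_event_types) / sizeof(uvc_event_types[0]); i++) {
        memset(&sub, 0, sizeof sub);
        sub.type = uvc_event_types[i];
        if (ioctl(dev->fd, VIDIOC_SUBSCRIBE_EVENT, &sub) < 0) {
            printf("failed to subscribe to event 0x%x: %s\n",
                   uvc_event_types[i], strerror(errno));
            return -1;
        }
    }
    return 0;
}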

2.5 Loop and wait for events to trigger

fd_set efds;
FD_ZERO(&efds);
FD_SET(dev->fd, &efds);
ret = select(dev->fd + 1, NULL, NULL, &efds, &tv);

Block until the descriptor in efds becomes ready (UVC events are delivered through select()'s exception set).

struct v4l2_event v4l2_event;
struct uvc_event *uvc_event = (void *)&v4l2_event.u.data;
ret = ioctl(dev->fd, VIDIOC_DQEVENT, &v4l2_event);

When a subscribed event arrives, the descriptor becomes ready and VIDIOC_DQEVENT dequeues the event information.
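The dequeued event is then dispatched by type; a sketch in the style of the reference code (uvc_events_process_setup and uvc_events_process_data mirror the reference source, while the streamon/streamoff handler names are illustrative placeholders for the handling described in sections 2.6.3 and 2.6.4):

struct uvc_request_data resp;

memset(&resp, 0, sizeof resp);
resp.length = -EL2HLT;   /* default response: stall unexpected setup requests */

switch (v4l2_event.type) {
case UVC_EVENT_CONNECT:
case UVC_EVENT_DISCONNECT:
    break;
case UVC_EVENT_SETUP:
    uvc_events_process_setup(dev, &uvc_event->req, &resp);
    ioctl(dev->fd, UVCIOC_SEND_RESPONSE, &resp);   /* answer the setup request */
    break;
case UVC_EVENT_DATA:
    uvc_events_process_data(dev, &uvc_event->data);
    break;
case UVC_EVENT_STREAMON:
    uvc_handle_streamon(dev);    /* see section 2.6.3 */
    break;
case UVC_EVENT_STREAMOFF:
    uvc_handle_streamoff(dev);   /* see section 2.6.4 */
    break;
}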

2.6 Main Event Handling

2.6.1 UVC_EVENT_SETUP

Process the UVC_EVENT_SETUP event requested by the host. Setup requests fall into standard USB requests and class-specific requests:

USB_TYPE_STANDARD: no processing is required here; the driver answers standard requests itself, so the application does not need to supply any additional information.

USB_TYPE_CLASS: class-specific requests are divided into control and streaming requests, corresponding to the two UVC interfaces, VideoControl (VC) and VideoStreaming (VS).

1) VS interface processing (reference function: uvc_events_process_streaming)

Stream control requests (refer to Section 4.3 of the UVC 1.5 class specification):

Parameter description:

  • bmRequestType: the request type, as defined in the standard USB protocol
  • bRequest: the request code, defined in Table A-8
  • CS: the Control Selector, defined in Table A-16, e.g. probe or commit
  • wIndex: the high byte is 0 and the low byte is the interface number
  • wLength and Data: the same as in the standard USB protocol, i.e. the data length and the data itself

The parameters are set through a negotiation between the host and the USB device; the negotiation proceeds roughly as follows:

Flow description (a code sketch of the corresponding probe/commit handling follows the list):

  1. The host first sends its desired settings to the USB device (PROBE)
  2. The device adjusts the host's requested settings to within its own capabilities and returns them to the host (PROBE)
  3. If the host finds the returned settings acceptable, it commits them (COMMIT)
  4. The device applies the committed parameters as the current settings of the streaming interface
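A trimmed sketch of how the VS probe/commit requests can be answered, loosely following uvc_events_process_streaming in the reference code (fill_streaming_control_defaults is a hypothetical helper that fills in the values the device actually supports):

static void uvc_events_process_streaming(struct uvc_device *dev, uint8_t req,
                                         uint8_t cs, struct uvc_request_data *resp)
{
    struct uvc_streaming_control *ctrl;

    if (cs != UVC_VS_PROBE_CONTROL && cs != UVC_VS_COMMIT_CONTROL)
        return;

    ctrl = (struct uvc_streaming_control *)&resp->data;
    resp->length = sizeof *ctrl;   /* 34 bytes for the UVC 1.1 layout */

    switch (req) {
    case UVC_SET_CUR:
        /* remember which control the host is writing; the actual parameter
         * block arrives in the following UVC_EVENT_DATA event */
        dev->control = cs;
        break;
    case UVC_GET_CUR:
        /* return the currently negotiated settings */
        if (cs == UVC_VS_PROBE_CONTROL)
            memcpy(ctrl, &dev->probe, sizeof *ctrl);
        else
            memcpy(ctrl, &dev->commit, sizeof *ctrl);
        break;
    case UVC_GET_MIN:
    case UVC_GET_MAX:
    case UVC_GET_DEF:
        /* report the limits/defaults the device can support */
        fill_streaming_control_defaults(dev, ctrl);
        break;
    case UVC_GET_RES:
        memset(ctrl, 0, sizeof *ctrl);
        break;
    case UVC_GET_LEN:
        resp->data[0] = 0x00;   /* length of the probe/commit block: 34 bytes */
        resp->data[1] = 0x22;
        resp->length = 2;
        break;
    case UVC_GET_INFO:
        resp->data[0] = 0x03;   /* control supports GET and SET */
        resp->length = 1;
        break;
    }
}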

2) VC interface processing (reference function: uvc_events_process_control)

The VC interface contains a number of units and terminals that control the camera; for example, white balance, exposure and so on can be set through the Processing Unit.

Reference structure definitions:

struct uvc_camera_terminal camera_terminal;
struct uvc_processing_unit processing_unit;
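Beyond the descriptor structures above, the following is an illustrative sketch (not the reference implementation) of how a Processing Unit control such as brightness could be answered; UVC_PU_BRIGHTNESS_CONTROL and the request codes come from <linux/usb/video.h>, and dev->brightness is a hypothetical 16-bit field:

static void uvc_events_process_control(struct uvc_device *dev, uint8_t req,
                                       uint8_t cs, struct uvc_request_data *resp)
{
    switch (cs) {
    case UVC_PU_BRIGHTNESS_CONTROL:
        switch (req) {
        case UVC_GET_CUR:        /* report the current brightness (2 bytes) */
            resp->length = 2;
            memcpy(resp->data, &dev->brightness, resp->length);
            break;
        case UVC_SET_CUR:        /* the new value arrives in the following UVC_EVENT_DATA */
            dev->control = cs;
            resp->length = 2;
            break;
        case UVC_GET_INFO:       /* control supports GET and SET */
            resp->data[0] = 0x03;
            resp->length = 1;
            break;
        }
        break;
    default:
        break;
    }
}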

2.6.2 UVC_EVENT_DATA

Process the UVC_EVENT_DATA event sent by the host: it carries the parameter data.

VS requests: probe and commit

VC requests: exposure, white balance, brightness, contrast, etc.

The local parameters are updated according to the request.
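One possible shape of this handling for the probe/commit case is sketched below (dev->control is the selector remembered when the SET_CUR setup request arrived):

static void uvc_events_process_data(struct uvc_device *dev, struct uvc_request_data *data)
{
    struct uvc_streaming_control *target;
    unsigned int len;

    switch (dev->control) {
    case UVC_VS_PROBE_CONTROL:
        target = &dev->probe;
        break;
    case UVC_VS_COMMIT_CONTROL:
        target = &dev->commit;
        break;
    default:
        return;
    }

    len = sizeof *target;
    if (data->length > 0 && (unsigned int)data->length < len)
        len = data->length;

    /* copy the host-supplied parameters, then clamp/adjust them to what the
     * device supports (frame interval, max frame size, and so on) */
    memcpy(target, data->data, len);
}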

2.6.3 UVC_EVENT_STREAMON

Process the UVC_EVENT_STREAMON event requested by the host: allocate the buffer space and start video stream transmission.

1) Allocate the buffer space

Operating systems generally divide memory into user space and kernel space, managed by the application and by the operating system respectively. The application can access its own memory addresses directly, while kernel space holds the code and data used by the kernel and cannot be accessed directly from user space.

The buffers managed by V4L2 live in kernel space, so the user cannot access that memory directly; some form of address mapping is required. The two main approaches are memory mapping and user pointers.

Memory mapping (V4L2_MEMORY_MMAP): map the device's buffer memory into the application's address space so the application can work on the device memory directly.

User pointer mode (V4L2_MEMORY_USERPTR): the buffers are allocated by the application itself; this requires setting the memory field of v4l2_requestbuffers to V4L2_MEMORY_USERPTR.

struct v4l2_requestbuffers {
    __u32 count;
    __u32 type;      /* enum v4l2_buf_type */
    __u32 memory;    /* enum v4l2_memory */
    __u32 reserved[2];
};

count: the number of buffers to apply for

memory: either V4L2_MEMORY_MMAP or V4L2_MEMORY_USERPTR

Here we mainly talk about V4L2_MEMORY_USERPTR mode:

a. Request the frame buffers for the video stream data from the driver.

struct v4l2_requestbuffers rb;
memset(&rb, 0, sizeof rb);
rb.count = nbufs;
rb.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
rb.memory = V4L2_MEMORY_USERPTR;  // or V4L2_MEMORY_MMAP
ret = ioctl(dev->fd, VIDIOC_REQBUFS, &rb);

b. The application allocates the memory itself and queues it to the driver with VIDIOC_QBUF

struct v4l2_buffer buf;

for (i = 0; i < dev->nbufs; ++i)
{
    memset(&buf, 0, sizeof buf);
    buf.index = i;
    buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
    buf.memory = V4L2_MEMORY_USERPTR;
    buf.length = MAX_BUFFER_SIZE;
    buf.m.userptr = (unsigned long)dev->dummy_buf[i].start;  // user-space buffer address
    ret = ioctl(dev->fd, VIDIOC_QBUF, &buf);
}

The VIDIOC_QBUF ioctl enqueues the buffers one by one at the tail of the driver's input queue. Several frame buffers are requested; nbufs is generally no less than 3.

Note: testing shows that the user-allocated memory at dev->dummy_buf[i].start should be initialized to 0; otherwise the video data cannot be transmitted normally.
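A sketch of allocating the user-space frame buffers before queuing them (dev->dummy_buf with start/length members follows the reference code; calloc is used so that the memory starts zeroed, as the note above requires):

/* needs <stdlib.h>; the helper name is illustrative */
static int uvc_alloc_user_buffers(struct uvc_device *dev)
{
    unsigned int i;

    dev->dummy_buf = calloc(dev->nbufs, sizeof(*dev->dummy_buf));
    if (!dev->dummy_buf)
        return -1;

    for (i = 0; i < dev->nbufs; ++i) {
        dev->dummy_buf[i].length = MAX_BUFFER_SIZE;
        /* calloc() zero-fills the frame memory */
        dev->dummy_buf[i].start = calloc(1, MAX_BUFFER_SIZE);
        if (!dev->dummy_buf[i].start)
            return -1;
    }
    return 0;
}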

Key fields of struct v4l2_buffer:

__u32 index;     // set by the application, used only to identify which buffer this is
__u32 type;
__u32 bytesused; // number of bytes of data in the buffer; set by the driver for an input stream and by the application for an output stream
__u32 flags;     // flag bits describing which queue the buffer is in (e.g. V4L2_BUF_FLAG_QUEUED, V4L2_BUF_FLAG_DONE), whether it is a keyframe, and so on
__u32 memory;    // V4L2_MEMORY_MMAP / V4L2_MEMORY_USERPTR / V4L2_MEMORY_OVERLAY
union m:
  __u32 offset;          // when memory is V4L2_MEMORY_MMAP: offset of the buffer from the start of device memory, used as the mmap() offset argument; otherwise not meaningful to the application
  unsigned long userptr; // when memory is V4L2_MEMORY_USERPTR: a pointer to the buffer in the application's virtual memory, set by the application
__u32 length;    // size of the buffer

The driver manages two buffer queues internally: an input queue and an output queue.

For a capture device, once a buffer in the input queue has been filled with data, it is automatically moved to the output queue; the application calls VIDIOC_DQBUF to take it out and process the data, then calls VIDIOC_QBUF to put the buffer back into the input queue.

For an output device, once a buffer has been displayed (read out), it is automatically moved to the output queue and waits for VIDIOC_DQBUF; after the application refills it with data, VIDIOC_QBUF puts it back into the input queue.

2) Start the video stream transmission.

int type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(dev->fd, VIDIOC_STREAMON, &type);

2.6.4 UVC_EVENT_STREAMOFF

Handle the host's UVC_EVENT_STREAMOFF request:

a. Stop video stream output

int type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
ioctl(dev->fd, VIDIOC_STREAMOFF, &type);

b. Release the buffers (request 0 buffers)

nbufs = 0;  // set the buffer count to 0
memset(&rb, 0, sizeof rb);
rb.count = nbufs;
rb.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
rb.memory = V4L2_MEMORY_USERPTR;  // or V4L2_MEMORY_MMAP
ret = ioctl(dev->fd, VIDIOC_REQBUFS, &rb);

2.7 Video data output

2.7.1 Loop waiting for the write-ready signal

a. When the descriptor in wfds becomes ready, a buffer can be dequeued and written with new video data:

ret = select(dev->fd + 1, NULL, &wfds, NULL, &tv);  // wait on the write set
if (ret > 0)
{
    ret = uvc_video_process(dev);
}

b. Dequeue a writable buffer from the video buffer queue and copy the video data into it:

struct v4l2_buffer buf;
memset(&buf, 0, sizeof buf);
buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
buf.memory = V4L2_MEMORY_USERPTR;  // or V4L2_MEMORY_MMAP
buf.length = MAX_BUFFER_SIZE;
ret = ioctl(dev->fd, VIDIOC_DQBUF, &buf);
uvc_video_fill_buffer(dev, &buf);

c. Queue the buffer back into the driver's input queue, where it waits to be read out and displayed by the host:

ret = ioctl(dev->fd, VIDIOC_QBUF, &buf);
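The fill step referenced above copies one frame of video into the dequeued buffer and records how many bytes are valid; a hedged sketch (the frame source dev->imgdata / dev->imgsize is an assumption about where the application keeps the current frame):

static void uvc_video_fill_buffer(struct uvc_device *dev, struct v4l2_buffer *buf)
{
    unsigned int size = dev->imgsize;   /* size of one frame, e.g. width * height * 2 for UYVY */

    if (size > buf->length)
        size = buf->length;             /* never write past the end of the queued buffer */

    memcpy((void *)buf->m.userptr, dev->imgdata, size);
    buf->bytesused = size;              /* tell the driver how much valid data the buffer holds */
}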

2.8 Close the video device

close(dev->fd);
free(dev->mem);  // free the memory allocated by the application

Origin: blog.csdn.net/h1527820835/article/details/124366709