V4L2 camera application programming


The ALPHA/Mini I.MX6U development board supports a variety of cameras, including Punctual Atom's ov5640 (5 MP), ov2640 (2 MP) and ov7725 (without FIFO, 0.3 MP). The board's factory system supports these cameras out of the box; in addition, we can also use a USB camera, simply by plugging it into the USB
interface on the development board. In this chapter, we will learn camera application programming under Linux.

Introduction to V4L2

You can see that the title of this chapter is "V4L2 Camera Application Programming", so what is V4L2? Readers familiar with camera driver development under Linux should already know.
V4L2 is short for Video for Linux Two (Video4Linux2). It is the driver framework for video devices in the Linux kernel, and it provides a unified interface specification between video device drivers and the application layer. So what is a video device? A very typical video device is a video capture device, such as the various cameras; there are other types of video devices as well, which will not be introduced here.
Devices registered through the V4L2 driver framework get corresponding device node files in the /dev/ directory of the Linux system. The node is usually named videoX (where X is a number: 0, 1, 2, 3...), and each videoX device file represents one video device. The application configures and uses the device by performing I/O operations on its videoX device file; the following sections introduce this in detail.
Figure 25.1.1 Video class device node

V4L2 Camera Application

The V4L2 device driver framework provides a set of unified, standard interface specifications to the application layer; applications use the camera by programming against this interface. For a camera device, the programming flow is as follows:

  1. The first is to open the camera device;
  2. Query the properties or functions of the device;
  3. Set device parameters, such as pixel format, frame size, and frame rate;
  4. Apply for frame buffer and memory mapping;
  5. Framebuffer enqueuing;
  6. Start video collection;
  7. Dequeue the frame buffer and process the collected data;
  8. After processing, enqueue the frame buffer again, and repeat;
  9. End collection.
    The flow chart is as follows:
    (flow chart omitted)
    As can be seen from the flow chart, almost all operations on the camera are performed through ioctl(), with different V4L2 commands (request parameters) requesting different operations. These commands are defined in the header file linux/videodev2.h, so camera application code needs to include linux/videodev2.h. This header file declares many data structures and macro definitions related to camera application programming; you can open it and take a look.
    In the videodev2.h header file, many ioctl() instructions are defined, which are provided in the form of macro definitions (VIDIOC_XXX), as shown below:
/*
 * I O C T L C O D E S F O R V I D E O D E V I C E S
 *
 */
#define VIDIOC_QUERYCAP _IOR('V', 0, struct v4l2_capability)
#define VIDIOC_RESERVED _IO('V', 1)
#define VIDIOC_ENUM_FMT _IOWR('V', 2, struct v4l2_fmtdesc)
#define VIDIOC_G_FMT _IOWR('V', 4, struct v4l2_format)
#define VIDIOC_S_FMT _IOWR('V', 5, struct v4l2_format)
#define VIDIOC_REQBUFS _IOWR('V', 8, struct v4l2_requestbuffers)
#define VIDIOC_QUERYBUF _IOWR('V', 9, struct v4l2_buffer)
#define VIDIOC_G_FBUF _IOR('V', 10, struct v4l2_framebuffer)
#define VIDIOC_S_FBUF _IOW('V', 11, struct v4l2_framebuffer)
#define VIDIOC_OVERLAY _IOW('V', 14, int)
#define VIDIOC_QBUF _IOWR('V', 15, struct v4l2_buffer)
#define VIDIOC_EXPBUF _IOWR('V', 16, struct v4l2_exportbuffer)
#define VIDIOC_DQBUF _IOWR('V', 17, struct v4l2_buffer)
#define VIDIOC_STREAMON _IOW('V', 18, int)
#define VIDIOC_STREAMOFF _IOW('V', 19, int)
#define VIDIOC_G_PARM _IOWR('V', 21, struct v4l2_streamparm)
#define VIDIOC_S_PARM _IOWR('V', 22, struct v4l2_streamparm)
#define VIDIOC_G_STD _IOR('V', 23, v4l2_std_id)
#define VIDIOC_S_STD _IOW('V', 24, v4l2_std_id)
#define VIDIOC_ENUMSTD _IOWR('V', 25, struct v4l2_standard)
#define VIDIOC_ENUMINPUT _IOWR('V', 26, struct v4l2_input)
#define VIDIOC_G_CTRL _IOWR('V', 27, struct v4l2_control)
#define VIDIOC_S_CTRL _IOWR('V', 28, struct v4l2_control)
#define VIDIOC_G_TUNER _IOWR('V', 29, struct v4l2_tuner)
#define VIDIOC_S_TUNER _IOW('V', 30, struct v4l2_tuner)
#define VIDIOC_G_AUDIO _IOR('V', 33, struct v4l2_audio)
#define VIDIOC_S_AUDIO _IOW('V', 34, struct v4l2_audio)
#define VIDIOC_QUERYCTRL _IOWR('V', 36, struct v4l2_queryctrl)
#define VIDIOC_QUERYMENU _IOWR('V', 37, struct v4l2_querymenu)
#define VIDIOC_G_INPUT _IOR('V', 38, int)
#define VIDIOC_S_INPUT _IOWR('V', 39, int)
#define VIDIOC_G_EDID _IOWR('V', 40, struct v4l2_edid)
#define VIDIOC_S_EDID _IOWR('V', 41, struct v4l2_edid)
#define VIDIOC_G_OUTPUT _IOR('V', 46, int)
#define VIDIOC_S_OUTPUT _IOWR('V', 47, int)
#define VIDIOC_ENUMOUTPUT _IOWR('V', 48, struct v4l2_output)
#define VIDIOC_G_AUDOUT _IOR('V', 49, struct v4l2_audioout)
#define VIDIOC_S_AUDOUT _IOW('V', 50, struct v4l2_audioout)
#define VIDIOC_G_MODULATOR _IOWR('V', 54, struct v4l2_modulator)
#define VIDIOC_S_MODULATOR _IOW('V', 55, struct v4l2_modulator)
#define VIDIOC_G_FREQUENCY _IOWR('V', 56, struct v4l2_frequency)
#define VIDIOC_S_FREQUENCY _IOW('V', 57, struct v4l2_frequency)
#define VIDIOC_CROPCAP _IOWR('V', 58, struct v4l2_cropcap)
#define VIDIOC_G_CROP _IOWR('V', 59, struct v4l2_crop)
#define VIDIOC_S_CROP _IOW('V', 60, struct v4l2_crop)
#define VIDIOC_G_JPEGCOMP _IOR('V', 61, struct v4l2_jpegcompression)
#define VIDIOC_S_JPEGCOMP _IOW('V', 62, struct v4l2_jpegcompression)
#define VIDIOC_QUERYSTD _IOR('V', 63, v4l2_std_id)
#define VIDIOC_TRY_FMT _IOWR('V', 64, struct v4l2_format)
#define VIDIOC_ENUMAUDIO _IOWR('V', 65, struct v4l2_audio)
#define VIDIOC_ENUMAUDOUT _IOWR('V', 66, struct v4l2_audioout)
#define VIDIOC_G_PRIORITY _IOR('V', 67, __u32) /* enum v4l2_priority */
#define VIDIOC_S_PRIORITY _IOW('V', 68, __u32) /* enum v4l2_priority */
#define VIDIOC_G_SLICED_VBI_CAP _IOWR('V', 69, struct v4l2_sliced_vbi_cap)
#define VIDIOC_LOG_STATUS _IO('V', 70)
#define VIDIOC_G_EXT_CTRLS _IOWR('V', 71, struct v4l2_ext_controls)
#define VIDIOC_S_EXT_CTRLS _IOWR('V', 72, struct v4l2_ext_controls)
#define VIDIOC_TRY_EXT_CTRLS _IOWR('V', 73, struct v4l2_ext_controls)
#define VIDIOC_ENUM_FRAMESIZES _IOWR('V', 74, struct v4l2_frmsizeenum)
#define VIDIOC_ENUM_FRAMEINTERVALS _IOWR('V', 75, struct v4l2_frmivalenum)
#define VIDIOC_G_ENC_INDEX _IOR('V', 76, struct v4l2_enc_idx)
#define VIDIOC_ENCODER_CMD _IOWR('V', 77, struct v4l2_encoder_cmd)
#define VIDIOC_TRY_ENCODER_CMD _IOWR('V', 78, struct v4l2_encoder_cmd)

Each command macro requests a different operation from the device. As can be seen above, each macro (_IOWR/_IOR/_IOW) also carries a struct type, such as struct v4l2_capability or struct v4l2_fmtdesc; this is the type of the third argument that must be passed to ioctl(). Before calling ioctl(), define a variable of that type, then pass its address as the third argument, for example:

struct v4l2_capability cap;
……
ioctl(fd, VIDIOC_QUERYCAP, &cap);
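
One practical note: ioctl() can fail with errno set to EINTR if the process receives a signal during the call. Many V4L2 programs therefore wrap it in a small retry helper; a sketch follows, with the hypothetical name xioctl (not part of the V4L2 API):

#include <errno.h>
#include <sys/ioctl.h>

/* Retry ioctl() while it is interrupted by a signal (EINTR) */
static int xioctl(int fd, unsigned long request, void *arg)
{
    int ret;
    do {
        ret = ioctl(fd, request, arg);
    } while (-1 == ret && EINTR == errno);
    return ret;
}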

In actual application programming, not all of these commands are used. For a video capture device, the commonly used ones are the commands covered in this chapter: VIDIOC_QUERYCAP, VIDIOC_ENUM_FMT, VIDIOC_ENUM_FRAMESIZES, VIDIOC_ENUM_FRAMEINTERVALS, VIDIOC_G_FMT/VIDIOC_S_FMT, VIDIOC_G_PARM/VIDIOC_S_PARM, VIDIOC_REQBUFS, VIDIOC_QUERYBUF, VIDIOC_QBUF/VIDIOC_DQBUF and VIDIOC_STREAMON/VIDIOC_STREAMOFF.

Open the camera

The device node corresponding to a video device is /dev/videoX, where X is a number, usually starting from 0. The first step in camera application programming is to open the device: call open() to obtain a file descriptor fd, as follows:

int fd = -1;
/* Open the camera */
fd = open("/dev/video0", O_RDWR);
if (0 > fd)
{
    fprintf(stderr, "open error: %s: %s\n", "/dev/video0", strerror(errno));
    return -1;
}

When opening a device file, you need to use O_RDWR to specify read and write permissions.
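
Optionally, the device can also be opened in non-blocking mode. With O_NONBLOCK set, a later VIDIOC_DQBUF call returns -1 with errno == EAGAIN when no filled frame buffer is available, instead of blocking; a minimal sketch:

/* Non-blocking open: VIDIOC_DQBUF will then fail with EAGAIN
 * instead of blocking when no frame is ready yet */
int fd = open("/dev/video0", O_RDWR | O_NONBLOCK);
if (0 > fd)
{
    perror("open error");
    return -1;
}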

Query the properties/capabilities/functions of the device

After opening the device, you need to query its properties, for example to determine whether it is a video capture device. How? Through the ioctl() function, naturally. ioctl() is a very important system call for device files: all operations that configure a device or obtain a device's configuration are done with ioctl(), as we have already seen in previous chapters. For ordinary files, on the other hand, ioctl() is almost never used.
To query the properties of the device, the command used is VIDIOC_QUERYCAP, as shown below:

ioctl(int fd, VIDIOC_QUERYCAP, struct v4l2_capability *cap);

This ioctl() call fills in a struct v4l2_capability object. The struct v4l2_capability data structure describes some properties of the device; it is defined as follows:

struct v4l2_capability
{
    __u8 driver[16];    /* driver name */
    __u8 card[32];      /* device name */
    __u8 bus_info[32];  /* bus name */
    __u32 version;      /* version information */
    __u32 capabilities; /* capabilities of the device */
    __u32 device_caps;
    __u32 reserved[3];  /* reserved fields */
};

What we care about most is the capabilities field, which describes the capabilities of the device. Its value can be any one of the following, or the bitwise OR of several of them:

/* Values for 'capabilities' field */
#define V4L2_CAP_VIDEO_CAPTURE 0x00000001        /* Is a video capture device */
#define V4L2_CAP_VIDEO_OUTPUT 0x00000002         /* Is a video output device */
#define V4L2_CAP_VIDEO_OVERLAY 0x00000004        /* Can do video overlay */
#define V4L2_CAP_VBI_CAPTURE 0x00000010          /* Is a raw VBI capture device */
#define V4L2_CAP_VBI_OUTPUT 0x00000020           /* Is a raw VBI output device */
#define V4L2_CAP_SLICED_VBI_CAPTURE 0x00000040   /* Is a sliced VBI capture device */
#define V4L2_CAP_SLICED_VBI_OUTPUT 0x00000080    /* Is a sliced VBI output device */
#define V4L2_CAP_RDS_CAPTURE 0x00000100          /* RDS data capture */
#define V4L2_CAP_VIDEO_OUTPUT_OVERLAY 0x00000200 /* Can do video output overlay */
#define V4L2_CAP_HW_FREQ_SEEK 0x00000400         /* Can do hardware frequency seek */
#define V4L2_CAP_RDS_OUTPUT 0x00000800           /* Is an RDS encoder */
/* Is a video capture device that supports multiplanar formats */
#define V4L2_CAP_VIDEO_CAPTURE_MPLANE 0x00001000
/* Is a video output device that supports multiplanar formats */
#define V4L2_CAP_VIDEO_OUTPUT_MPLANE 0x00002000
/* Is a video mem-to-mem device that supports multiplanar formats */
#define V4L2_CAP_VIDEO_M2M_MPLANE 0x00004000
/* Is a video mem-to-mem device */
#define V4L2_CAP_VIDEO_M2M 0x00008000
#define V4L2_CAP_TUNER 0x00010000          /* has a tuner */
#define V4L2_CAP_AUDIO 0x00020000          /* has audio support */
#define V4L2_CAP_RADIO 0x00040000          /* is a radio device */
#define V4L2_CAP_MODULATOR 0x00080000      /* has a modulator */
#define V4L2_CAP_SDR_CAPTURE 0x00100000    /* Is a SDR capture device */
#define V4L2_CAP_EXT_PIX_FORMAT 0x00200000 /* Supports the extended pixel format */
#define V4L2_CAP_SDR_OUTPUT 0x00400000     /* Is a SDR output device */
#define V4L2_CAP_META_CAPTURE 0x00800000   /* Is a metadata capture device */
#define V4L2_CAP_READWRITE 0x01000000      /* read/write systemcalls */
#define V4L2_CAP_ASYNCIO 0x02000000        /* async I/O */
#define V4L2_CAP_STREAMING 0x04000000      /* streaming I/O ioctls */
#define V4L2_CAP_TOUCH 0x10000000          /* Is a touch device */
#define V4L2_CAP_DEVICE_CAPS 0x80000000    /* sets device capabilities field */

These macros are defined in the videodev2.h header file; you can check them out yourself. For a camera device, its capabilities
field must contain V4L2_CAP_VIDEO_CAPTURE, indicating that it supports video capture. So we can determine
whether a device is a camera by checking whether the capabilities field contains V4L2_CAP_VIDEO_CAPTURE, for example:

struct v4l2_capability vcap;
/* Query device capabilities */
ioctl(fd, VIDIOC_QUERYCAP, &vcap);
/* Check whether it is a video capture device */
if (!(V4L2_CAP_VIDEO_CAPTURE & vcap.capabilities))
{
    fprintf(stderr, "Error: No capture video device!\n");
    return -1;
}
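
Besides the capability check, the other fields returned by VIDIOC_QUERYCAP are useful for identifying the device. Also note the device_caps field: according to the flag list above, when capabilities contains V4L2_CAP_DEVICE_CAPS, device_caps describes the capabilities of this particular /dev/videoX node, and is then the better field to check. A sketch:

struct v4l2_capability vcap;
__u32 caps;
if (0 > ioctl(fd, VIDIOC_QUERYCAP, &vcap))
{
    perror("ioctl error");
    return -1;
}
/* Identification fields filled in by the driver */
printf("driver: %s\ncard: %s\nbus_info: %s\n",
       vcap.driver, vcap.card, vcap.bus_info);
/* Prefer device_caps when the driver sets V4L2_CAP_DEVICE_CAPS */
caps = (vcap.capabilities & V4L2_CAP_DEVICE_CAPS) ?
           vcap.device_caps : vcap.capabilities;
if (!(caps & V4L2_CAP_VIDEO_CAPTURE))
{
    fprintf(stderr, "Error: No capture video device!\n");
    return -1;
}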

Set frame format and frame rate

A camera usually supports multiple pixel formats, such as RGB, YUYV and the compressed MJPEG format, and also supports multiple capture resolutions, such as 640*480, 320*240 and 1280*720; in addition, the same resolution may support several different capture frame rates (e.g. 15fps, 30fps). Therefore, the application usually needs to set these parameters before starting video capture.
a) Enumerate all pixel formats supported by the camera: VIDIOC_ENUM_FMT
To set the pixel format, you must first know which pixel formats the device supports. How? Use the VIDIOC_ENUM_FMT command:

ioctl(int fd, VIDIOC_ENUM_FMT, struct v4l2_fmtdesc *fmtdesc);

VIDIOC_ENUM_FMT enumerates the pixel formats supported by the device. Calling ioctl() requires passing in a struct v4l2_fmtdesc * pointer, and ioctl() writes the obtained data into the object pointed to by fmtdesc. The struct v4l2_fmtdesc structure describes a pixel format; its definition is as follows:

/*
 * F O R M A T E N U M E R A T I O N
 */
struct v4l2_fmtdesc
{
    __u32 index; /* Format number */
    __u32 type;  /* enum v4l2_buf_type */
    __u32 flags;
    __u8 description[32]; /* Description string */
    __u32 pixelformat;    /* Format fourcc */
    __u32 reserved[4];
};

index is a number: it must be set to 0 before enumeration begins, and is incremented by 1 after each ioctl() call. One ioctl() call obtains the information of only one pixel format, so if the device supports several pixel formats, ioctl() must be called in a loop, controlled by index: the index starts at 0 and increases by 1 after each call, until an ioctl() call fails, which means all pixel formats have been enumerated. In other words, each call retrieves the pixel format identified by the current index number.
The description field is a short string describing the pixelformat pixel format.
The pixelformat field is the corresponding pixel format code, an unsigned 32-bit value. Each pixel format is represented by one u32 value, as shown below:

/* RGB formats */
#define V4L2_PIX_FMT_RGB332 v4l2_fourcc('R', 'G', 'B', '1')  /* 8 RGB-3-3-2 */
#define V4L2_PIX_FMT_RGB444 v4l2_fourcc('R', '4', '4', '4')  /* 16 xxxxrrrr ggggbbbb */
#define V4L2_PIX_FMT_ARGB444 v4l2_fourcc('A', 'R', '1', '2') /* 16 aaaarrrr ggggbbbb */
#define V4L2_PIX_FMT_XRGB444 v4l2_fourcc('X', 'R', '1', '2') /* 16 xxxxrrrr ggggbbbb */
#define V4L2_PIX_FMT_RGB555 v4l2_fourcc('R', 'G', 'B', 'O')  /* 16 RGB-5-5-5 */
#define V4L2_PIX_FMT_ARGB555 v4l2_fourcc('A', 'R', '1', '5') /* 16 ARGB-1-5-5-5 */
#define V4L2_PIX_FMT_XRGB555 v4l2_fourcc('X', 'R', '1', '5') /* 16 XRGB-1-5-5-5 */
#define V4L2_PIX_FMT_RGB565 v4l2_fourcc('R', 'G', 'B', 'P')  /* 16 RGB-5-6-5 */
......
/* Grey formats */
#define V4L2_PIX_FMT_GREY v4l2_fourcc('G', 'R', 'E', 'Y') /* 8 Greyscale */
#define V4L2_PIX_FMT_Y4 v4l2_fourcc('Y', '0', '4', ' ')   /* 4 Greyscale */
#define V4L2_PIX_FMT_Y6 v4l2_fourcc('Y', '0', '6', ' ')   /* 6 Greyscale */
#define V4L2_PIX_FMT_Y10 v4l2_fourcc('Y', '1', '0', ' ')  /* 10 Greyscale */
    ......
/* Luminance+Chrominance formats */
#define V4L2_PIX_FMT_YUYV v4l2_fourcc('Y', 'U', 'Y', 'V') /* 16 YUV 4:2:2 */
#define V4L2_PIX_FMT_YYUV v4l2_fourcc('Y', 'Y', 'U', 'V') /* 16 YUV 4:2:2 */
#define V4L2_PIX_FMT_YVYU v4l2_fourcc('Y', 'V', 'Y', 'U') /* 16 YVU 4:2:2 */
#define V4L2_PIX_FMT_UYVY v4l2_fourcc('U', 'Y', 'V', 'Y') /* 16 YUV 4:2:2 */
    ......
/* compressed formats */
#define V4L2_PIX_FMT_MJPEG v4l2_fourcc('M', 'J', 'P', 'G') /* Motion-JPEG */
#define V4L2_PIX_FMT_JPEG v4l2_fourcc('J', 'P', 'E', 'G')  /* JFIF JPEG */
#define V4L2_PIX_FMT_DV v4l2_fourcc('d', 'v', 's', 'd')    /* 1394 */
#define V4L2_PIX_FMT_MPEG v4l2_fourcc('M', 'P', 'E', 'G')  /* MPEG-1/2/4 Multiplexed */

The above is only part of the list; space is limited and not all pixel formats can be shown. You can check the
videodev2.h header file yourself.
You can see the v4l2_fourcc macro in each definition: each format code is actually a u32 value synthesized by this macro from the four given characters.
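
For reference, the macro is defined in videodev2.h as follows; it packs the four characters into a single 32-bit value, least-significant byte first, so for example 'R','G','B','P' yields the code for RGB565:

#define v4l2_fourcc(a, b, c, d) \
	((__u32)(a) | ((__u32)(b) << 8) | ((__u32)(c) << 16) | ((__u32)(d) << 24))
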
The type field specifies the buffer type, indicating which function of the device we want to enumerate pixel formats for, because some devices may support both video capture and video output among other functions. The possible values of the type field are as follows:

enum v4l2_buf_type
{
    V4L2_BUF_TYPE_VIDEO_CAPTURE = 1, // video capture
    V4L2_BUF_TYPE_VIDEO_OUTPUT = 2,  // video output
    V4L2_BUF_TYPE_VIDEO_OVERLAY = 3,
    V4L2_BUF_TYPE_VBI_CAPTURE = 4,
    V4L2_BUF_TYPE_VBI_OUTPUT = 5,
    V4L2_BUF_TYPE_SLICED_VBI_CAPTURE = 6,
    V4L2_BUF_TYPE_SLICED_VBI_OUTPUT = 7,
    V4L2_BUF_TYPE_VIDEO_OUTPUT_OVERLAY = 8,
    V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE = 9,
    V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE = 10,
    V4L2_BUF_TYPE_SDR_CAPTURE = 11,
    V4L2_BUF_TYPE_SDR_OUTPUT = 12,
    V4L2_BUF_TYPE_META_CAPTURE = 13,
    /* Deprecated, do not use */
    V4L2_BUF_TYPE_PRIVATE = 0x80,
};

The type field needs to set its value before calling ioctl(). For the camera, the type field needs to be set to
V4L2_BUF_TYPE_VIDEO_CAPTURE, specifying that what we are going to get is the pixel format of the video capture.
Usage example is as follows:

struct v4l2_fmtdesc fmtdesc;
/* Enumerate all pixel formats supported by the camera, with descriptions */
fmtdesc.index = 0;
fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
while (0 == ioctl(fd, VIDIOC_ENUM_FMT, &fmtdesc))
{
    printf("fmt: %s <0x%x>\n", fmtdesc.description, fmtdesc.pixelformat);
    fmtdesc.index++;
}
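
Since pixelformat is such a fourcc value, it can also be printed as four readable characters rather than a raw hex number; a small sketch of the idea:

/* Decode the fourcc pixel format into its four ASCII characters */
unsigned int pf = fmtdesc.pixelformat;
printf("fmt: %s <%c%c%c%c>\n", fmtdesc.description,
       pf & 0xff, (pf >> 8) & 0xff, (pf >> 16) & 0xff, (pf >> 24) & 0xff);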

b) Enumerate all video capture resolutions supported by the camera: VIDIOC_ENUM_FRAMESIZES
Use the VIDIOC_ENUM_FRAMESIZES command to enumerate all video capture resolutions supported by the device. The usage is as follows:

ioctl(int fd, VIDIOC_ENUM_FRAMESIZES, struct v4l2_frmsizeenum *frmsize);

Calling ioctl() requires passing in a struct v4l2_frmsizeenum * pointer. ioctl() will write the obtained data into the object pointed to by the frmsize pointer. The struct v4l2_frmsizeenum structure describes information related to the video frame size. Let’s take a look at the definition of the struct v4l2_frmsizeenum structure:

struct v4l2_frmsizeenum
{
    __u32 index;        /* Frame size number */
    __u32 pixel_format; /* Pixel format */
    __u32 type;         /* Frame size type reported by the driver */
    union
    {                   /* Frame size */
        struct v4l2_frmsize_discrete discrete;
        struct v4l2_frmsize_stepwise stepwise;
    };
    __u32 reserved[2]; /* Reserved space for future use */
};
struct v4l2_frmsize_discrete
{
    __u32 width;  /* Frame width [pixel] */
    __u32 height; /* Frame height [pixel] */
};

The index field has the same meaning as the index field of struct v4l2_fmtdesc. A camera usually supports several capture resolutions; one ioctl() call obtains only one frame size, so if the device supports several frame sizes, ioctl() must be called in a loop, controlled by index.
The pixel_format field specifies the pixel format whose frame sizes we want to enumerate; it, together with index, must be set before calling ioctl(). Note that, unlike struct v4l2_fmtdesc, the type field here is not a buffer type: it is filled in by the driver with the frame size type (V4L2_FRMSIZE_TYPE_DISCRETE, V4L2_FRMSIZE_TYPE_CONTINUOUS or V4L2_FRMSIZE_TYPE_STEPWISE) and indicates which member of the union is valid.
When the driver returns type == V4L2_FRMSIZE_TYPE_DISCRETE, the discrete member takes effect: a struct v4l2_frmsize_discrete variable describing the frame size (the width and height of the video frame), i.e. the capture resolution.
For example, to enumerate all video frame sizes supported by the camera's RGB565 pixel format:

struct v4l2_frmsizeenum frmsize;
frmsize.index = 0;
frmsize.pixel_format = V4L2_PIX_FMT_RGB565;
while (0 == ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &frmsize))
{
    printf("frame_size<%d*%d>\n", frmsize.discrete.width, frmsize.discrete.height);
    frmsize.index++;
}
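
If you want to be strict about the type field returned by the driver, check it before reading the union; a defensive sketch:

struct v4l2_frmsizeenum frmsize = {0};
frmsize.pixel_format = V4L2_PIX_FMT_RGB565;
while (0 == ioctl(fd, VIDIOC_ENUM_FRAMESIZES, &frmsize))
{
    if (V4L2_FRMSIZE_TYPE_DISCRETE == frmsize.type)
    {
        /* a single fixed frame size */
        printf("frame_size<%u*%u>\n",
               frmsize.discrete.width, frmsize.discrete.height);
    }
    else
    {
        /* STEPWISE/CONTINUOUS: the driver reports a range of sizes */
        printf("frame_size<%u*%u .. %u*%u>\n",
               frmsize.stepwise.min_width, frmsize.stepwise.min_height,
               frmsize.stepwise.max_width, frmsize.stepwise.max_height);
        break; /* one entry fully describes the range */
    }
    frmsize.index++;
}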

c) Enumerate all video capture frame rates supported by the camera: VIDIOC_ENUM_FRAMEINTERVALS
For the same video frame size, the camera may support several different capture frame rates, such as the common 15fps, 30fps, 45fps and 60fps. Use the VIDIOC_ENUM_FRAMEINTERVALS command to enumerate all frame rates supported by the device, as follows:

ioctl(int fd, VIDIOC_ENUM_FRAMEINTERVALS, struct v4l2_frmivalenum *frmival);

Calling ioctl() requires passing in a struct v4l2_frmivalenum * pointer. ioctl() will write the obtained data into the object pointed to by the frmival pointer. The struct v4l2_frmivalenum structure describes information related to the video frame rate. Let’s take a look at the definition of the struct v4l2_frmivalenum structure:

struct v4l2_frmivalenum
{
    __u32 index;        /* Frame format index */
    __u32 pixel_format; /* Pixel format */
    __u32 width;        /* Frame width */
    __u32 height;       /* Frame height */
    __u32 type;         /* Frame interval type reported by the driver */
    union
    {                   /* Frame interval */
        struct v4l2_fract discrete;
        struct v4l2_frmival_stepwise stepwise;
    };
    __u32 reserved[2]; /* Reserved space for future use */
};
struct v4l2_fract
{
    __u32 numerator;   // numerator
    __u32 denominator; // denominator
};

The index field has the same meaning as in struct v4l2_frmsizeenum, and the type field is likewise filled in by the driver (V4L2_FRMIVAL_TYPE_DISCRETE etc.) to indicate which union member is valid.
The width and height fields specify the video frame size, and the pixel_format field specifies the pixel format; index, pixel_format, width and height all need to be set before calling ioctl().
When the driver returns type == V4L2_FRMIVAL_TYPE_DISCRETE, the discrete member takes effect: a struct v4l2_fract variable describing the frame interval as a fraction. In struct v4l2_fract, numerator is the numerator and denominator the denominator; numerator / denominator is the capture period (how many seconds it takes to capture one image), so the frame rate (the number of images captured per second) equals denominator / numerator.
Usage example: enumerate all capture frame rates supported for the 640*480 frame size in the RGB565 pixel format:

struct v4l2_frmivalenum frmival;
frmival.index = 0;
frmival.pixel_format = V4L2_PIX_FMT_RGB565;
frmival.width = 640;
frmival.height = 480;
while (0 == ioctl(fd, VIDIOC_ENUM_FRAMEINTERVALS, &frmival))
{
    printf("Frame interval<%ufps> ",
           frmival.discrete.denominator / frmival.discrete.numerator);
    frmival.index++;
}

d) View or set the current format: VIDIOC_G_FMT, VIDIOC_S_FMT
The commands introduced so far only enumerate the pixel formats, frame sizes and frame rates the device supports; below we introduce how to set these parameters.
First, the VIDIOC_G_FMT command can be used to view the device's current format. The usage is as follows:

int ioctl(int fd, VIDIOC_G_FMT, struct v4l2_format *fmt);

Calling ioctl() requires passing in a struct v4l2_format * pointer. ioctl() will write the obtained data into the object pointed to by the fmt pointer. The struct v4l2_format structure describes the format-related information.
Use the VIDIOC_S_FMT command to set the format of the device. The usage is as follows:

int ioctl(int fd, VIDIOC_S_FMT, struct v4l2_format *fmt);

ioctl() will use the data of the object pointed to by fmt to set the format of the device. Let's take a look at the definition of the v4l2_format structure:

struct v4l2_format
{
    __u32 type;
    union
    {
        struct v4l2_pix_format pix;           /* V4L2_BUF_TYPE_VIDEO_CAPTURE */
        struct v4l2_pix_format_mplane pix_mp; /* V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE */
        struct v4l2_window win;               /* V4L2_BUF_TYPE_VIDEO_OVERLAY */
        struct v4l2_vbi_format vbi;           /* V4L2_BUF_TYPE_VBI_CAPTURE */
        struct v4l2_sliced_vbi_format sliced; /* V4L2_BUF_TYPE_SLICED_VBI_CAPTURE */
        struct v4l2_sdr_format sdr;           /* V4L2_BUF_TYPE_SDR_CAPTURE */
        struct v4l2_meta_format meta;         /* V4L2_BUF_TYPE_META_CAPTURE */
        __u8 raw_data[200];                   /* user-defined */
    } fmt;
};

The type field still has the same meaning as the type field in the structure introduced earlier. Whether it is getting the format or setting the format, its value needs to be set before calling the ioctl() function.
Next is a union. When type is set to V4L2_BUF_TYPE_VIDEO_CAPTURE, the pix variable takes effect. It is a struct v4l2_pix_format type variable that records information related to the video frame format, as shown below:

struct v4l2_pix_format
{
    __u32 width;        // frame width (in pixels)
    __u32 height;       // frame height (in pixels)
    __u32 pixelformat;  // pixel format
    __u32 field;        /* enum v4l2_field */
    __u32 bytesperline; /* for padding, zero if unused */
    __u32 sizeimage;
    __u32 colorspace; /* enum v4l2_colorspace */
    __u32 priv;       /* private data, depends on pixelformat */
    __u32 flags;      /* format flags (V4L2_PIX_FMT_FLAG_*) */
    union
    {
        /* enum v4l2_ycbcr_encoding */
        __u32 ycbcr_enc;
        /* enum v4l2_hsv_encoding */
        __u32 hsv_enc;
    };
    __u32 quantization; /* enum v4l2_quantization */
    __u32 xfer_func;    /* enum v4l2_xfer_func */
};

The colorspace field describes a color space, and the possible values ​​are as follows:

enum v4l2_colorspace
{
    /*
     * Default colorspace, i.e. let the driver figure it out.
     * Can only be used with video capture.
     */
    V4L2_COLORSPACE_DEFAULT = 0,
    /* SMPTE 170M: used for broadcast NTSC/PAL SDTV */
    V4L2_COLORSPACE_SMPTE170M = 1,
    /* Obsolete pre-1998 SMPTE 240M HDTV standard, superseded by Rec 709 */
    V4L2_COLORSPACE_SMPTE240M = 2,
    /* Rec.709: used for HDTV */
    V4L2_COLORSPACE_REC709 = 3,
    /*
     * Deprecated, do not use. No driver will ever return this. This was
     * based on a misunderstanding of the bt878 datasheet.
     */
    V4L2_COLORSPACE_BT878 = 4,
    /*
     * NTSC 1953 colorspace. This only makes sense when dealing with
     * really, really old NTSC recordings. Superseded by SMPTE 170M.
     */
    V4L2_COLORSPACE_470_SYSTEM_M = 5,
    /*
     * EBU Tech 3213 PAL/SECAM colorspace. This only makes sense when
     * dealing with really old PAL/SECAM recordings. Superseded by
     * SMPTE 170M.
     */
    V4L2_COLORSPACE_470_SYSTEM_BG = 6,
    /*
     * Effectively shorthand for V4L2_COLORSPACE_SRGB, V4L2_YCBCR_ENC_601
     * and V4L2_QUANTIZATION_FULL_RANGE. To be used for (Motion-)JPEG.
     */
    V4L2_COLORSPACE_JPEG = 7,
    /* For RGB colorspaces such as produces by most webcams. */
    V4L2_COLORSPACE_SRGB = 8,
    /* AdobeRGB colorspace */
    V4L2_COLORSPACE_ADOBERGB = 9,
    /* BT.2020 colorspace, used for UHDTV. */
    V4L2_COLORSPACE_BT2020 = 10,
    /* Raw colorspace: for RAW unprocessed images */
    V4L2_COLORSPACE_RAW = 11,
    /* DCI-P3 colorspace, used by cinema projectors */
    V4L2_COLORSPACE_DCI_P3 = 12,
};

When setting the format with the VIDIOC_S_FMT command, the user usually does not need to specify the colorspace; the underlying driver will determine the appropriate colorspace from the pixel format pixelformat.
Example: Get the current format and set the format

struct v4l2_format fmt;
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (0 > ioctl(fd, VIDIOC_G_FMT, &fmt)) // get the current format
{
    perror("ioctl error");
    return -1;
}
printf("width:%d, height:%d format:%d\n", fmt.fmt.pix.width,
       fmt.fmt.pix.height, fmt.fmt.pix.pixelformat);
fmt.fmt.pix.width = 800;
fmt.fmt.pix.height = 480;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
if (0 > ioctl(fd, VIDIOC_S_FMT, &fmt)) // set the format
{
    perror("ioctl error");
    return -1;
}

When setting the format with VIDIOC_S_FMT, the parameters actually applied are not necessarily the ones we specified. For example, above we requested a frame width of 800 and height of 480, but the camera may not support the 800*480 frame size, or may not support the V4L2_PIX_FMT_RGB565 pixel format. In such cases the underlying driver will not use our parameters as-is: it will modify them, e.g. change 800*480 to 640*480 (assuming the camera supports that resolution). Therefore, when the ioctl() call returns, we must also examine the returned struct v4l2_format variable to check whether the parameters we specified actually took effect:

struct v4l2_format fmt;
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 800;
fmt.fmt.pix.height = 480;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
if (0 > ioctl(fd, VIDIOC_S_FMT, &fmt)) // set the format
{
    perror("ioctl error");
    return -1;
}
if (800 != fmt.fmt.pix.width ||
    480 != fmt.fmt.pix.height)
{
    do_something();
}
if (V4L2_PIX_FMT_RGB565 != fmt.fmt.pix.pixelformat)
{
    do_something();
}
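
The command list at the start of this chapter also contains VIDIOC_TRY_FMT, which negotiates a format exactly like VIDIOC_S_FMT but without changing the device state; it can be used to probe what the driver would actually give us. A sketch:

struct v4l2_format fmt = {0};
fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
fmt.fmt.pix.width = 800;
fmt.fmt.pix.height = 480;
fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565;
/* Like VIDIOC_S_FMT, but the driver only adjusts fmt; the
 * device's current format is left untouched */
if (0 > ioctl(fd, VIDIOC_TRY_FMT, &fmt))
{
    perror("ioctl error");
    return -1;
}
printf("driver would give: %u*%u\n", fmt.fmt.pix.width, fmt.fmt.pix.height);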

e) Set or obtain the current stream-type-dependent parameters: VIDIOC_G_PARM, VIDIOC_S_PARM
Use the VIDIOC_G_PARM command to obtain the device's stream-type-dependent parameters, as follows:

ioctl(int fd, VIDIOC_G_PARM, struct v4l2_streamparm *streamparm);

Calling ioctl() requires passing in a struct v4l2_streamparm * pointer; ioctl() writes the obtained data into the object pointed to by streamparm. The struct v4l2_streamparm structure describes stream-type-dependent parameters; its contents are introduced below.
Use the VIDIOC_S_PARM instruction to set the stream type related parameters of the device. The usage is as follows:

ioctl(int fd, VIDIOC_S_PARM, struct v4l2_streamparm *streamparm);

ioctl() will use the data of the object pointed to by streamparm to set the stream type related parameters of the device. Let's take a look at the definition of the struct v4l2_streamparm structure:

struct v4l2_streamparm
{
    __u32 type; /* enum v4l2_buf_type */
    union
    {
        struct v4l2_captureparm capture;
        struct v4l2_outputparm output;
        __u8 raw_data[200]; /* user-defined */
    } parm;
};
struct v4l2_captureparm
{
    __u32 capability;               /* Supported modes */
    __u32 capturemode;              /* Current mode */
    struct v4l2_fract timeperframe; /* Time per frame in seconds */
    __u32 extendedmode;             /* Driver-specific extensions */
    __u32 readbuffers;              /* # of buffers for read */
    __u32 reserved[4];
};
struct v4l2_fract
{
    __u32 numerator;   /* numerator */
    __u32 denominator; /* denominator */
};

The type field is the same as before and will not be introduced again; its value must be set before calling ioctl().
When type == V4L2_BUF_TYPE_VIDEO_CAPTURE, the capture member of the union takes effect: a struct v4l2_captureparm variable describing capture-related parameters, such as the capture frame rate. The definition of this structure was given above.
In struct v4l2_captureparm, the capability field indicates the modes the device supports; its value is the bitwise OR of any of the following flags:

/* Flags for 'capability' and 'capturemode' fields */
#define V4L2_MODE_HIGHQUALITY 0x0001 /* High quality imaging mode */
#define V4L2_CAP_TIMEPERFRAME 0x1000 /* timeperframe field is supported */

capturemode indicates the current mode and uses the same set of flag values as the capability field.
The timeperframe field is a struct v4l2_fract variable describing the device's capture period, which was introduced earlier. VIDIOC_S_PARM can be used to set the capture period, i.e. the capture frame rate. However, many devices do not allow the application layer to set the timeperframe field: only when the capability field contains V4L2_CAP_TIMEPERFRAME does the device support it, so that the application can set the capture frame rate.
Therefore, before setting it, first obtain the device's stream parameters with the VIDIOC_G_PARM command and check whether the capability field contains V4L2_CAP_TIMEPERFRAME, as shown below:

struct v4l2_streamparm streamparm;
streamparm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
ioctl(v4l2_fd, VIDIOC_G_PARM, &streamparm);
/* Check whether frame rate setting is supported */
if (V4L2_CAP_TIMEPERFRAME & streamparm.parm.capture.capability)
{
    streamparm.parm.capture.timeperframe.numerator = 1;
    streamparm.parm.capture.timeperframe.denominator = 30; // 30fps
    if (0 > ioctl(v4l2_fd, VIDIOC_S_PARM, &streamparm)) // set the parameters
    {
        fprintf(stderr, "ioctl error: VIDIOC_S_PARM: %s\n", strerror(errno));
        return -1;
    }
}
else
    fprintf(stderr, "Frame rate setting is not supported\n");

Apply for frame buffer and memory mapping

There are two ways to read camera data. One is the read method, reading the captured data directly with the read() system call; the other is the streaming method. Section 25.2.2 introduced using the VIDIOC_QUERYCAP command to query the device's properties, obtaining a struct v4l2_capability object whose capabilities field records the device's capabilities: when this field contains V4L2_CAP_READWRITE, the device supports reading data with read I/O; when it contains V4L2_CAP_STREAMING, the device supports streaming I/O. In practice, most devices support reading data via streaming I/O. To use streaming I/O, we need to request frame buffers from the device and map them into the application process's address space.
After completing the configuration of the device, you can request the frame buffers. As the name implies, a frame buffer is a buffer that stores one frame of image data. Use the VIDIOC_REQBUFS command to request frame buffers, as follows:

ioctl(int fd, VIDIOC_REQBUFS, struct v4l2_requestbuffers *reqbuf);

Calling ioctl() requires passing in a struct v4l2_requestbuffers * pointer. The struct v4l2_requestbuffers structure describes the frame buffer request, and ioctl() makes the request according to the information filled into the object pointed to by reqbuf. Its definition is as follows:

/*
 * M E M O R Y - M A P P I N G   B U F F E R S
 */
struct v4l2_requestbuffers
{
    __u32 count;  // number of frame buffers requested
    __u32 type;   /* enum v4l2_buf_type */
    __u32 memory; /* enum v4l2_memory */
    __u32 reserved[2];
};

The type field has the same meaning as the type field mentioned earlier and will not be introduced again. Its value needs to be set before calling ioctl().
The count field is used to specify the number of frame buffers to apply for.
Possible values ​​for the memory field are as follows:

enum v4l2_memory {
	V4L2_MEMORY_MMAP = 1,
	V4L2_MEMORY_USERPTR = 2,
	V4L2_MEMORY_OVERLAY = 3,
	V4L2_MEMORY_DMABUF = 4,
};

Usually set memory to V4L2_MEMORY_MMAP! Usage examples are as follows:

struct v4l2_requestbuffers reqbuf;
reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
reqbuf.count = 3; // request 3 frame buffers
reqbuf.memory = V4L2_MEMORY_MMAP;
if (0 > ioctl(fd, VIDIOC_REQBUFS, &reqbuf))
{
    fprintf(stderr, "ioctl error: VIDIOC_REQBUFS: %s\n", strerror(errno));
    return -1;
}

The streaming I/O method maintains a queue of frame buffers in kernel space. The driver writes each frame read from the camera into a frame buffer in the queue, then writes the next frame into the next frame buffer, and so on. When the application needs to read a frame, it takes a filled frame buffer out of the queue; this is called dequeuing. When the application has finished processing the frame, it puts the frame buffer back into the kernel's queue; this is called enqueuing. This is easy to understand, and there are many analogous examples in everyday life, so no example is given here.
So the process of reading image data is really a process of continuously dequeuing and enqueuing, as shown in the figure below:
Figure 25.2.2 The process of reading image data at the application layer

Map the frame buffers into the process address space
The frame buffers requested with the VIDIOC_REQBUFS command are essentially maintained by the kernel; the application cannot read their data directly and must map them into user space. The application then reads the data in the mapped areas, which is in fact the data in the kernel-maintained frame buffers.

Before mapping, you need to query the frame buffer information, such as the length, offset and other information of the frame buffer. Use the VIDIOC_QUERYBUF
instruction to query. The usage method is as follows:

ioctl(int fd, VIDIOC_QUERYBUF, struct v4l2_buffer *buf);

Calling ioctl() requires passing in a struct v4l2_buffer * pointer. The struct v4l2_buffer structure describes the frame buffer information. ioctl()
will write the obtained data to the object pointed to by the buf pointer. Let's take a look at the definition of the struct v4l2_buffer structure:

struct v4l2_buffer
{
    __u32 index; // buffer number
    __u32 type;  // type
    __u32 bytesused;
    __u32 flags;
    __u32 field;
    struct timeval timestamp;
    struct v4l2_timecode timecode;
    __u32 sequence;
    /* memory location */
    __u32 memory;
    union
    {
        __u32 offset; // offset
        unsigned long userptr;
        struct v4l2_plane *planes;
        __s32 fd;
    } m;
    __u32 length; // buffer length
    __u32 reserved2;
    __u32 reserved;
};

The index field is a number: each of the requested frame buffers has a number, starting from 0. One ioctl() call obtains the information of only the frame buffer with the specified number, so to obtain the information of several frame buffers, ioctl() must be called several times, incrementing index by 1 each time to point to the next frame buffer.
The type field has the same meaning as before and must be set before calling ioctl().
The memory field has the same meaning as the memory field of struct v4l2_requestbuffers, and must also be set before calling ioctl().
The length field is the length of the frame buffer, and the offset field in the union is the frame buffer's offset. How to understand this offset? When the application requests frame buffers with VIDIOC_REQBUFS, the kernel allocates a block of memory whose size equals the number of frame buffers requested times the size of each frame buffer; each frame buffer corresponds to a segment of this memory, and so each has an address offset.
The number of frame buffers should not be too large, especially on embedded systems where memory is relatively tight: too many frame buffers will occupy too much system memory.
Usage example: After applying for the frame buffer, call mmap() to map the frame buffer to the user address space:

struct v4l2_requestbuffers reqbuf;
struct v4l2_buffer buf;
void *frm_base[3];
reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
reqbuf.count = 3; // request 3 frame buffers
reqbuf.memory = V4L2_MEMORY_MMAP;
/* Request 3 frame buffers */
if (0 > ioctl(fd, VIDIOC_REQBUFS, &reqbuf))
{
    fprintf(stderr, "ioctl error: VIDIOC_REQBUFS: %s\n", strerror(errno));
    return -1;
}
/* Set up the memory mappings */
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
for (buf.index = 0; buf.index < 3; buf.index++)
{
    ioctl(fd, VIDIOC_QUERYBUF, &buf);
    frm_base[buf.index] = mmap(NULL, buf.length,
                               PROT_READ | PROT_WRITE, MAP_SHARED,
                               fd, buf.m.offset);
    if (MAP_FAILED == frm_base[buf.index])
    {
        perror("mmap error");
        return -1;
    }
}

In the above example, the three frame buffers are mapped to user space, and the start address of each frame buffer's mapped area is saved in the frm_base array; when reading the data captured by the camera later, we simply read these mapped areas.

Enqueue the frame buffers

Use the VIDIOC_QBUF command to put a frame buffer into the kernel's frame buffer queue, as follows:

ioctl(int fd, VIDIOC_QBUF, struct v4l2_buffer *buf);

Before calling ioctl(), you need to set the memory and type fields of the struct v4l2_buffer object. A usage example follows:
put the three frame buffers into the kernel's frame buffer queue (the enqueue operation):

struct v4l2_buffer buf;
/* Enqueue operation */
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
for (buf.index = 0; buf.index < 3; buf.index++)
{
    if (0 > ioctl(fd, VIDIOC_QBUF, &buf))
    {
        perror("ioctl error");
        return -1;
    }
}

Start video collection

After the three frame buffers have been enqueued, you can turn on the camera and start image acquisition. Use the VIDIOC_STREAMON command to start video collection (and VIDIOC_STREAMOFF to stop it), as follows:

ioctl(int fd, VIDIOC_STREAMON, int *type);  // start video capture
ioctl(int fd, VIDIOC_STREAMOFF, int *type); // stop video capture

type is actually an enum v4l2_buf_type * pointer; typical usage is as follows:

enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (0 > ioctl(fd, VIDIOC_STREAMON, &type)) {
	perror("ioctl error");
	return -1;
}

Read data, process data

After video capture has been started, you can read the data. As said before, each frame of image data captured by the camera can be read directly from the frame buffers' mapped areas in user space. Before reading the data, the frame buffer must be taken out of the kernel's frame buffer queue; this operation is called dequeuing (what was enqueued must naturally be dequeued). This theory was introduced in detail above.
Use the VIDIOC_DQBUF command to perform the dequeue operation, as follows:

ioctl(int fd, VIDIOC_DQBUF, struct v4l2_buffer *buf);

After a frame buffer has been dequeued, its data can be read and then processed, for example displaying the captured image on the LCD screen. After the data has been processed, the frame buffer is enqueued again, the next frame buffer in the queue is dequeued, its data read and processed, and so on.
Usage examples are as follows:

struct v4l2_buffer buf;
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
for (;;)
{
    for (buf.index = 0; buf.index < 3; buf.index++)
    {
        ioctl(fd, VIDIOC_DQBUF, &buf); // dequeue
        // read the frame buffer's mapped area to get one frame of data
        // process this frame of data
        do_something();
        // after processing, enqueue the frame buffer again and read the next frame
        ioctl(fd, VIDIOC_QBUF, &buf);
    }
}
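
By default VIDIOC_DQBUF blocks until a frame is available. To avoid blocking indefinitely if the camera stops producing frames, one common approach is to wait on the descriptor with select() first; a sketch (fd and buf are from the examples above, and sys/select.h is needed):

/* Wait up to 2 seconds for the driver to fill a frame buffer
 * before dequeuing it */
fd_set fds;
struct timeval tv;
int ret;
FD_ZERO(&fds);
FD_SET(fd, &fds);
tv.tv_sec = 2;
tv.tv_usec = 0;
ret = select(fd + 1, &fds, NULL, NULL, &tv);
if (0 >= ret)
{
    fprintf(stderr, "select: %s\n", ret ? strerror(errno) : "timeout");
    return -1;
}
ioctl(fd, VIDIOC_DQBUF, &buf); // a frame is ready; this will not block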

End video collection

To end video collection, use the VIDIOC_STREAMOFF command, whose usage was introduced above. An example follows:

enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
if (0 > ioctl(fd, VIDIOC_STREAMOFF, &type)) {
	perror("ioctl error");
	return -1;
}
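
For completeness, a sketch of the full teardown after capture ends. frm_base[] is from the mapping example earlier; buf_length[] is a hypothetical array assumed to have been filled with buf.length when VIDIOC_QUERYBUF was called (the earlier example used buf.length directly), so adapt the names to your own code:

/* Teardown: stop streaming, release the mmap() mappings, close the device */
enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
int i;
ioctl(fd, VIDIOC_STREAMOFF, &type); // stop video capture
for (i = 0; i < 3; i++)
    munmap(frm_base[i], buf_length[i]); // undo each mmap()
close(fd); // release the device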

V4L2 camera application programming practice

Through the previous introduction, we already know how to do camera application programming. It is actually not difficult, and basically follows a fixed flow: open the device, query the device, set the format, request frame buffers, map memory, enqueue, start video collection, dequeue, and process the captured data. Although there are many steps, each is easy to understand, and each is basically implemented through ioctl() with a different request command.
In this section we will write a camera application. The author hopes everyone can complete it independently; with the previous introduction, I believe you can, referring to the sample code below as appropriate. The source code of this routine is at: Development board CD -> 11, Linux C application programming routine source code -> 25_v4l2_camera -> v4l2_camera.c.

#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <string.h>
#include <errno.h>
#include <sys/mman.h>
#include <linux/videodev2.h>
#include <linux/fb.h>
#define FB_DEV "/dev/fb0"   // LCD device node
#define FRAMEBUFFER_COUNT 3 // number of frame buffers
/*** camera pixel format and its description ***/
typedef struct camera_format
{
    unsigned char description[32]; // description string
    unsigned int pixelformat;      // pixel format
} cam_fmt;
/*** information describing one frame buffer ***/
typedef struct cam_buf_info
{
    unsigned short *start; // frame buffer start address
    unsigned long length;  // frame buffer length
} cam_buf_info;
static int width;                          // LCD width
static int height;                         // LCD height
static unsigned short *screen_base = NULL; // LCD framebuffer base address
static int fb_fd = -1;                     // LCD device file descriptor
static int v4l2_fd = -1;                   // camera device file descriptor
static cam_buf_info buf_infos[FRAMEBUFFER_COUNT];
static cam_fmt cam_fmts[10];
static int frm_width, frm_height; // video frame width and height
static int fb_dev_init(void)
{
    struct fb_var_screeninfo fb_var = {0};
    struct fb_fix_screeninfo fb_fix = {0};
    unsigned long screen_size;
    /* Open the framebuffer device */
    fb_fd = open(FB_DEV, O_RDWR);
    if (0 > fb_fd)
    {
        fprintf(stderr, "open error: %s: %s\n", FB_DEV, strerror(errno));
        return -1;
    }
    /* Get framebuffer device information */
    ioctl(fb_fd, FBIOGET_VSCREENINFO, &fb_var);
    ioctl(fb_fd, FBIOGET_FSCREENINFO, &fb_fix);
    screen_size = fb_fix.line_length * fb_var.yres;
    width = fb_var.xres;
    height = fb_var.yres;
    /* Memory mapping */
    screen_base = mmap(NULL, screen_size, PROT_READ | PROT_WRITE, MAP_SHARED, fb_fd, 0);
    if (MAP_FAILED == (void *)screen_base)
    {
        perror("mmap error");
        close(fb_fd);
        return -1;
    }
    /* Clear the LCD background to white */
    memset(screen_base, 0xFF, screen_size);
    return 0;
}
static int v4l2_dev_init(const char *device)
{
    struct v4l2_capability cap = {0};
    /* Open the camera */
    v4l2_fd = open(device, O_RDWR);
    if (0 > v4l2_fd)
    {
        fprintf(stderr, "open error: %s: %s\n", device, strerror(errno));
        return -1;
    }
    /* Query device capabilities */
    ioctl(v4l2_fd, VIDIOC_QUERYCAP, &cap);
    /* Check whether it is a video capture device */
    if (!(V4L2_CAP_VIDEO_CAPTURE & cap.capabilities))
    {
        fprintf(stderr, "Error: %s: No capture video device!\n", device);
        close(v4l2_fd);
        return -1;
    }
    return 0;
}
static void v4l2_enum_formats(void)
{
    struct v4l2_fmtdesc fmtdesc = {0};
    /* Enumerate all pixel formats supported by the camera, with descriptions */
    fmtdesc.index = 0;
    fmtdesc.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    while (0 == ioctl(v4l2_fd, VIDIOC_ENUM_FMT, &fmtdesc))
    {
        // store the enumerated formats and descriptions in the array
        cam_fmts[fmtdesc.index].pixelformat = fmtdesc.pixelformat;
        strcpy((char *)cam_fmts[fmtdesc.index].description,
               (char *)fmtdesc.description);
        fmtdesc.index++;
    }
}
static void v4l2_print_formats(void)
{
    struct v4l2_frmsizeenum frmsize = {0};
    struct v4l2_frmivalenum frmival = {0};
    int i;
    frmsize.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    frmival.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    for (i = 0; cam_fmts[i].pixelformat; i++)
    {
        printf("format<0x%x>, description<%s>\n", cam_fmts[i].pixelformat,
               cam_fmts[i].description);
        /* Enumerate all capture resolutions supported by the camera */
        frmsize.index = 0;
        frmsize.pixel_format = cam_fmts[i].pixelformat;
        frmival.pixel_format = cam_fmts[i].pixelformat;
        while (0 == ioctl(v4l2_fd, VIDIOC_ENUM_FRAMESIZES, &frmsize))
        {
            printf("size<%d*%d> ",
                   frmsize.discrete.width,
                   frmsize.discrete.height);
            frmsize.index++;
            /* Get the capture frame rates for this resolution */
            frmival.index = 0;
            frmival.width = frmsize.discrete.width;
            frmival.height = frmsize.discrete.height;
            while (0 == ioctl(v4l2_fd, VIDIOC_ENUM_FRAMEINTERVALS, &frmival))
            {
                printf("<%dfps>", frmival.discrete.denominator /
                                      frmival.discrete.numerator);
                frmival.index++;
            }
            printf("\n");
        }
        printf("\n");
    }
}
static int v4l2_set_format(void)
{
    struct v4l2_format fmt = {0};
    struct v4l2_streamparm streamparm = {0};
    /* Set the frame format */
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;        // type
    fmt.fmt.pix.width = width;                     // video frame width
    fmt.fmt.pix.height = height;                   // video frame height
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_RGB565; // pixel format
    if (0 > ioctl(v4l2_fd, VIDIOC_S_FMT, &fmt))
    {
        fprintf(stderr, "ioctl error: VIDIOC_S_FMT: %s\n", strerror(errno));
        return -1;
    }
    /* Check whether the requested RGB565 pixel format took effect;
       if not, the device does not support RGB565 */
    if (V4L2_PIX_FMT_RGB565 != fmt.fmt.pix.pixelformat)
    {
        fprintf(stderr, "Error: the device does not support RGB565 format!\n");
        return -1;
    }
    frm_width = fmt.fmt.pix.width;   // get the actual frame width
    frm_height = fmt.fmt.pix.height; // get the actual frame height
    printf("Video frame size <%d * %d>\n", frm_width, frm_height);
    /* Get the streamparm */
    streamparm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    ioctl(v4l2_fd, VIDIOC_G_PARM, &streamparm);
    /* Check whether frame rate setting is supported */
    if (V4L2_CAP_TIMEPERFRAME & streamparm.parm.capture.capability)
    {
        streamparm.parm.capture.timeperframe.numerator = 1;
        streamparm.parm.capture.timeperframe.denominator = 30; // 30fps
        if (0 > ioctl(v4l2_fd, VIDIOC_S_PARM, &streamparm))
        {
            fprintf(stderr, "ioctl error: VIDIOC_S_PARM: %s\n", strerror(errno));
            return -1;
        }
    }
    return 0;
}
static int v4l2_init_buffer(void)
{
    struct v4l2_requestbuffers reqbuf = {0};
    struct v4l2_buffer buf = {0};
    /* Request frame buffers */
    reqbuf.count = FRAMEBUFFER_COUNT; // number of frame buffers
    reqbuf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    reqbuf.memory = V4L2_MEMORY_MMAP;
    if (0 > ioctl(v4l2_fd, VIDIOC_REQBUFS, &reqbuf))
    {
        fprintf(stderr, "ioctl error: VIDIOC_REQBUFS: %s\n", strerror(errno));
        return -1;
    }
    /* Set up the memory mappings */
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    for (buf.index = 0; buf.index < FRAMEBUFFER_COUNT; buf.index++)
    {
        ioctl(v4l2_fd, VIDIOC_QUERYBUF, &buf);
        buf_infos[buf.index].length = buf.length;
        buf_infos[buf.index].start = mmap(NULL, buf.length,
                                          PROT_READ | PROT_WRITE, MAP_SHARED,
                                          v4l2_fd, buf.m.offset);
        if (MAP_FAILED == buf_infos[buf.index].start)
        {
            perror("mmap error");
            return -1;
        }
    }
    /* Enqueue */
    for (buf.index = 0; buf.index < FRAMEBUFFER_COUNT; buf.index++)
    {
        if (0 > ioctl(v4l2_fd, VIDIOC_QBUF, &buf))
        {
            fprintf(stderr, "ioctl error: VIDIOC_QBUF: %s\n", strerror(errno));
            return -1;
        }
    }
    return 0;
}
static int v4l2_stream_on(void)
{
    /* Turn on the camera; it starts capturing data */
    enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    if (0 > ioctl(v4l2_fd, VIDIOC_STREAMON, &type))
    {
        fprintf(stderr, "ioctl error: VIDIOC_STREAMON: %s\n", strerror(errno));
        return -1;
    }
    return 0;
}
static void v4l2_read_data(void)
{
    struct v4l2_buffer buf = {0};
    unsigned short *base;
    unsigned short *start;
    int min_w, min_h;
    int j;
    if (width > frm_width)
        min_w = frm_width;
    else
        min_w = width;
    if (height > frm_height)
        min_h = frm_height;
    else
        min_h = height;
    buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    buf.memory = V4L2_MEMORY_MMAP;
    for (;;)
    {
        for (buf.index = 0; buf.index < FRAMEBUFFER_COUNT; buf.index++)
        {
            ioctl(v4l2_fd, VIDIOC_DQBUF, &buf); // dequeue
            for (j = 0, base = screen_base, start = buf_infos[buf.index].start;
                 j < min_h; j++)
            {
                memcpy(base, start, min_w * 2); // one RGB565 pixel takes 2 bytes
                base += width;                  // move to the next LCD line
                start += frm_width;             // move to the next line of frame data
            }
            // after processing, enqueue the buffer again, and so on
            ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
        }
    }
}
int main(int argc, char *argv[])
{
    if (2 != argc)
    {
        fprintf(stderr, "Usage: %s <video_dev>\n", argv[0]);
        exit(EXIT_FAILURE);
    }
    /* Initialize the LCD */
    if (fb_dev_init())
        exit(EXIT_FAILURE);
    /* Initialize the camera */
    if (v4l2_dev_init(argv[1]))
        exit(EXIT_FAILURE);
    /* Enumerate all formats and print the camera's supported resolutions and frame rates */
    v4l2_enum_formats();
    v4l2_print_formats();
    /* Set the format */
    if (v4l2_set_format())
        exit(EXIT_FAILURE);
    /* Initialize the frame buffers: request, map, enqueue */
    if (v4l2_init_buffer())
        exit(EXIT_FAILURE);
    /* Start video collection */
    if (v4l2_stream_on())
        exit(EXIT_FAILURE);
    /* Read the data: dequeue */
    v4l2_read_data(); // loops capturing data and displaying it on the LCD
    exit(EXIT_SUCCESS);
}

In the above example code, the image data captured by the camera is displayed on the LCD screen of the development board. We set the camera's pixel format to RGB565 because it is easy to process. The rest of the code will not be walked through here: the comments already describe it clearly, and some things that are easy to explain in a video are not so easy to describe in text.
The factory system of the development board supports Punctual Atom's ov5640, ov7725 (without FIFO) and ov2640 cameras, all of which support the RGB565 pixel format. You can also use a UVC USB camera on the board and test with one if you have it, but such USB cameras usually do not support the RGB565 format and more commonly output YUYV. The code above does not handle the YUYV format; you would need to modify it to convert the captured YUYV data into RGB565 data before the captured image can be displayed on the LCD, as sketched below.
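
As a starting point for such a modification, here is a minimal, unoptimized sketch of a YUYV (YUV 4:2:2) to RGB565 conversion using a common full-range BT.601 integer approximation; the function name and parameters are my own, not part of the routine above:

/* Convert one line of YUYV (YUV 4:2:2) pixels to RGB565.
 * in:  width pixels = width * 2 bytes, laid out Y0 U Y1 V ...
 * out: width RGB565 pixels; width is assumed to be even.
 * Approximation (scaled by 256): R = Y + 1.402*(V-128),
 * G = Y - 0.344*(U-128) - 0.714*(V-128), B = Y + 1.772*(U-128). */
static void yuyv_to_rgb565_line(const unsigned char *in,
                                unsigned short *out, int width)
{
    int i, k;
    for (i = 0; i < width; i += 2, in += 4) {
        int u = in[1] - 128;
        int v = in[3] - 128;
        int y[2] = { in[0], in[2] }; /* two pixels share one U/V pair */
        for (k = 0; k < 2; k++) {
            int r = y[k] + ((359 * v) >> 8);
            int g = y[k] - ((88 * u + 183 * v) >> 8);
            int b = y[k] + ((454 * u) >> 8);
            if (r < 0) r = 0; else if (r > 255) r = 255;
            if (g < 0) g = 0; else if (g > 255) g = 255;
            if (b < 0) b = 0; else if (b > 255) b = 255;
            /* pack 8-bit R/G/B into 5-6-5 */
            *out++ = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3);
        }
    }
}
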
Next, compile the sample code and copy the resulting executable file to the user's home directory of the development board's Linux system (screenshots omitted).
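
For reference, a typical cross-compilation command; the arm-linux-gnueabihf-gcc toolchain name is an assumption based on toolchains commonly used for this board, so substitute the compiler from your own environment:

arm-linux-gnueabihf-gcc -o v4l2_camera v4l2_camera.c
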
First, before testing, we must plug a camera into the development board. Note that, as mentioned earlier, the factory system supports the ov5640, ov7725 and ov2640, but these three cameras cannot be enabled at the same time: the factory configuration enables the ov5640 by default. If you want to use the ov7725 or ov2640 instead, you need to modify the device tree; for the specific changes, refer to
Section 3.16 of the document "Development Board CD-ROM Information A-Basic Information/[Punctual Atom] I.MX6U User Quick Experience V1.7.3.pdf".
Here the author takes the ov2640 camera as an example; it is already connected to the test board, as shown below:
Insert image description here
The other cameras are installed in the same way, with the lens facing outwards. Note that the camera must be connected before booting, not after the development board has started. Remember!
A USB camera, on the other hand, can simply be plugged into the USB HOST interface while the development board is running.
Then run the test program, passing in one argument: the device node corresponding to the camera:
Figure 25.3.4 Execute the camera test program

After the program is run, the image collected by the camera will be displayed on the LCD screen of the development board, as shown below:
Insert image description here
Please forgive the image quality: the photo was taken with a mobile phone!
Normally, after the program runs, the terminal prints the pixel formats and descriptions supported by the camera, as well as the capture resolutions, frame rates and other information it supports. However, as the output in Figure 25.3.4 shows, only the pixel format and description are printed; the resolution and frame-rate information is missing. Why? This is not a problem with our program: the camera driver is simply not complete, and the underlying driver does not implement the relevant functions. I mention it briefly here so that nobody thinks the program itself is at fault! Switching to a USB camera instead, the output looks like this:
Figure 25.3.6 USB camera print information

As you can see from the picture above, the program prints all capture resolutions and frame rates supported by the camera.
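For reference, this listing is produced with the VIDIOC_ENUM_FRAMESIZES and VIDIOC_ENUM_FRAMEINTERVALS ioctls; when a driver does not implement the corresponding callbacks, the calls fail and there is nothing to print. The fragment below is a minimal sketch of such a query, assuming v4l2_fd is the opened device and the driver reports discrete values; it is not a copy of this tutorial's code.

```c
/* Minimal sketch: enumerate frame sizes and rates for the RGB565 format,
 * assuming v4l2_fd is open and the driver reports discrete values. */
struct v4l2_frmsizeenum frmsize = {0};
struct v4l2_frmivalenum frmival = {0};

frmsize.pixel_format = V4L2_PIX_FMT_RGB565;
for (frmsize.index = 0;
     ioctl(v4l2_fd, VIDIOC_ENUM_FRAMESIZES, &frmsize) == 0;
     frmsize.index++) {
    printf("size: %ux%u\n", frmsize.discrete.width, frmsize.discrete.height);

    frmival.pixel_format = frmsize.pixel_format;
    frmival.width  = frmsize.discrete.width;
    frmival.height = frmsize.discrete.height;
    for (frmival.index = 0;
         ioctl(v4l2_fd, VIDIOC_ENUM_FRAMEINTERVALS, &frmival) == 0;
         frmival.index++)
        printf("  rate: %u fps\n", /* fps is the inverse of the frame interval */
               frmival.discrete.denominator / frmival.discrete.numerator);
}
```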
Alright, this chapter is over. By now we have covered a great deal of application programming for hardware peripherals. Try to use this knowledge flexibly and combine it into a complete, fun program; small projects are a very effective way to improve your application programming skills. Don't just follow the tutorials chapter by chapter: stop and think, get hands-on, and extend what the tutorials cover. That is how you make progress. Good luck, everybody!

A practical small project: video surveillance

Currently, common video surveillance and live-streaming systems use the RTMP and RTSP streaming protocols.
RTSP (Real-Time Streaming Protocol) is a text-based multimedia playback control protocol jointly proposed by RealNetworks and Netscape. RTSP defines how the stream is controlled, while the stream data itself is carried over RTP. RTSP offers very good real-time behavior and suits applications such as video chat and video surveillance.
RTMP (Real-Time Messaging Protocol) was proposed by Adobe to solve the multiplexing and packetizing of multimedia data streams. Its advantages are low latency, high stability and broad device support; a browser can play an RTMP stream directly once the Flash plug-in is loaded.
The difference between RTSP and RTMP:
RTSP has the best real-time performance but is complex to implement, making it suitable for video chat and video surveillance; RTMP has strong browser support and can be played directly after loading the Flash plug-in, which made it very popular. Playing RTSP in a browser, by contrast, is very difficult.
In this chapter we will show how to implement video surveillance or live streaming with FFmpeg + Nginx over RTMP.

Introduction to video surveillance

In this chapter we will use RTMP streaming media service to implement video surveillance. The RTMP streaming media service framework diagram is as follows:
Figure 34.1.1 Streaming media service

The push (publishing) end sends video data to the RTMP streaming server using the RTMP protocol, and the pull (playback) end obtains video data from the streaming server, also over RTMP; the streaming server receives the data from the push end and delivers it to whichever pull clients request it.
As the figure shows, RTMP video surveillance therefore needs three parts: a push client, a pull client and a streaming server. Do we need to implement these ourselves? Of course not. For example, we can use FFmpeg to push the stream, the VLC player to pull it, and Nginx to build the streaming server!

Nginx porting

As mentioned above, we can use Nginx to build an RTMP streaming server. Ideally you would build it on a host with a public IP address, but the author does not have one available, so here we build the streaming server on the development board itself. The push end is also the development board, so in this chapter's setup the board acts as both the streaming server and the push end.
Since we want to build the streaming server on the development board, we first need to port Nginx to it. In fact, Nginx is already included in the board's factory system, which starts Nginx (and thus a streaming service) automatically at boot, so the board is already a streaming server out of the box. We will ignore that pre-installed service here, however; in this chapter we port Nginx ourselves and then set up the streaming service on the board.

Download Nginx source code

Enter a directory in the Ubuntu system and execute the following command to download the Nginx source code:

wget http://nginx.org/download/nginx-1.20.0.tar.gz

Insert image description here
Here we download version 1.20.0, a relatively recent release. When the download completes you will have a compressed archive named nginx-1.20.0.tar.gz.
Figure 34.2.2 nginx-1.20.0.tar.gz compressed file

This is the source code package of Nginx.

Download nginx-rtmp-module module

Native Nginx does not actually support RTMP; we need to add the third-party nginx-rtmp-module to get RTMP support. Download nginx-rtmp-module with the following command:

git clone https://github.com/arut/nginx-rtmp-module.git

Figure 34.2.3 Download nginx-rtmp-module

After successful download, you will get the nginx-rtmp-module folder.

Cross compile Nginx

Unzip the downloaded nginx-1.20.0.tar.gz file:

tar -xzf nginx-1.20.0.tar.gz

Figure 34.2.4 Decompression

Decompression produces the nginx-1.20.0 folder; enter this directory. Before cross-compiling, initialize the cross-compilation environment:

source /opt/fsl-imx-x11/4.1.15-2.1.0/environment-setup-cortexa7hf-neon-poky-linux-gnueabi

Please fill in the path based on your actual installation location.
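After sourcing the script, you can quickly confirm that the cross-compilation environment is active; the Yocto SDK environment script exports CC and related variables:

echo $CC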
The overall procedure is: first configure the source code, then run make to compile it, and finally run make install to install it. There are only these three steps, but some problems will crop up during compilation, which we will deal with as we go.
Configure the source code
The first step is to configure the source code. Before configuring, a small modification is required, otherwise configuration will fail (nginx's configure script compiles and runs small test programs on the build host, which does not work when cross-compiling for ARM). First open the auto/cc/name file in the nginx source directory and comment out the "exit 1" at line 21, as shown below:
Insert image description here
Save and exit after this modification. Then open the auto/types/sizeof file, change "ngx_size=" at line 15 to "ngx_size=4", and change "$CC" at line 36 to "gcc", as shown below:
Insert image description here
Once these modifications are complete, save and exit! Then execute the following command to configure:

./configure --prefix=/home/dt/tools/nginx-1.20.0/install \
--with-http_ssl_module \
--with-http_mp4_module \
--with-http_v2_module \
--without-http_upstream_zone_module \
--add-module=/home/dt/tools/nginx-rtmp-module

In the command above, --prefix specifies the nginx installation path; for convenience the author installs straight into the install directory under the nginx source tree. --add-module adds a third-party module, in this case the nginx-rtmp-module we downloaded earlier, so --add-module must point to the nginx-rtmp-module source path. Fill in both paths according to your actual setup.
As shown below:
Insert image description here
If configuration succeeds, the following information is printed:
Insert image description here
Compile source code
After configuration completes, run make to compile:
Insert image description here
This first compilation will fail, producing the following error output:
Figure 34.2.10 Compilation error

To fix this, modify the objs/ngx_auto_config.h file in the nginx source directory (a header generated by the configure step) and add the following content, which tells nginx that the target system supports System V shared memory, something configure could not detect while cross-compiling:

#ifndef NGX_HAVE_SYSVSHM
#define NGX_HAVE_SYSVSHM 1
#endif

Insert image description here
After adding this, save and exit, then run make again; this time the compilation succeeds.
Figure 34.2.12 Compilation successful

Installation
After compiling successfully, install by running make install.
Insert image description here
The author installed nginx into the install directory under the nginx-1.20.0 directory; enter the install directory:
Figure 34.2.14 Folders under the installation directory

There are many configuration files in the conf directory, as shown below:
Figure 34.2.15 Configuration files in the conf directory

The nginx.conf configuration file is the important one here; we will make the relevant changes to it later.
There is an executable program nginx in the sbin directory:
Insert image description here
This executable is "not stripped", meaning it still contains a lot of debugging information, which is why it is so large (7.5 MB). You can run the following command to strip the debugging information and reduce the file size:

arm-poky-linux-gnueabi-strip --strip-debug nginx

Figure 34.2.17 Remove debugging information

The nginx executable is the program that starts the streaming service.
Now we need to copy these files from the installation directory to the development board's Linux system. Before copying, remove the nginx that was already ported into the factory system: enter the board's Linux system and run the following commands to delete the factory nginx program and its configuration files:

rm -rf /usr/sbin/nginx
rm -rf /etc/nginx/*

Figure 34.2.18 Delete the original nginx of the factory system

Next, copy the nginx executable from the sbin directory of the installation tree to the /home/root directory of the development board's Linux system, as shown below:
Figure 34.2.19 Copy nginx to the development board

Then copy the conf, logs, and html folders in the installation directory to the /etc/nginx directory of the development board Linux system, as shown below:
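If the board is reachable over the network, one convenient way to do these copies is scp; this assumes the board runs an SSH server, and the IP address shown is the one used later in this chapter (replace it with your own):

scp sbin/nginx root@192.168.1.114:/home/root/
scp -r conf logs html root@192.168.1.114:/etc/nginx/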
Insert image description here

Test nginx

In the previous section we ported nginx to the development board; in this section we test whether it works normally. First restart the development board; after the system comes back up, enter the /home/root directory and run the nginx program:

./nginx -V # print version information

Figure 34.3.1 View nginx version information

Execute ./nginx -h to view help information:
Figure 34.3.2 View help information

Next, start nginx with the following command (-p sets the prefix path, so nginx looks for its conf, logs and html directories under /etc/nginx):

./nginx -p /etc/nginx

At this time, the nginx service is running in the background, and you can view it through the ps command:

ps -aux

Insert image description here
At this point we can open the computer browser and enter the IP address of the development board, as shown below:
Insert image description here
Press Enter, as shown below:
Insert image description here
If the above page is displayed, it means that our nginx is working normally.

Configure nginx

Later we will use FFmpeg to push a video stream to the nginx streaming server over RTMP. Before that, we need to configure nginx: open the configuration file /etc/nginx/conf/nginx.conf and add the following content:

rtmp
{
    server
    {
        listen 1935;            # listen on port 1935
        chunk_size 4096;
        application live
        {
            allow publish 127.0.0.1;
            allow play all;
            live on;            # enable live streaming
            record off;         # disable recording
            meta copy;
        }
        application hls
        {
            live on;
            hls on;
            hls_path /tmp/hls;
            hls_fragment 8s;
        }
    }
}

As shown below:
Insert image description here
After adding, save and exit!
Then execute the following command to restart nginx:

./nginx -p /etc/nginx -s reload
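
Incidentally, before reloading you can ask nginx to check the configuration file for syntax errors; -t is a standard nginx option:

./nginx -p /etc/nginx -t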

Use FFmpeg to push streams

After nginx has reloaded, we can use FFmpeg to push a stream, sending video data to the nginx streaming server over RTMP. Execute the following command to start pushing:

ffmpeg -re -i /run/media/mmcblk0p1/testVideo.mp4 -c copy -f flv rtmp://127.0.0.1/live/mytest

Let's briefly run through these parameters: -re tells FFmpeg to read the input at its native frame rate (useful when streaming a file); -i specifies the input, here an mp4 video file; -c copy copies the video and audio streams without re-encoding; -f flv selects the FLV container that RTMP requires; and
rtmp://127.0.0.1/live/mytest is the push destination. Because the server and the push end are both the development board, the address 127.0.0.1 refers to the local streaming server.
Insert image description here
Now we can pull the stream. We use our Windows host as the playback end and pull the stream with the VLC player; you can download and install VLC yourself. After installation, open VLC, as shown below:
Figure 34.5.2 VLC software

Click "Media" in the upper left corner -> "Open Network Streaming":
Insert image description here
Enter the address and path of the streaming server; the IP address of the author's development board is 192.168.1.114. Click "Play", and VLC pulls the video data from the RTMP streaming server and plays it, as shown below:
Insert image description here
There is both picture and sound!
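As a side note, if FFmpeg is installed on the host as well, its bundled ffplay player can pull the stream instead of VLC; replace the address with your own board's IP:

ffplay rtmp://192.168.1.114/live/mytest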
Next we test with the camera, using FFmpeg to capture the camera's video data and push it to the nginx streaming server. Execute the following command (-f v4l2 selects the V4L2 capture input, -video_size and -framerate set the capture resolution and frame rate, and -q 10 sets the encoding quality):

ffmpeg -f v4l2 -video_size 320x240 -framerate 15 -i /dev/video2 -q 10 -f flv rtmp://127.0.0.1/live/mytest

Insert image description here
Here the author tested with a USB camera. After the command is executed, VLC on Windows pulls and plays the images captured by the camera, implementing camera-based video surveillance.
Figure 34.5.6 VLC streams and plays the images collected by the camera

The test reveals quite a high delay: the picture currently captured by the development board is out of sync with the picture played by VLC, with a measured delay of about 5 to 6 seconds. Why is this? The author sees two reasons:
⚫ The performance of our I.MX6U development board is too weak. Although we only run one command, FFmpeg does a great deal of work internally, such as processing the video and audio data; and because the I.MX6U has no hardware video codec, everything is done in software, which takes a lot of time (the sketch after this list shows encoder flags that can trim part of it).
⚫ In our experiment the server and the push end are both the development board, so an already weak device is doing all of the work at once, which inevitably causes large delays. If you test the same setup on a higher-performance board and compare, you will find that our board's performance really is the limiting factor!
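If you want to experiment with reducing the delay on the FFmpeg side, the encoder flags below are commonly used to cut software-encoding latency. This is only a sketch: it assumes your FFmpeg build includes the libx264 encoder, which may not be the case on the factory system, and it cannot remove the delay caused by the board's lack of a hardware codec:

ffmpeg -f v4l2 -video_size 320x240 -framerate 15 -i /dev/video2 -c:v libx264 -preset ultrafast -tune zerolatency -g 15 -f flv rtmp://127.0.0.1/live/mytest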
Well, the content of this chapter is over.
