A Survey of ADAS Camera Image Stitching Algorithms

Input and output interfaces

Input:

(1) Resolution of the video images captured by the four cameras (int, int)

(2) Format of the video images captured by the four cameras (RGB, YUV, MP4, etc.)

(3) Camera calibration parameters: the optical center position (x, y) and five distortion coefficients (two radial, two tangential, one prism), floating point (float)

(4) Camera initialization parameters: camera mounting position, rotation angles about the three initial coordinate axes, vehicle speed, etc., plus image width and height (float, float)

Output:

(1) Coordinate positions of the fused and stitched image/video (float)

(2) Resolution of the stitched and fused video images (int, int)

(3) Format of the stitched and fused image/video (RGB, YUV, MP4, etc.)

(4) Obstacle warnings for the area around the vehicle (char)
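A minimal sketch of how these input and output interfaces might be grouped in code; all type and field names below are illustrative assumptions, not part of the original specification:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CameraCalibration:
    # Optical center (x, y) plus five distortion coefficients:
    # two radial, two tangential, one prism (illustrative grouping).
    center: Tuple[float, float]
    radial: Tuple[float, float]
    tangential: Tuple[float, float]
    prism: float

@dataclass
class CameraInit:
    # Mounting position and rotation angles about the three initial axes.
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float]
    vehicle_speed: float
    image_width: int
    image_height: int

@dataclass
class StitcherInput:
    resolution: Tuple[int, int]            # (width, height) of each camera stream
    pixel_format: str                      # "RGB", "YUV", "MP4", ...
    calibration: List[CameraCalibration]   # one entry per camera
    init_params: List[CameraInit]

@dataclass
class StitcherOutput:
    fused_coordinates: List[Tuple[float, float]]  # stitching coordinates
    resolution: Tuple[int, int]
    pixel_format: str
    obstacle_warning: str                          # warning code around the vehicle
```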

22.1  Function Definition

1) Compute the coordinate positions for image/video stitching and fusion.

2) Compute the resolution of the stitched and fused video images.

3) Determine the format of the stitched and fused image/video.

4) Detect obstacles around the vehicle and issue warnings.

22.2  Technology Roadmap

A 360° panoramic-view parking assist system installs wide-angle cameras at the front, rear, left, and right of the vehicle, acquires video images in these four directions, fuses and stitches them into a panoramic view around the vehicle body, and finally displays the result on the center-console screen to extend the driver's field of vision. With such a system, a driver sitting in the car can see at a glance whether obstacles are present around the vehicle, together with their relative direction and distance, and can therefore maneuver calmly when backing into a tight space, in a congested parking lot, or on a complicated road, effectively avoiding scrapes, collisions, drops, and similar accidents. At the same time, the panoramic view can provide support for the recognition, detection, and tracking algorithms of an automated driving system.

Fujitsu has developed a video imaging technology that produces a complete real-time 360° view of the vehicle surroundings. Four cameras are mounted around the periphery of the vehicle, at the front, rear, left, and right, and the video images of the surroundings are synthesized with Fujitsu's 3D virtual projection / viewpoint conversion technology. An advanced three-dimensional algorithm blends the four independent camera images smoothly into a seamless and clear 360° view. Specifically, the four camera images are transmitted to a video-processing LSI that includes a 3D video capture function; the camera images are then combined into a single 3D image in real time and projected onto a bowl-shaped three-dimensional mesh, generating a virtual 3D surround video whose viewpoint of the vehicle surroundings can be changed.

 

 

 

Figure 1. Flowchart of Fujitsu's surround-view solution

To meet the real-time requirements of video stitching, and considering that the camera mounting positions and the relative positions and angles between the cameras are essentially fixed, this project adopts a multi-camera video stitching method that combines stitching of specific images with look-up tables. In the initialization phase, checkerboard calibration images placed at the front, rear, left, and right of the vehicle are first captured by the four cameras; from these images the calibration parameters, i.e., the distortion-correction parameters of each camera, are computed and stored, and they are used to correct image distortion and eliminate camera-lens distortion. The distortion-corrected calibration images are then projectively transformed, and the projective-transformation parameters are computed and stored. Next, specific images with rich feature points, placed in the four directions around the vehicle, are captured; they are distortion-corrected by looking up the stored distortion-correction parameters and converted to bird's-eye views by looking up the projective-transformation parameters. Finally, ORB (Oriented FAST and Rotated BRIEF) features are extracted from the four bird's-eye views and coarsely matched; the RANSAC (Random Sample Consensus) algorithm removes false matches and fits an initial value of the homography matrix, which is then refined with the nonlinear Levenberg-Marquardt iterative least-squares method. After image registration, the views are fused and stitched to generate a 360° panoramic bird's-eye view. While the parking assist system is running, the video images of the four cameras are stitched into a virtual panoramic view by looking up the saved distortion-correction parameters, projective-transformation parameters, and homography parameters.
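The registration step above can be sketched roughly with OpenCV as follows; the image names, ORB feature budget, and RANSAC threshold are illustrative assumptions, not values from the project. Note that cv2.findHomography with the RANSAC flag also refines the estimate on the inlier set with the Levenberg-Marquardt method.

```python
import cv2
import numpy as np

def register_pair(img_ref, img_new, max_features=2000):
    """Estimate the homography mapping img_new onto img_ref via ORB + RANSAC."""
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img_ref, None)
    kp2, des2 = orb.detectAndCompute(img_new, None)

    # Coarse matching with Hamming distance (ORB descriptors are binary).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects false matches; OpenCV then refines the homography
    # on the inliers with Levenberg-Marquardt.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC,
                                        ransacReprojThreshold=3.0)
    return H, inlier_mask

# Hypothetical usage with two already-rectified bird's-eye views:
# front = cv2.imread("front_birdseye.png", cv2.IMREAD_GRAYSCALE)
# left  = cv2.imread("left_birdseye.png", cv2.IMREAD_GRAYSCALE)
# H, _ = register_pair(front, left)
# warped_left = cv2.warpPerspective(left, H, (front.shape[1], front.shape[0]))
```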

Because the camera's intrinsic and extrinsic calibration parameters strongly affect the accuracy of the projected image, the algorithm must be tuned to the specific camera mounting; to meet the real-time requirements of an embedded system, the algorithm needs continuous optimization; and the procedure should be simplified or automated as far as possible.

 

 

 

Figure 2. The algorithm flow

 

 

 

Figure 3. Camera coordinate system and imaging

The main mathematical principle is as follows: a point (Xw, Yw, Zw) in the world coordinate system is projected onto an image-plane pixel (u, v) through a homogeneous coordinate transformation:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \alpha_u & 0 & u_0 & 0 \\ 0 & \alpha_v & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & t \\ \mathbf{0}^{T} & 1 \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 M_2 \tilde{X}_w = M \tilde{X}_w
$$

where s is an arbitrary non-zero scale factor; αu = f / dx, with f the focal length of the camera and dx the width of one pixel along the image x axis, so αu is the scale factor of the u axis, also called the normalized focal length along u; likewise dy is the height of one pixel along the y axis and αv is the scale factor of the v axis, also called the normalized focal length along v; (u0, v0) is the principal point; R and t are the rotation and translation of the camera coordinate system relative to the world coordinate system; M1 is determined only by the camera's internal parameters and is called the camera intrinsic matrix; M2 is determined by the camera's orientation with respect to the world coordinate system and is called the camera extrinsic matrix; and M is a 3 × 4 matrix, called the projection matrix, which converts world coordinates to image coordinates. It follows that once the camera's intrinsic and extrinsic parameters are known, the projection matrix M is known, and for any point whose spatial coordinates are known, the corresponding pixel coordinates (u, v) can be obtained.
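As a quick numerical illustration of this projection (all intrinsic and extrinsic values below are made up for the example):

```python
import numpy as np

# Assumed intrinsics: normalized focal lengths alpha_u, alpha_v and
# principal point (u0, v0); illustrative values only.
M1 = np.array([[800.0,   0.0, 640.0, 0.0],
               [  0.0, 800.0, 360.0, 0.0],
               [  0.0,   0.0,   1.0, 0.0]])

# Assumed extrinsics: rotation R and translation t of the world frame
# expressed in the camera frame.
R = np.eye(3)
t = np.array([[0.0], [0.0], [5.0]])
M2 = np.vstack([np.hstack([R, t]), [0.0, 0.0, 0.0, 1.0]])

M = M1 @ M2                           # 3x4 projection matrix
Xw = np.array([1.0, 0.5, 0.0, 1.0])   # homogeneous world point
uvs = M @ Xw
u, v = uvs[0] / uvs[2], uvs[1] / uvs[2]   # divide by the scale factor s
print(u, v)                           # resulting pixel coordinates
```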

 

 

Zhang Zhengyou's calibration method uses a planar checkerboard as the calibration target, so the world coordinate system can be chosen with the target lying in the Zw = 0 plane. In that case the third column of the rotation matrix drops out and the projection reduces to a 3 × 3 homography between the target plane and the image:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= A \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix}
= H \begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix}
$$

where A is the intrinsic matrix and r1, r2 are the first two columns of the rotation matrix.
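A hedged sketch of this calibration step using OpenCV's checkerboard calibration; the board size, square size, and image folder below are assumptions:

```python
import cv2
import numpy as np
import glob

pattern = (9, 6)      # inner corners of the assumed checkerboard
square = 0.025        # assumed square size in metres

# World coordinates of the corners, with the target lying in the Zw = 0 plane.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, img_size = [], [], None
for path in glob.glob("calib/front/*.png"):   # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(objp)
        img_pts.append(corners)

# calibrateCamera returns the intrinsic matrix K, OpenCV's default distortion
# coefficients (k1, k2, p1, p2, k3), and per-view extrinsics; K and the
# distortion coefficients are stored for the later look-up stage.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img_size, None, None)
```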

 

 

In Zhang's method the lens distortion is modeled with radial terms; denoting the ideal (distortion-free) pixel coordinates by (u, v), the observed distorted coordinates by (ŭ, v̆), and the ideal normalized image coordinates by (x, y),

$$
\breve{u} = u + (u - u_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right], \qquad
\breve{v} = v + (v - v_0)\left[k_1(x^2 + y^2) + k_2(x^2 + y^2)^2\right]
$$

where k1 and k2 are the radial distortion coefficients. Writing these relations in matrix form over all observed corner points gives the distortion-correction formula, from which k1 and k2 can be solved by least squares.
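In line with the look-up-table approach described in the roadmap, the distortion correction can be precomputed once as a remap table and then applied per frame; a minimal sketch, assuming K and dist come from the calibration step above:

```python
import cv2

# Precompute the undistortion look-up table once (initialization phase).
h, w = 720, 1280                      # assumed camera resolution
map1, map2 = cv2.initUndistortRectifyMap(
    K, dist, None, K, (w, h), cv2.CV_16SC2)

# At runtime each frame is corrected with a cheap per-pixel table look-up:
# frame = ...  (a captured camera image)
# undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```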

 

A projective (perspective) transformation is then applied to the distortion-corrected image to map the ground plane to a bird's-eye view; its 3 × 3 transformation matrix is estimated from point correspondences between the corrected image and the desired bird's-eye view.
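A minimal sketch of the bird's-eye projective transformation with OpenCV; the four point correspondences below are invented for illustration (in practice they come from the calibration target on the ground plane):

```python
import cv2
import numpy as np

# Four ground-plane points as seen in the undistorted image (assumed values)...
src_pts = np.float32([[420, 560], [860, 560], [1180, 710], [100, 710]])
# ...and where those points should land in the bird's-eye view (assumed values).
dst_pts = np.float32([[300, 200], [500, 200], [500, 600], [300, 600]])

P = cv2.getPerspectiveTransform(src_pts, dst_pts)   # 3x3 projective matrix
# birdseye = cv2.warpPerspective(undistorted, P, (800, 800))
```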

 

 

Composing the distortion correction with the projective transformation yields the formula that maps each original pixel coordinate directly to its bird's-eye-view coordinate; this composed mapping is what is precomputed and stored in the look-up table.

 

 

The homography matrix used for mosaicking the adjacent bird's-eye views is solved from their matched feature points, as described in the registration step above.

 

 

The overlapping region of adjacent views is fused with the averaging method: each pixel of the overlap is a weighted average (alpha blend) of the corresponding pixels from the two views.
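A minimal sketch of average (alpha) fusion of a horizontal overlap strip, assuming the two views are already registered and the overlap width is known:

```python
import numpy as np

def blend_overlap(img_a, img_b, overlap):
    """Average-blend two same-height views whose last `overlap` columns of
    img_a coincide with the first `overlap` columns of img_b."""
    h = img_a.shape[0]
    out_w = img_a.shape[1] + img_b.shape[1] - overlap
    out = np.zeros((h, out_w, 3), dtype=np.float32)

    # Non-overlapping parts are copied straight through.
    out[:, :img_a.shape[1] - overlap] = img_a[:, :-overlap]
    out[:, img_a.shape[1]:] = img_b[:, overlap:]

    # Alpha ramps linearly from 1 to 0 across the overlap (weighted average).
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]
    out[:, img_a.shape[1] - overlap:img_a.shape[1]] = (
        alpha * img_a[:, -overlap:] + (1.0 - alpha) * img_b[:, :overlap])
    return out.astype(np.uint8)
```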

 

 

Finally, a perspective-transformation matrix projects the fused view onto the bowl-shaped 3-D model, so that the viewing perspective around the vehicle can be changed.
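As a rough illustration only, a bowl-shaped grid such as the one mentioned above could be parameterized like this (the flat radius and curvature values are arbitrary assumptions):

```python
import numpy as np

def bowl_mesh(n_r=64, n_theta=128, flat_radius=4.0, curvature=0.15):
    """Bowl surface: flat near the vehicle, curving upward beyond flat_radius."""
    r = np.linspace(0.0, 12.0, n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta)
    R, T = np.meshgrid(r, theta)
    X, Y = R * np.cos(T), R * np.sin(T)
    Z = np.where(R <= flat_radius, 0.0, curvature * (R - flat_radius) ** 2)
    return X, Y, Z   # vertices to be textured with the fused panoramic image
```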

 

 

  

Figure 4. Alpha fusion

22.3  Key Technical Parameters and Performance Indicators

The project defines an adaptive, feature-point-based method for evaluating the stitching quality of the surround-view system and automatically fine-tuning the stitching, comprising the following steps:

Step A: obtain the images before stitching, and compute the cropping positions of the images according to the same parameters used during stitching;

Step B: use an adaptive feature-point algorithm to obtain the rotation and translation matrix between the pre-stitching images at the same position;

Step C: compute the angle and displacement information from the rotation/translation matrix and use them as the evaluation index of the surround-view stitching quality, then use this matrix to fine-tune the stitching result (a sketch follows below).
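A hedged sketch of steps B and C: here OpenCV's estimateAffinePartial2D stands in for the adaptive feature-point algorithm, fitting a rotation-plus-translation model to matched points and reading out the angle and displacement as the evaluation index. Function and variable names are illustrative:

```python
import cv2
import numpy as np
import math

def stitch_quality(pts_ref, pts_cur):
    """Fit a similarity transform between matched points (Nx2 float arrays)
    from the two views and report the residual angle (deg) and shift (px)."""
    A, inliers = cv2.estimateAffinePartial2D(pts_cur, pts_ref, method=cv2.RANSAC)
    angle = math.degrees(math.atan2(A[1, 0], A[0, 0]))   # residual rotation
    shift = float(np.hypot(A[0, 2], A[1, 2]))            # residual displacement
    return angle, shift, A   # A can also be used to fine-tune the stitching
```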
