The principle and implementation of a car surround view algorithm

0. Background

       Car surround view is a common driver-assistance technology. Fisheye cameras are installed at the front and rear of the vehicle and in the left and right rearview mirrors, and their images are combined by algorithms to give the driver rich viewpoints of the surrounding environment, thereby improving driving safety. On the other hand, with the growing adoption of automated/assisted driving technology, surround view is also an important data source and implementation basis for higher-level driving algorithms, such as moving object detection and autonomous parking. The picture below shows the display of a typical surround view system found on the Internet.

1. Function

       Here, the car surround view functions are divided into three categories: main functions, auxiliary functions, and advanced functions.

       The first category is the main functions: the various single views, the 2D bird's-eye view, the 3D surround view, and reversing guide lines.

       The second category is auxiliary functions: on top of the main functions, these include 2D/3D car model display, radar alert display, and vehicle status display (door open/closed, turn signals, etc.).

       Finally, there are advanced functions, including color consistency (reducing the brightness/color differences between the surround-view camera inputs that otherwise cause visible "color seams" in the stitched image), transparent car body (filling in the area under the vehicle using historical image data), and dynamic calibration (surround-view calibration performed while driving), etc.

       The first two categories are common and mature across many vehicle models; the advanced functions are where products differentiate themselves and can improve the user experience.
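Of the advanced functions above, color consistency can be reduced, in its simplest form, to per-camera gain compensation computed from the image regions where adjacent cameras overlap. The sketch below is only an illustration of that idea, not the method used in any particular product: it assumes we already have the mean luminance of each camera inside the overlap regions, and pulls every camera toward the global mean (production systems typically fit per-channel gains and smooth them spatially across the seams).

```python
def luma_gains(overlap_means):
    """Given the mean luminance of each camera measured in the shared
    overlap regions, compute a multiplicative gain per camera that pulls
    every camera toward the global mean, reducing visible seams."""
    target = sum(overlap_means) / len(overlap_means)
    return [target / m for m in overlap_means]

# Example: four cameras (front, right, rear, left) with slightly
# different exposure; the brighter camera gets a gain below 1.
gains = luma_gains([100.0, 120.0, 80.0, 100.0])
```

Applying each gain to its camera's pixels before stitching brings the overlap statistics into agreement, which is what removes the "color difference" at the seams.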

2. Technical solution

       On the hardware side, the main concerns are the selection of the surround-view cameras (usually fisheye cameras), the computing platform, and how the data is accessed. Here we mainly discuss the software-related solution. The following figure is a simplified system framework.

        The core operation of the surround view algorithm is to stitch the images from the 4 cameras into a bird's-eye view or a 3D surround view. Before stitching, the parameters of each camera must be obtained through calibration. Camera parameters include intrinsic and extrinsic parameters: the former cover the optical center coordinates, focal length, and distortion coefficients; the latter describe the transformation between the camera coordinate system and the vehicle coordinate system. For factory-fitted surround-view systems, calibration is generally completed on the production line.
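To make the intrinsics/extrinsics concrete, here is a sketch of how a 3D point in the vehicle frame projects into a fisheye image. It assumes the equidistant fisheye model with four distortion coefficients (the model used by OpenCV's `cv::fisheye` module); the parameter names `R`, `t`, `fx`, `fy`, `cx`, `cy` are illustrative, and a real system would use whatever its calibration toolchain produces.

```python
import numpy as np

def project_to_fisheye(p_vehicle, R, t, fx, fy, cx, cy, dist):
    """Project a 3D point (vehicle frame) into a fisheye image.

    Equidistant fisheye model:
      theta_d = theta * (1 + k1*th^2 + k2*th^4 + k3*th^6 + k4*th^8)
    where theta is the angle between the ray and the optical axis.
    """
    # Extrinsics: vehicle frame -> camera frame
    p_cam = R @ p_vehicle + t
    x, y, z = p_cam
    r = np.hypot(x, y)
    theta = np.arctan2(r, z)                 # angle from the optical axis
    k1, k2, k3, k4 = dist
    theta_d = theta * (1 + k1*theta**2 + k2*theta**4
                         + k3*theta**6 + k4*theta**8)
    scale = theta_d / r if r > 1e-9 else 1.0  # point on axis: no bending
    # Intrinsics: normalized coordinates -> pixel coordinates
    u = fx * x * scale + cx
    v = fy * y * scale + cy
    return u, v
```

A point straight ahead of the camera lands exactly at the principal point `(cx, cy)`, which is a quick sanity check for any implementation of this model.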

       The parameters obtained from the above calibration provide the mapping between the image pixel coordinate system, the camera coordinate system, and the vehicle coordinate system. The next step is image interpolation and stitching, which is technically a grid-interpolation (remapping) operation on the image. It can be implemented efficiently on the CPU with SIMD and multi-threading, or directly on the GPU. The 2D bird's-eye view and the 3D surround view follow roughly the same process, differing mainly in how the stitching mesh model is built. In addition, the characteristics of OpenGL ES make it especially suitable for 3D surround view stitching. Camera calibration and image transformation were covered in my previous blog posts, which can serve as a reference.
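The grid-interpolation idea can be sketched in two steps: a lookup table built once per calibration (every bird's-eye-view output pixel mapped back through the ground plane into fisheye input coordinates), and a cheap per-frame resampling pass. This is a minimal illustration, not a production implementation: it uses nearest-neighbour sampling and a Python loop, whereas a real system would use bilinear interpolation with SIMD, multi-threading, or the GPU as described above; `mm_per_px` and the centered output origin are assumptions.

```python
import numpy as np

def build_ground_grid(out_w, out_h, mm_per_px, project):
    """Precompute, once per calibration, the lookup table mapping every
    bird's-eye output pixel to fisheye input coordinates.  `project` is a
    function (X, Y, Z) -> (u, v) built from the calibrated parameters."""
    map_u = np.empty((out_h, out_w), np.float32)
    map_v = np.empty((out_h, out_w), np.float32)
    for j in range(out_h):
        for i in range(out_w):
            # Output pixel -> point on the ground plane (Z = 0), vehicle frame
            X = (i - out_w / 2) * mm_per_px
            Y = (j - out_h / 2) * mm_per_px
            map_u[j, i], map_v[j, i] = project(X, Y, 0.0)
    return map_u, map_v

def remap_nearest(src, map_u, map_v):
    """Per-frame resampling: fetch each output pixel through the lookup
    table (nearest-neighbour here for brevity)."""
    h, w = src.shape[:2]
    u = np.clip(np.rint(map_u).astype(int), 0, w - 1)
    v = np.clip(np.rint(map_v).astype(int), 0, h - 1)
    return src[v, u]
```

Because the expensive projection math lives entirely in the table, the per-frame cost is a pure memory-gather operation, which is exactly what makes CPU SIMD or a GPU texture lookup so effective here.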

       The single view is produced mainly by cropping or undistorting the original input image, which is still, in essence, an image grid-interpolation operation; the 2D car model is displayed by blending and overlaying image data, while the 3D car model is generally rendered with OpenGL ES.
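The 2D car model overlay mentioned above is ordinary alpha blending: an RGBA sprite of the car is composited over the stitched bird's-eye view. A minimal sketch, assuming the sprite carries its own alpha channel and fits inside the view at the given offset:

```python
import numpy as np

def overlay_car_model(bev, car_rgba, x, y):
    """Alpha-blend an RGBA car-model sprite onto the bird's-eye view,
    with the sprite's top-left corner at (x, y).  Modifies bev in place."""
    h, w = car_rgba.shape[:2]
    roi = bev[y:y + h, x:x + w].astype(np.float32)
    rgb = car_rgba[..., :3].astype(np.float32)
    a = car_rgba[..., 3:4].astype(np.float32) / 255.0  # alpha in [0, 1]
    # Standard "over" compositing: out = a * sprite + (1 - a) * background
    bev[y:y + h, x:x + w] = (a * rgb + (1 - a) * roi).astype(np.uint8)
    return bev
```

Opaque sprite pixels replace the background, fully transparent ones leave it untouched, and partial alpha gives the soft edges of the rendered car model.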

       Other signals from the vehicle are also used: the steering-wheel angle, combined with the wheelbase and other parameters, is used to compute the predicted driving trajectory; radar signals control the display of the radar markers; and the wheel speed, combined with bird's-eye-view stitching, enables the transparent car body and other functions.
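The steering-based guide line can be derived from the kinematic bicycle model: the front-wheel angle is the steering-wheel angle divided by the steering ratio, and the turning radius is R = wheelbase / tan(front-wheel angle), so the predicted path is a circular arc. The sketch below is a simplified illustration; the steering ratio of 16 and the arc length are assumed example values, not figures from the original article.

```python
import math

def guideline_arc(steer_deg, wheelbase_m, steering_ratio=16.0,
                  n=20, length_m=5.0):
    """Predict the rear-axle path from the steering-wheel angle using the
    kinematic bicycle model.  Returns (lateral, forward) points in the
    vehicle frame, to be projected into the bird's-eye view for drawing."""
    wheel_rad = math.radians(steer_deg / steering_ratio)
    if abs(wheel_rad) < 1e-6:                  # wheel straight: straight line
        return [(0.0, s * length_m / n) for s in range(n + 1)]
    R = wheelbase_m / math.tan(wheel_rad)      # turning radius
    pts = []
    for s in range(n + 1):
        phi = (s * length_m / n) / R           # arc angle travelled so far
        # Circle centred at (R, 0) in the vehicle frame
        pts.append((R * (1 - math.cos(phi)), R * math.sin(phi)))
    return pts
```

Sampling this arc and projecting the points through the same vehicle-to-image mapping used for stitching yields the dynamic auxiliary lines drawn on the display.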

3. Function realization

       At present, a relatively complete car surround view algorithm has been implemented. Its functions include surround-view calibration, 2D bird's-eye view, 3D surround view, single view, radar markers, dynamic driving guide lines, transparent car body, and color consistency. The specific effect is shown in the figure below. The algorithm renders through OpenGL ES on the platform GPU and achieves a running speed of 1080p @ 25 FPS.


Origin: blog.csdn.net/lwx309025167/article/details/120096776