Real-time lane segmentation & lane keeping system based on OpenCV (source code & tutorial)

1. Research Background

Automotive active safety systems can proactively prevent and avoid hazards, effectively mitigating the current high frequency of road traffic accidents in China. Research in this area has therefore received strong national support.
The Lane Keeping Assistance System (LKAS), a type of ADAS, can effectively prevent traffic accidents caused by vehicles drifting out of their lane. According to industry experts, accidents caused by lane departure account for roughly 50% of road traffic accidents worldwide, so LKAS research deserves attention from both industry and academia. Moreover, using a camera as the primary environmental perception sensor of the LKAS significantly reduces the system's R&D and production costs, which in turn raises its market penetration and installation rate. For these reasons, studying a machine-vision-based lane keeping assistance system with a camera as the main sensor is of great practical significance.

2. Picture demonstration

2.png

3.png

4.png

3. Video demonstration

Real-time lane segmentation & lane keeping system based on OpenCV (source code & tutorial) — bilibili

4. Algorithm flow chart

Building on the algorithm flow proposed in the referenced blog, with minor improvements: in a machine-vision-based lane keeping assistance system, the lane line recognition and tracking module extracts accurate road image information and then derives the road geometry and the vehicle's pose relative to the lane for use by subsequent modules. Based on the lane recognition process shown in the figure, this chapter designs an efficient and stable lane line recognition and tracking algorithm.
image.png
(1) Image preprocessing: the original road image is processed to obtain a lane line feature map with less noise.
(2) Lane line feature point extraction: extract the lane line feature points from the lane line feature map.
(3) Lane line fitting: select an appropriate mathematical model and fit the lane line to the extracted feature points.
(4) Lane line tracking: select appropriate tracking objects and design corresponding tracking strategies to improve the stability and accuracy of lane line recognition.

5. Image preprocessing

The original road image contains a great deal of useless information and many interference points. Preprocessing effectively reduces the amount of data the algorithm must process while filtering out most of the image noise, which lowers the overall development difficulty of the lane line recognition algorithm and improves its operating efficiency.
Image feature extraction is an essential part of image preprocessing and determines the basic design of the entire lane line recognition algorithm, so preprocessing must be designed around it. After comparing the advantages and disadvantages of existing feature extraction methods, this paper selects a linear filter based on the lane line width feature and designs a set of preprocessing steps around it. The workflow is shown in the figure; the steps are image grayscaling, image feature extraction, image binarization, inverse perspective transformation, and region-of-interest extraction.
image.png

6. Lane line edge detection

Lane edge detection is the most widely used image feature extraction method in lane recognition. In a grayscale image, pixels where the gray value changes sharply are called edge pixels. The boundary between a lane line and the road surface usually exhibits such an abrupt change, i.e., edge pixels belonging to the lane line exist along its boundary. The algorithm can therefore extract these edge pixels through edge detection and use them to recognize and localize the lane line.
In image processing, the Canny algorithm is often used for edge detection. As shown in the figure, edge detection transforms the original image (a) into the edge image (b). Comparing the two, only the objects' edge pixels are preserved in the edge image.
image.png

However, edge detection discards much important lane line information, such as the geometry and gray values of the lane line, so lane recognition algorithms built on edge detection generally have a high false detection rate. The figure shows false detections by this type of algorithm in some common environments. When a non-lane-line edge runs parallel to a lane line edge, the algorithm lacks other information to distinguish the two and is likely to produce false detections.
image.png
image.png
In addition, it is difficult to recognize curved lane lines from lane line edge pixels alone. Existing edge detection algorithms are also generally slow, with poor real-time performance. This paper therefore ultimately decided not to use edge detection for image feature extraction.

7. Linear filtering based on lane width features

In a real road image, lane line pixels have similar gray values, non-lane-line road pixels also have similar gray values, and the former are generally brighter than the latter. After linear filtering, a real road image therefore exhibits the "light-dark" distribution shown in the figure.
The figure shows the effect of linear filtering on a real road image. As seen in (b), after filtering, lane line pixels and non-lane-line road pixels are clearly distinguished visually, and information such as the lane lines' width, length, and general shape is preserved.
image.png
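The width-based linear filter can be sketched as follows. The exact filter form used in this paper is not given above, so this is a common simplified variant: a pixel responds strongly when it is brighter than the pixels one lane-width to its left and right, where `m` (an assumed lane width in pixels) is the only parameter.

```python
import numpy as np

def lane_width_filter(gray_row, m):
    """Row-wise width filter (simplified, assumed variant).

    Response is high where a pixel is brighter than the pixels m columns
    to its left and right, i.e. inside a bright stripe about m wide.
    """
    row = gray_row.astype(np.int32)
    left = np.roll(row, m)    # pixel m columns to the left
    right = np.roll(row, -m)  # pixel m columns to the right
    resp = 2 * row - left - right - np.abs(left - right)
    return np.clip(resp, 0, 255).astype(np.uint8)
```

Applied row by row to the grayscale image, this keeps bright stripes of roughly lane-line width while suppressing wider bright regions (zebra crossings) and dark marks (brake marks, water stains).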
In addition, linear filtering based on the lane line width feature suppresses interference from the following sources:
(1) Zebra crossings and other white markings wider than a lane line.
(2) Black brake marks or water stains on the road surface.
(3) Roadside curbs.
The figure shows the result of linearly filtering road images containing these interference sources; for ease of observation, the filtered images are also binarized. After filtering, most pixels of these interference sources become "dark", i.e., they are filtered out.
image.png

8. Inverse perspective transformation

Camera imaging is a perspective process, which makes lane lines that are parallel on the ground intersect at a point in the image. The purpose of inverse perspective transformation is to remove this perspective effect, restoring the intersecting lane lines in the image to a parallel state. When the camera's intrinsic and extrinsic parameters are known, the algorithm proposed in the referenced blog can apply the inverse perspective transformation to the original image and obtain an inverse perspective image. The process is as follows:
image.png
The figure shows the binarized feature map of this paper after inverse perspective transformation. Clearly, the non-parallel lane lines return to a parallel state. For convenience, this paper refers to the binarized feature map after inverse perspective transformation as the "inverse perspective feature map".
image.png

9. Lane line fitting

In a machine-vision-based lane keeping assist system, the output of the lane line recognition and tracking module is a parameterized mathematical model of the lane line, from which the lane departure warning module and the lane keeping control module obtain the road information they need. A lane line model with a reasonable form and accurate parameters is therefore key to the high-performance operation of the recognition and tracking algorithm. Accordingly, the lane line fitting stage selects an appropriate lane line model and determines its parameters from the extracted feature points.
Two methods are commonly used to fit the lane line model: random sample consensus (RANSAC) and least squares. RANSAC is robust to noise points, but it requires many iterations and is slow. Least squares is faster but easily disturbed by noise points. In this paper, the extracted lane line feature points contain no or only a few noise points, so least squares yields high-precision model parameters while preserving computational efficiency. This paper therefore fits the lane line model with least squares.
For a mathematical model with undetermined parameters, the least squares algorithm finds the best model parameters by minimizing the sum of squared errors between the model and known data points.
image.png

Solving the resulting linear system yields definite values for the parameters a, b and c, completing the fit of the quadratic curve model C: x = ay² + by + c.
The figure shows the algorithm's fitting results for different types of lane lines. As can be seen, the quadratic curve model fits the lane lines very well.
image.png
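The least-squares fit of the quadratic model x = ay² + by + c reduces to a single `np.polyfit` call with x and y swapped (a sketch, assuming the feature points are given as (x, y) pixel pairs):

```python
import numpy as np

def fit_lane(points):
    """Fit x = a*y^2 + b*y + c to (x, y) feature points by least squares.

    y is the independent variable because lane lines are near-vertical
    in the inverse perspective feature map.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    a, b, c = np.polyfit(y, x, 2)
    return a, b, c
```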

10. Lane line tracking

Generally, in the feature maps of consecutive frames, the starting point, overall position, and general shape of the lane lines change only slightly. Based on this, selecting a suitable tracking object and designing a reasonable tracking strategy helps improve the algorithm's recognition accuracy and success rate while reducing its false detection and missed detection rates.
The lane line tracking strategy adopted in this paper is as follows:
(1) Use the lane line starting point detected in frame k-1 to guide detection of the starting point in frame k.
(2) Use the lane line parameter model of frame k-1 to guide extraction of lane line feature points in frame k.
(3) If the lane line cannot be successfully identified in frame k, proceed as follows:
a. If the lane line parameter model of frame k-1 exists and was not itself inherited from frame k-2, frame k inherits it. Here, "inheriting" means directly reusing the lane line detection result of another frame.
b. If the model of frame k-1 exists but was inherited from frame k-2, frame k does not inherit it.
c. If the model of frame k-1 does not exist, the model of frame k does not exist either.
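The inheritance rules (a)-(c) above can be sketched as a small state update. This is a hypothetical helper, not code from the project; the model objects are treated as opaque values.

```python
def update_track(detected_model, prev_model, prev_inherited):
    """Per-frame tracking update implementing rules (a)-(c).

    Returns (model_for_frame_k, inherited_flag). A model may be
    inherited at most once, never across two consecutive frames.
    """
    if detected_model is not None:
        # Lane line recognized directly in frame k: no inheritance needed.
        return detected_model, False
    if prev_model is not None and not prev_inherited:
        # Rule (a): reuse frame k-1's model exactly once.
        return prev_model, True
    # Rule (b): previous model was already inherited, or
    # rule (c): previous model does not exist -> no model for frame k.
    return None, False
```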

11. Core code implementation

import cv2 as cv
import numpy as np

# Canny edge detection
def do_canny(frame):
	# Convert the frame to grayscale to discard redundant color information
	gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
	# Gaussian blur to suppress noise and smooth the image
	blur = cv.GaussianBlur(gray, (5, 5), 0)
	# Canny edge detection with hysteresis thresholds
	# minVal = 50
	# maxVal = 150
	canny = cv.Canny(blur, 50, 150)

	return canny

# Region-of-interest segmentation: mask out irrelevant parts of the image
def do_segment(frame):
	# Image height (note OpenCV's coordinate system: origin at the top-left
	# corner, x pointing right and y pointing down)
	height = frame.shape[0]

	# Triangular region of interest defined by three vertices
	polygons = np.array([
		[(0, height),
		 (800, height),
		 (380, 290)]
		])

	# Mask with the same shape as frame, initialized to all zeros
	mask = np.zeros_like(frame)

	# Fill the triangular region with 255;
	# everything outside it stays 0
	cv.fillPoly(mask, polygons, 255)

	# Bitwise AND of frame and mask keeps only the region of interest
	segment = cv.bitwise_and(frame, mask)

	return segment

# Calibrate the left and right lane boundaries
def calculate_lines(frame, lines):
	# Two lists collecting (slope, intercept) pairs for the left and
	# right boundaries
	left = []
	right = []

	# Iterate over the detected line segments
	for line in lines:
		# Flatten the segment endpoints from 2-D to 1-D
		x1, y1, x2, y2 = line.reshape(4)

		# Fit a first-degree polynomial to the endpoints, returning
		# the slope and y-intercept
		parameters = np.polyfit((x1, x2), (y1, y2), 1)
		slope = parameters[0]
		y_intercept = parameters[1]

		# The sign of the slope distinguishes left from right:
		# in OpenCV's coordinate system the left boundary has slope < 0
		# and the right boundary has slope > 0
		if slope < 0:
			left.append((slope, y_intercept))
		else:
			right.append((slope, y_intercept))

	# Average all left and all right candidates into a single
	# slope/intercept per boundary
	left_avg = np.average(left, axis=0)
	right_avg = np.average(right, axis=0)
	# Convert each slope/intercept pair back to endpoints x1, y1, x2, y2
	left_line = calculate_coordinate(frame, parameters=left_avg)
	right_line = calculate_coordinate(frame, parameters=right_avg)

	return np.array([left_line, right_line])

# Convert a slope and intercept into OpenCV pixel coordinates
def calculate_coordinate(frame, parameters):
	# Unpack the slope and intercept
	slope, y_intercept = parameters

	# Start the line at the bottom of the frame
	# and end it 150 pixels above the bottom
	y1 = frame.shape[0]
	y2 = int(y1 - 150)
	# Solve y = slope * x + intercept for x at y1 and y2
	x1 = int((y1 - y_intercept) / slope)
	x2 = int((y2 - y_intercept) / slope)
	return np.array([x1, y1, x2, y2])

# Visualize the lane lines on an empty overlay
def visualize_lines(frame, lines):
	lines_visualize = np.zeros_like(frame)
	# Check that lines is not empty
	if lines is not None:
		for x1, y1, x2, y2 in lines:
			# Draw each line in red, 5 pixels thick
			cv.line(lines_visualize, (x1, y1), (x2, y2), (0, 0, 255), 5)
	return lines_visualize

......

12. System integration

For the complete source code, an environment deployment video tutorial, and the custom UI shown below:
1.png
refer to the blog "Real-time Lane Segmentation & Lane Keeping System Based on OpenCV (Source Code & Tutorial)"



Origin blog.csdn.net/qunmasj/article/details/128602966