Lane Line Detection: Using Edge Detection to Identify Lane Lines in Images

foreword

First, let me recommend a few content-packed columns!

These two are records from my study of the Linux operating system; I hope they help your learning:

Operating Systems: https://blog.csdn.net/yu_cblog/category_12165502.html?spm=1001.2014.3001.5482
Linux Systems: https://blog.csdn.net/yu_cblog/category_11786077.html?spm=1001.2014.3001.5482

These two are columns on learning data structures and on hand-implementing the containers of the STL standard template library:

STL Source Code Analysis: https://blog.csdn.net/yu_cblog/category_11983210.html?spm=1001.2014.3001.5482
Hand-Written Data Structures: https://blog.csdn.net/yu_cblog/category_11490888.html


1. Summary 

In this experiment, by studying the principles of edge detection and using the MATLAB programming language, we output an edge-detection image for a given input image and complete lane line recognition.

2. Experiment content and purpose

Output the edge detection image of the given image and complete the lane line recognition.

3. Description of relevant experimental principles

Edge detection is an image processing technique used to detect the edges or contours of objects or scenes in an image. Its principle is to identify edge positions by analyzing brightness or color changes in the image. Commonly used edge detection methods include Sobel, Prewitt, Canny, etc.

The following is the main process of this experiment:

Principles of the main steps

Gaussian filtering:

Gaussian filtering is a common image processing technique used in applications such as smoothing images, removing noise, and edge detection. It is based on the concept of a Gaussian function (also known as a normal distribution function), which achieves a smoothing effect by taking a weighted average of the pixels in the image.

The formula of Gaussian filtering is as follows:
G(x, y) = \frac{1}{2\pi\sigma^2} \cdot e^{-\frac{x^2 + y^2}{2\sigma^2}}

Here, G(x, y) is the weight of the Gaussian filter at coordinates (x, y), and σ is the standard deviation of the Gaussian function, which controls the degree of blurring: the larger the standard deviation, the stronger the blur.

The principle of Gaussian filtering is to weight the pixels in the image by the Gaussian function: the center pixel of the filter receives the largest weight, and pixels farther from the center receive smaller weights. Taking this weighted average over neighboring pixels averages out noise while preserving the overall structure and edge features of the image.
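The formula above can be sampled on an integer grid to build a discrete filter kernel. The experiment itself uses MATLAB; the following is a small NumPy sketch for illustration, with the kernel renormalized so its weights sum to 1 (the raw samples of the continuous formula do not sum exactly to 1 on a finite grid):

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Sample G(x, y) = exp(-(x^2+y^2)/(2*sigma^2)) / (2*pi*sigma^2)
    on a size x size integer grid centered at the origin, then
    renormalize so the weights sum to 1."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()

k = gaussian_kernel(5, 2.0)
# The center weight is the largest, and the kernel is symmetric,
# so distant pixels contribute less to the weighted average.
```

Convolving an image with such a kernel is exactly what `imgaussfilt` does in the MATLAB code below.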

The Sobel operator calculates the gradient:

Sobel operator in the horizontal direction:
G_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}

Sobel operator in the vertical direction:
G_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}

These templates are 3×3 matrices corresponding to differentiation in the horizontal and vertical directions, respectively. Convolving them with the image yields the image's gradient values in the horizontal and vertical directions.
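As a quick sanity check of these masks, the NumPy sketch below (for illustration; the experiment uses MATLAB) applies them to a tiny image containing a single vertical edge. The horizontal mask G_x responds strongly across the edge while the vertical mask G_y stays at zero:

```python
import numpy as np

# The Sobel masks from the formulas above
Gx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
Gy = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])

# Tiny test image: dark on the left, bright on the right (a vertical edge)
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)

def sobel_at(image, i, j):
    """Gradient (gx, gy) at pixel (i, j): elementwise product of the
    3x3 neighborhood with each mask, summed (i.e., correlation)."""
    patch = image[i-1:i+2, j-1:j+2]
    return (patch * Gx).sum(), (patch * Gy).sum()

gx, gy = sobel_at(img, 1, 1)  # pixel just left of the edge
# gx is large (the edge), gy is 0 (no horizontal edge present)
```

The gradient magnitude is then sqrt(gx² + gy²) and the gradient direction is atan2(gy, gx), which is what the non-maximum suppression step consumes.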

Non-maximum suppression:

Non-maximum suppression finds the local maxima of the gradient magnitude: pixels that are not local maxima along the gradient direction have their gray value set to 0, while the maxima are set to 1. This eliminates a large number of non-edge pixels and yields a binary image whose edges are, ideally, a single pixel wide. When the gradient direction falls between the pixel axes, the two comparison values are obtained by linear interpolation:
\begin{array}{l} g_{up}(i, j) = (1-t) \cdot g_{xy}(i, j+1) + t \cdot g_{xy}(i-1, j+1) \\ g_{down}(i, j) = (1-t) \cdot g_{xy}(i, j-1) + t \cdot g_{xy}(i+1, j-1) \end{array}
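The idea is easiest to see in the simplest case: a purely horizontal gradient with t = 0, where the interpolated neighbors reduce to the pixels immediately left and right. A NumPy sketch of that case, for illustration (the experiment's full MATLAB version handles all gradient directions):

```python
import numpy as np

def nms_horizontal(mag):
    """Keep a pixel only if it is >= its left and right neighbors:
    the t = 0, horizontal-gradient special case of the interpolation
    formulas above. Non-maxima are set to 0."""
    out = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            if mag[i, j] >= mag[i, j-1] and mag[i, j] >= mag[i, j+1]:
                out[i, j] = mag[i, j]
    return out

# A blurred vertical edge: the ridge at column 2 survives,
# its weaker neighbors in columns 1 and 3 are suppressed
mag = np.array([[0, 1, 3, 1, 0]] * 3, dtype=float)
thin = nms_horizontal(mag)
```

This is how a several-pixel-wide gradient ridge is thinned down to the ideal single-pixel edge.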

4. Experimental process and results

% Read the image
img = imread('image_path'); % replace with the actual image path
imshow(img);
% Convert the image to grayscale
gray_img = rgb2gray(img);
% Gaussian filtering
sigma = 2; % standard deviation of the Gaussian filter
gaussian_filtered_img = imgaussfilt(gray_img, sigma);
% Compute gradients with the Sobel operator (the masks from Section 3)
kx = [-1 0 1; -2 0 2; -1 0 1];   % horizontal-direction mask G_x
ky = [-1 -2 -1; 0 0 0; 1 2 1];   % vertical-direction mask G_y
Gx = imfilter(double(gaussian_filtered_img), kx, 'replicate');
Gy = imfilter(double(gaussian_filtered_img), ky, 'replicate');
sobel_filtered_img = sqrt(Gx.^2 + Gy.^2);
sobel_filtered_img = sobel_filtered_img / max(sobel_filtered_img(:)); % normalize to [0, 1]
% Display the original image and the gradient-magnitude image
figure; imshow(img); title('Original image');
figure; imshow(sobel_filtered_img); title('Gradient magnitude');

% Non-maximum suppression: compare each pixel with its two neighbors
% along the gradient direction and keep only the local maxima
grad_dir = mod(atan2(Gy, Gx), pi);   % gradient direction folded into [0, pi)
nms_img = zeros(size(sobel_filtered_img));
for i = 2:size(sobel_filtered_img,1)-1
    for j = 2:size(sobel_filtered_img,2)-1
        a = grad_dir(i,j);
        if a < pi/8 || a >= 7*pi/8          % horizontal gradient
            left  = sobel_filtered_img(i, j-1);
            right = sobel_filtered_img(i, j+1);
        elseif a < 3*pi/8                   % 45-degree diagonal
            left  = sobel_filtered_img(i-1, j-1);
            right = sobel_filtered_img(i+1, j+1);
        elseif a < 5*pi/8                   % vertical gradient
            left  = sobel_filtered_img(i-1, j);
            right = sobel_filtered_img(i+1, j);
        else                                % 135-degree diagonal
            left  = sobel_filtered_img(i-1, j+1);
            right = sobel_filtered_img(i+1, j-1);
        end
        % Keep the pixel only if it is a local maximum along the gradient
        if sobel_filtered_img(i,j) >= left && sobel_filtered_img(i,j) >= right
            nms_img(i,j) = sobel_filtered_img(i,j);
        end
    end
end

% Display the image after non-maximum suppression
figure; imshow(nms_img); title('After non-maximum suppression');
% Hysteresis thresholding
low_threshold = 0.05;
high_threshold = 0.2;
edge_map = zeros(size(nms_img));
edge_map(nms_img > high_threshold) = 1;
for i=2:size(nms_img,1)-1
    for j=2:size(nms_img,2)-1
        if (nms_img(i,j) > low_threshold) && (edge_map(i,j) == 0)
            if any(any(edge_map(i-1:i+1, j-1:j+1)))
                edge_map(i,j) = 1;
            end
        end
    end
end
imshow(edge_map);

% Suppress isolated weak edges
isolated_threshold = 1;
for i=2:size(edge_map,1)-1
    for j=2:size(edge_map,2)-1
        if (edge_map(i,j) == 1) && (sum(sum(edge_map(i-1:i+1,j-1:j+1))) <= isolated_threshold)
            edge_map(i,j) = 0;
        end
    end
end
imshow(edge_map);
% Lane line detection
% [m,n] = size(edge_map)
% x = [1600,0,0,1600];
% y = [1000,0,0,1000];
% mask = poly2mask(x,y,m,n);
% new_img = mask.*edge_map;
% imshow(new_img);

new_img = edge_map;
lines = HoughStraightRecognize(new_img);

figure; imshow(img); hold on;
for k = 1:length(lines)
    xy = [lines(k).point1; lines(k).point2];
    plot(xy(:,1), xy(:,2), 'LineWidth', 2, 'Color', 'green');
    plot(xy(1,1), xy(1,2), 'x', 'LineWidth', 2, 'Color', 'yellow');
    plot(xy(2,1), xy(2,2), 'x', 'LineWidth', 2, 'Color', 'red');
end

function lines = HoughStraightRecognize(BW)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Detects straight lines with the Hough transform.
% input:  a binary edge image (nonzero pixels are treated as edge points)
% output: a struct array of line segments; each entry holds the two
%         endpoints of the segment and its polar coordinates [rho, theta]
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    [H,T,R] = hough(BW);
    P = houghpeaks(H, 5, 'threshold', ceil(0.3*max(H(:))));
    lines = houghlines(BW, T, R, P, 'FillGap', 5, 'MinLength', 7);
    % FillGap:   gaps shorter than this are bridged, merging the two segments
    % MinLength: segments shorter than this are discarded
end

The output image obtained during the experiment is shown in the figure below:


Origin blog.csdn.net/Yu_Cblog/article/details/131751019