OpenCV development notes (forty-three): Red Fat Man takes you in 8 minutes to an in-depth understanding of the cumulative probability Hough line transform (pictures and text + easy to understand + program source code)

This article is original and may not be reproduced without permission.
The original blogger's blog address: https://blog.csdn.net/qq21497936
The original blogger's blog navigation: https://blog.csdn.net/qq21497936/article/details/102478062
The blog address of this article: https://blog.csdn.net/qq21497936/article/details/105544972
Dear readers: knowledge is infinite while one's ability is limited; either adjust the requirements, find a professional, or research it yourself.

Table of Contents

Foreword

Demo

Hough Transform

Overview

Cumulative probability Hough line transformation

Overview

Principle

1. A straight line in two-dimensional image space can be represented by two variables

2. In general, for a point (x0, y0), the set of all straight lines passing through it can be defined uniformly

3. For a given point (x0, y0), plotting all the straight lines passing through it in the polar-radius / polar-angle (r-θ) plane gives a sinusoidal curve

4. Performing the above operation on all points in the image gives a family of curves

5. From the above, a straight line can be detected by finding the number of curves that intersect at a point in the plane θ-r.

Prototype of cumulative probability Hough transform function

Supplementary function prototypes

Demo source code

Project template: corresponding version number v1.38.0

Reference blog post



 

Foreword

The Red Fat Man is here again!!!

After noise reduction and edge detection comes feature extraction. One of the basic methods for recognizing shapes is the Hough transform, a feature extraction technique in image processing. The previous article explained the standard Hough line transform; this chapter explains the cumulative probability Hough line transform, one of the Hough line transforms.

Here the principle is explained again to deepen understanding, because the 8 minutes spent on the Hough transform in the previous article were not enough. The Hough transform is one of the important detection methods, and the demo below also compares the detection effects of the two line transforms.

 

Demo

 

Hough Transform

Overview

      The Hough Transform is a feature extraction technique in image processing. The process calculates local maxima of the accumulated results in a parameter space to obtain a set of parameters conforming to a specific shape as the Hough transform result.

      The classic Hough transform is used to detect straight lines in an image. Later, the Hough transform was extended to the recognition of objects of arbitrary shape, mostly circles and ellipses.

      The Hough transform uses a transformation between two coordinate spaces to map a curve or straight line of the same shape in one space to a point in the other coordinate space, where it forms a peak, thereby turning the problem of detecting an arbitrary shape into a problem of finding statistical peaks.

      The Hough transform in OpenCV is divided into two types (line transform and circle transform), and the line transform is itself divided into three types, as shown below:

Cumulative probability Hough line transformation

Overview

      The Hough line transform, as the name suggests, deals with straight lines; it is a method for finding straight lines. Note in particular that before using the Hough line transform, the image must be preprocessed: noise reduction and edge detection. The Hough line transform only looks for straight lines and can only work on an edge binary image, so the input must be a binarized (single-channel 8-bit) image.

The Hough line transform is divided into three types, as shown below:

The Hough line transform will find a large number of lines, but some of them are actually useless.

Principle

1. A straight line in two-dimensional image space can be represented by two variables

  • In the Cartesian coordinate system: by the slope and intercept parameters (m, b);

  • In the polar coordinate system (the representation used by the Hough transform): by the polar radius and polar angle parameters (r, θ);

      The Hough transform uses polar coordinates to represent straight lines.

      So the expression of the straight line is: y = (-cosθ / sinθ) · x + (r / sinθ)

      The formula for r is: r = x · cosθ + y · sinθ

2. In general, for a point (x0, y0), the set of all straight lines passing through it can be defined uniformly

      rθ= x0 * cosθ + y0 * sinθ

      Each pair (rθ, θ) represents a straight line passing through the point (x0, y0).

3. For a given point (x0, y0), plotting all the straight lines passing through it in the polar-radius / polar-angle (r-θ) plane gives a sinusoidal curve

For example, for a given point x0 = 8 and y0 = 6, the calculation principle is as follows:

You can get the following curve:

Only the part satisfying certain conditions is drawn, such as r > 0 and 0 < θ < 2π (note: θ = 0 corresponds to a vertical line, θ = π/2 to a horizontal line);
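A small sketch of how such a curve can be generated (the sampling step here is only for illustration): keep the point (x0, y0) = (8, 6) fixed, sweep θ, and compute r = x0·cosθ + y0·sinθ; plotting the resulting (θ, r) pairs gives the sinusoid described above.

#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.14159265358979;
    const double x0 = 8.0, y0 = 6.0;            // the fixed point from the example
    // Sweep theta from 0 to 2*pi in steps of 0.1 rad and print the (theta, r) pairs
    for (double theta = 0.0; theta < 2 * pi; theta += 0.1)
    {
        double r = x0 * std::cos(theta) + y0 * std::sin(theta);
        if (r > 0)                              // only the r > 0 part of the curve is drawn
            std::printf("theta=%.2f  r=%.2f\n", theta, r);
    }
    return 0;
}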

4. Performing the above operation on all points in the image gives a family of curves

If the curves obtained from two different points intersect in the θ-r plane, the two points lie on the same straight line.

For example, continuing the above example, plot the curves for the points x1 = 4, y1 = 9 and x2 = 12, y2 = 3 as follows:

These three curves intersect at the point (0.925, 9.6) in the θ-r plane. That coordinate is the parameter pair (θ, r) of the straight line that passes through (x0, y0), (x1, y1) and (x2, y2) in the image plane. In other words, for each point we compute the distance r at every angle θ; after plotting the curves, if the three curves cross at one point, then the three points lie on the same straight line, as shown in the following figure:

(The number of curves intersecting at one point must exceed the threshold, i.e. the minimum number of points on the same straight line. For example, in the schematic above, if it is assumed that 3 points are enough to form a straight line, then a straight line can be detected.)
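As a quick numerical check of the schematic above (a small standalone sketch, not part of the demo), evaluating r = x·cosθ + y·sinθ at θ = 0.925 for the three points gives roughly the same polar radius r ≈ 9.6, which is exactly why their curves cross at one point:

#include <cmath>
#include <cstdio>

int main()
{
    const double theta = 0.925;                         // the common angle from the schematic
    const double pts[3][2] = { {8, 6}, {4, 9}, {12, 3} };
    for (const auto &p : pts)
    {
        // r = x*cos(theta) + y*sin(theta); all three evaluate to roughly 9.6
        double r = p[0] * std::cos(theta) + p[1] * std::sin(theta);
        std::printf("(%g, %g) -> r = %.2f\n", p[0], p[1], r);
    }
    return 0;
}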

5. From the above, a straight line can be detected by finding the number of curves that intersect at a point in the plane θ-r.

The more curves that intersect at one point, the more points make up the straight line represented by that intersection. In general, we define how many curves must intersect at a point by setting a threshold on the number of points on a line; when the threshold is reached, a straight line is detected.

This is what the Hough transform does: it tracks the intersections between the curves corresponding to every point in the image. If the number of curves passing through an intersection exceeds the threshold (the minimum number of points on the same line), the parameter pair (θ, rθ) represented by that intersection can be considered a straight line in the original image.
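To make the vote-counting idea concrete, here is a minimal sketch of a naive accumulator (not the blog's demo and not how OpenCV is implemented internally; the bin sizes and the input file name are assumptions for illustration): every edge pixel votes for all (θ, r) bins of lines passing through it, and bins whose vote count exceeds the threshold are reported as detected lines.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>

int main()
{
    // Assumed input: a binary edge image (e.g. the output of cv::Canny)
    cv::Mat edges = cv::imread("edges.png", cv::IMREAD_GRAYSCALE);
    if (edges.empty()) return -1;

    const int thetaBins = 180;                              // 1 degree per angle bin
    const double maxR = std::hypot(edges.cols, edges.rows);
    const int rBins = cvCeil(2 * maxR) + 1;                 // r in [-maxR, maxR], 1 pixel per bin
    cv::Mat acc = cv::Mat::zeros(thetaBins, rBins, CV_32S); // the accumulator plane

    // Voting: every edge pixel adds one vote for each sampled angle
    for (int y = 0; y < edges.rows; ++y)
        for (int x = 0; x < edges.cols; ++x)
            if (edges.at<uchar>(y, x) > 0)
                for (int t = 0; t < thetaBins; ++t)
                {
                    double theta = t * CV_PI / thetaBins;
                    double r = x * std::cos(theta) + y * std::sin(theta);
                    acc.at<int>(t, cvRound(r + maxR))++;    // shift r so the index is non-negative
                }

    // Peak picking: report every (theta, r) bin whose vote count exceeds the threshold
    const int threshold = 100;                              // minimum number of points on one line
    for (int t = 0; t < thetaBins; ++t)
        for (int rIdx = 0; rIdx < rBins; ++rIdx)
            if (acc.at<int>(t, rIdx) > threshold)
                std::cout << "line: theta=" << t * CV_PI / thetaBins
                          << " r=" << rIdx - maxR << std::endl;
    return 0;
}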

Prototype of cumulative probability Hough transform function

void HoughLinesP( InputArray image,
                OutputArray lines,
                double rho,
                double theta,
                int threshold,
                double minLineLength = 0,
                double maxLineGap = 0 );
  • Parameter 1: image of type InputArray, the 8-bit, single-channel binary source image. Any source image can be converted to this format (for example by edge detection) before being passed in;
  • Parameter 2: lines of type OutputArray, the output vector of line segments detected after calling HoughLinesP. Each segment is a 4-element vector (x1, y1, x2, y2), where (x1, y1) and (x2, y2) are the two endpoints of the detected segment;
  • Parameter 3: rho of type double, the distance resolution of the accumulator in pixels. With a value of 1, distances are quantized in 1-pixel steps (1, 2, 3, ...); with a value of 2, the steps are 2, 4, 6, ... and distances such as 1 and 3 fall into the nearest bin;
  • Parameter 4: theta of type double, the angular resolution of the accumulator in radians, i.e. the step between the candidate line angles sampled starting from 0 (0 represents a vertical line, π/2 a horizontal line). For example, CV_PI/180 samples one candidate angle per degree;
  • Parameter 5: threshold of type int, the accumulator threshold, i.e. the number of votes a candidate line must collect in the accumulator plane to be identified as a straight line. Only segments with more than threshold votes are returned;
  • Parameter 6: minLineLength of type double, default 0, the minimum line segment length; segments shorter than this are not returned;
  • Parameter 7: maxLineGap of type double, default 0, the maximum allowed gap between points on the same line for them to be linked into one segment;
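A minimal usage sketch based on the prototype above (the file name and parameter values here are only assumptions for illustration): run edge detection first, then pass the binary edge image to HoughLinesP and draw the returned segments.

#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat src = cv::imread("input.jpg");            // assumed input image
    if (src.empty()) return -1;

    cv::Mat gray, edges;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
    cv::Canny(gray, edges, 100, 200);                 // HoughLinesP needs a binary edge image

    std::vector<cv::Vec4i> lines;                     // each element is (x1, y1, x2, y2)
    cv::HoughLinesP(edges, lines,
                    1,                                // rho: 1 pixel distance resolution
                    CV_PI / 180,                      // theta: 1 degree angular resolution
                    100,                              // threshold: minimum votes
                    50,                               // minLineLength
                    10);                              // maxLineGap

    for (const cv::Vec4i &l : lines)
        cv::line(src, cv::Point(l[0], l[1]), cv::Point(l[2], l[3]),
                 cv::Scalar(0, 0, 255), 1, cv::LINE_AA);

    cv::imshow("HoughLinesP", src);
    cv::waitKey(0);
    return 0;
}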

Supplementary function prototypes

cvRound(): returns the integer closest to the argument, i.e. rounding to the nearest integer;
cvFloor(): returns the largest integer not greater than the argument, i.e. rounding down;
cvCeil(): returns the smallest integer not less than the argument, i.e. rounding up;
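For example, assuming the OpenCV core header is included (the values are chosen only for illustration):

#include <opencv2/core.hpp>

int a = cvRound(2.6);   // 3, rounds to the nearest integer
int b = cvFloor(2.6);   // 2, rounds down
int c = cvCeil(2.1);    // 3, rounds up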

 

Demo source code

void OpenCVManager::testHoughLinesP()
{
    QString fileName1 =
            "E:/qtProject/openCVDemo/openCVDemo/modules/openCVManager/images/16.jpg";
    cv::Mat srcMat = cv::imread(fileName1.toStdString());
    int width = 400;
    int height = 300;

    cv::resize(srcMat, srcMat, cv::Size(width, height));
    cv::Mat colorMat = srcMat.clone();

    cv::String windowName = _windowTitle.toStdString();
    cvui::init(windowName);

    cv::Mat windowMat = cv::Mat(cv::Size(srcMat.cols * 2, srcMat.rows * 3),
                                srcMat.type());

    cv::cvtColor(srcMat, srcMat, CV_BGR2GRAY);

    int threshold1 = 200;
    int threshold2 = 100;
    int apertureSize = 1;

    int rh0 = 1;                            // rho, default 1 pixel
    int theta = 1;                          // theta, default 1 degree
    int threshold = 100;                    // by default, 100 points on the same line are required

    int minLineLength = 50;                 // minimum length of a detected line
    int maxLineGap = 10;                    // maximum gap between points on the same line

    while(true)
    {
        qDebug() << __FILE__ << __LINE__;
        windowMat = cv::Scalar(0, 0, 0);

        cv::Mat mat;
        cv::Mat dstMat;
        cv::Mat grayMat;

        // srcMat was converted to grayscale above; convert it back to BGR for display
        // and copy the source image into the left part of the window first
        cv::Mat leftMat = windowMat(cv::Range(0, srcMat.rows),
                                    cv::Range(0, srcMat.cols));
        cv::cvtColor(srcMat, grayMat, CV_GRAY2BGR);
        cv::addWeighted(leftMat, 0.0f, grayMat, 1.0f, 0.0f, leftMat);

        {
            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 0 + 20,
                         "threshold1");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 0 + 50,
                           200,
                           &threshold1,
                           0,
                           255);
            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 0 + 100, "threshold2");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 0 + 130,
                           200,
                           &threshold2,
                           0,
                           255);
            cv::Canny(srcMat, dstMat, threshold1, threshold2, apertureSize * 2 + 1);
            // copy
            mat = windowMat(cv::Range(srcMat.rows * 1, srcMat.rows * 2),
                            cv::Range(srcMat.cols * 0, srcMat.cols * 1));

            cv::cvtColor(dstMat, grayMat, CV_GRAY2BGR);
            cv::addWeighted(mat, 0.0f, grayMat, 1.0f, 0.0f, mat);

            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 1 + 20 - 120,
                         "rho / 100");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 1 + 50 - 120,
                           200,
                           &rh0,
                           1,
                           1000);
            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 1 + 100 - 120,
                         "theta = value / 2");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 1 + 130 - 120,
                           200,
                           &theta,
                           1,
                           720);
            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 1 + 180 - 120,
                         "min points");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 1 + 210 - 120,
                           200,
                           &threshold,
                           2,
                           300);

            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 1 + 260 - 120,
                         "minLineLength = value / 10");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 1 + 290 - 120,
                           200,
                           &minLineLength,
                           0,
                           1000);
            cvui::printf(windowMat,
                         width * 1 + 100,
                         height * 1 + 340 - 120,
                         "maxLineGap = value / 10");
            cvui::trackbar(windowMat,
                           width * 1 + 100,
                           height * 1 + 370 - 120,
                           200,
                           &maxLineGap,
                           0,
                           1000);

            // After edge detection, run the Hough line detection
            // Use the standard Hough line transform to detect lines over the full angle range
            std::vector<cv::Vec2f> lines;
            cv::HoughLines(dstMat,                  // input: 8-bit binary image
                           lines,                   // output lines: std::vector<cv::Vec2f>
                           rh0 / 100.0f,            // distance resolution in pixels
                           theta / 720.0 * CV_PI,   // angular resolution in radians
                           threshold,               // minimum number of votes (points) required
                           0,                       // srn: 0 for the standard Hough transform
                           0,                       // stn: 0 for the standard Hough transform
                           0,                       // minimum detection angle: 0
                           CV_PI);                  // maximum detection angle: CV_PI, i.e. 180°

            // Draw every detected line on the image
            dstMat = colorMat.clone();
            qDebug() << __FILE__ << __LINE__ << lines.size();
            for(int index = 0; index < lines.size(); index++)
            {
                float rho = lines[index][0];
                float theta = lines[index][1];
                cv::Point pt1;
                cv::Point pt2;
                double a = cos(theta);
                double b = sin(theta);
                double x0 = a * rho;
                double y0 = b * rho;
                // Compute two endpoints far along the line
                pt1.x = cvRound(x0 + 1000 * (-b));
                pt1.y = cvRound(y0 + 1000 * (a));
                pt2.x = cvRound(x0 - 1000 * (-b));
                pt2.y = cvRound(y0 - 1000 * (a));
                // Draw the line
                cv::line(dstMat, pt1, pt2, cv::Scalar(0, 0, 255), 1, cv::LINE_AA);
            }
            // copy
            mat = windowMat(cv::Range(srcMat.rows * 2, srcMat.rows * 3),
                            cv::Range(srcMat.cols * 0, srcMat.cols * 1));
            cv::addWeighted(mat, 0.0f, dstMat, 1.0f, 0.0f, mat);


            // Use the probabilistic Hough line transform to detect line segments of all lengths
            cv::Canny(srcMat, dstMat, threshold1, threshold2, apertureSize * 2 + 1);
            std::vector<cv::Vec4i> lines2;
            cv::HoughLinesP(dstMat,                 // input: 8-bit binary image
                            lines2,                 // output segments: std::vector<cv::Vec4i>
                            rh0 / 100.0f,           // distance resolution in pixels
                            theta / 720.0 * CV_PI,  // angular resolution in radians
                            threshold,              // minimum number of votes (points) required
                            minLineLength / 10.0f,  // minimum length of a detected segment
                            maxLineGap / 10.0f);    // maximum gap between points on the same line
            // Draw every detected line segment on the image
            dstMat = colorMat.clone();
            for(int index = 0; index < lines2.size(); index++)
            {
                // Draw the segment
                cv::Vec4i line = lines2[index];
                cv::line(dstMat,
                         cv::Point(line[0], line[1]),
                         cv::Point(line[2], line[3]),
                         cv::Scalar(0, 0, 255),
                         1,
                         cv::LINE_AA);
            }
            // copy
            mat = windowMat(cv::Range(srcMat.rows * 2, srcMat.rows * 3),
                            cv::Range(srcMat.cols * 1, srcMat.cols * 2));
            cv::addWeighted(mat, 0.0f, dstMat, 1.0f, 0.0f, mat);
        }
        // Update cvui
        cvui::update();
        // Show the window
        cv::imshow(windowName, windowMat);
        // Press ESC to exit
        if(cv::waitKey(25) == 27)
        {
            break;
        }
    }
}

 

Project template: corresponding version number v1.38.0

      Corresponding version number v1.38.0

 

Reference blog post

      https://blog.csdn.net/shenziheng1/article/details/75307410

 


