[QT Course Design] Six: Implementation of Coin Detection Function


Preface

In the previous article, the video-processing work was completed; most of that processing (smoothing and the like) was done with OpenCV. This article implements the last feature, coin detection.

Coin detection

This article uses two approaches to coin detection:

1. Watershed algorithm solution

Principle

The classic watershed algorithm was proposed by L. Vincent in PAMI in 1991 [1]. The traditional watershed segmentation method is a mathematical-morphology segmentation method based on topology theory. Its basic idea is to regard the image as a topographic surface, where the gray value of each pixel represents the altitude of that point. Each local minimum and its area of influence is called a catchment basin, and the boundaries between catchment basins form the watersheds. The concept and formation of watersheds can be illustrated by simulating an immersion process: a small hole is pierced at each local minimum, and the whole model is slowly immersed in water. As the immersion deepens, the influence domain of each local minimum expands outward; dams are built wherever water from different basins would meet, as shown in the figure below, and these dams form the watersheds.
(Figure: dam construction during the immersion process of the watershed algorithm)
However, the direct watershed algorithm based on gradient images easily leads to over-segmentation. The main reason is that the input image contains too many local minima, which produce many small catchment basins, so the segmented regions no longer correspond to meaningful areas of the image. Similar regions of the segmentation result therefore have to be merged.

Principle of improved watershed algorithm

Because the traditional watershed algorithm tends to over-segment, OpenCV provides an improved watershed algorithm that uses a set of predefined markers to guide the segmentation. OpenCV's watershed function cv::watershed requires a labeled marker image as input: its pixels are 32-bit signed integers (CV_32S), and each non-zero value is a label. The idea is to mark some pixels whose region is already known; starting from these initial labels, the watershed algorithm then decides which region every other pixel belongs to.
Compared with the traditional gradient-based watershed, where too many local minima produce too many watershed lines, the marker-based watershed starts its flooding process from the predefined marker pixels, which largely overcomes over-segmentation. In essence, the marker-based improvement uses prior knowledge to help the segmentation, so the key lies in obtaining an accurate marker image, that is, in accurately labeling the foreground objects and the background.
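
To make the marker requirement concrete, the following is a minimal sketch (not the project code, which appears further below) of how a CV_32S marker image can be seeded before calling cv::watershed; the seed rectangles are arbitrary placeholders, and the same OpenCV headers and namespace as the project code are assumed:

// Minimal marker-seeding sketch for cv::watershed (illustrative only).
// Assumes img is a BGR image at least 40x40 pixels; the seeds are placeholders.
Mat markers = Mat::zeros(img.size(), CV_32S);
markers(Rect(0, 0, 20, 20)).setTo(Scalar(1));                        // seed for the known background
markers(Rect(img.cols / 2, img.rows / 2, 20, 20)).setTo(Scalar(2));  // seed for one known object
// Pixels left at 0 are "unknown"; watershed assigns each of them a label
// and writes -1 at the boundaries between regions.
watershed(img, markers);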

Technical implementation route

(Figure: technical implementation flowchart)

Implementation code

// Watershed segmentation
void MainWindow::on_pushButton_7_clicked()
{

    Mat gray, thresh;
    Mat img = coinimage;
    // Convert to grayscale
    cvtColor(img, gray, COLOR_BGR2GRAY);
    // Binarize with Otsu's threshold (inverted)
    threshold(gray, thresh, 0, 255, THRESH_BINARY_INV + THRESH_OTSU);
    // Intermediate images for the watershed pipeline
    Mat opening; Mat sure_bg;
    Mat sure_fg; Mat unknow;
    Mat dist_transform;
    double maxValue;
    // noise removal
    Mat kernel = Mat::ones(3, 3, CV_8U);
    morphologyEx(thresh, opening, MORPH_OPEN, kernel);

    // sure background area
    dilate(opening, sure_bg, kernel, Point(-1, -1), 3);

    // Finding sure foreground area
    distanceTransform(opening, dist_transform, DIST_L2, 5);
    minMaxLoc(dist_transform, 0, &maxValue, 0, 0);
    threshold(dist_transform, sure_fg, 0.7*maxValue, 255, 0);

    // Finding unknown region
    sure_fg.convertTo(sure_fg, CV_8U);
    subtract(sure_bg, sure_fg, unknow);

    // Marker labelling
    Mat markers;
    connectedComponents(sure_fg, markers);

    // Add one to all labels so that sure background is not 0, but 1
    markers = markers + 1;

    // Now, mark the region of unknown with zero
    markers.setTo(0, unknow);

    // Apply watershed; boundary pixels are marked with -1 in markers
    Mat mask;
    watershed(img, markers);
    compare(markers, -1, mask, CMP_EQ);
    img.setTo(Scalar(0, 0, 255), mask);


    QImage outputpic =MatToQImage(img);
    outputpic=watermark(outputpic);
    ui->coinlabel->setPixmap(QPixmap::fromImage(ImageSetSize(outputpic,ui->coinlabel)));
}
Effect comparison

(Figures: original coin image and the watershed segmentation result)
It can be seen that, besides the coin outlines, the image contains many other contour lines, so the result of this method is not satisfactory.

2. Hough transform for circle detection

Principle of the Hough circle transform

The Hough circle transform has to determine three parameters: the circle radius and the circle center (x and y coordinates). The strategy adopted in OpenCV is a two-round screening: the first round finds the locations where circles may exist (the circle centers); the second round then determines the radius for each candidate based on the results of the first round.
Similar to the two parameters used to decide whether to accept a straight line, the minimum line length (minLineLength) and the maximum allowed pixel gap (maxLineGap), the Hough circle transform also has several parameters used to decide whether to accept a circle: the minimum distance between circle centers and the minimum and maximum radius of a circle.
A big problem with this algorithm is that the radius range has to be chosen manually in advance.
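
To make the parameters concrete, the HoughCircles call used in the code below can be read as follows; the numeric values are simply the ones chosen for this project:

// Annotated form of the HoughCircles call used below.
HoughCircles(grayImg, circles,
             HOUGH_GRADIENT,  // detection method: gradient-based voting
             1,               // dp: inverse ratio of accumulator resolution to image resolution
             10,              // minDist: minimum distance between detected circle centers
             110,             // param1: upper threshold for the internal Canny edge detector
             hough_value,     // param2: accumulator threshold (smaller -> more, possibly false, circles)
             10,              // minRadius: smallest acceptable radius (chosen manually)
             100);            // maxRadius: largest acceptable radius (chosen manually)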

Code

// Hough circle detection

void MainWindow::on_HoughBtn_clicked()
{
    Mat src = coinimage;
    Mat grayImg;
    // Convert to grayscale
    cvtColor(src, grayImg, COLOR_BGR2GRAY);
    vector<Vec3f> circles;
    int hough_value = 80;  // accumulator threshold (param2)
    HoughCircles(grayImg, circles, HOUGH_GRADIENT, 1, 10, 110, hough_value, 10, 100);
    Mat houghcircle = src.clone();
    // Draw each detected circle: circles[i] = (center_x, center_y, radius)
    for (size_t i = 0; i < circles.size(); i++) {
        circle(houghcircle, Point(circles[i][0], circles[i][1]), circles[i][2], Scalar(0, 0, 255), 2);
    }
    QImage outputpic = MatToQImage(houghcircle);
    outputpic = watermark(outputpic);
    ui->coinlabel->setPixmap(QPixmap::fromImage(ImageSetSize(outputpic, ui->coinlabel)));

}

Effect comparison


(Figure: Hough circle detection result)

Examples of poor detection results:

(Figures: cases where the Hough circle detection performs poorly)

Ideas for improvement

Currently, a YOLO + PyTorch machine-learning approach to segmenting the image is being considered; it could not be completed due to time constraints. If work continues during the winter vacation, this series will be updated.

Watermark

Ideas

The principle of adding a watermark is actually very simple: load the watermark image and draw its pixels on top of the original image. The watermark image itself is not shown here because it involves some personal privacy.

Code

// Watermark
QImage MainWindow::watermark(QImage img)
{
    QImage simage("E:/2022QT/Work/name3.png"); // replace this with your own watermark image path

    int swidth = simage.width();
    int sheight = simage.height();
    int r, g, b;

    // Copy every black pixel of the watermark image onto the target image;
    // all other pixels of the target image are left unchanged.
    for (int i = 0; i < sheight; ++i) {
        for (int j = 0; j < swidth; ++j) {
            QColor oldcolor2 = QColor(simage.pixel(j, i));
            r = oldcolor2.red();
            g = oldcolor2.green();
            b = oldcolor2.blue();

            if (r == 0 && g == 0 && b == 0) {
                img.setPixelColor(j, i, qRgb(0, 0, 0));
            }
        }
    }
   return img;

}
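
As a side note, if the watermark PNG had a transparent background, the same overlay could be done with a single QPainter call instead of the per-pixel loop above. A minimal sketch (the function name watermarkPainter is hypothetical, and the path is the same placeholder as above):

// Alternative watermark overlay using QPainter (sketch only; assumes the
// watermark image is transparent wherever nothing should be drawn).
QImage MainWindow::watermarkPainter(QImage img)
{
    QImage simage("E:/2022QT/Work/name3.png"); // replace with your own watermark path
    QPainter painter(&img);                    // requires #include <QPainter>
    painter.drawImage(0, 0, simage);           // default SourceOver mode keeps img where simage is transparent
    return img;
}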
