Image contour edge gradient template matching | OpenCvSharp

1. Background

OpenCV ships several built-in template matching solutions, including TemplateMatch, ShapeMatch, CompareHist, and other algorithms. In practice they show many defects: differences in size, angle, and grayscale distribution all cause problems during matching. To improve the accuracy and applicability of template matching, I found, through an online search, a new template matching algorithm based on edge gradients.

2. Principle

Collect the gray-level gradients in the X and Y directions at each position along the template edge, then match them against the X/Y gradients of all pixels in the valid region of the query image (a gradient NCC algorithm), and record the locations whose score passes the qualifying threshold. Enough talk, on to the details!

3. Algorithm

3.1 Data collection

The Sobel operator is used to compute the first-order derivatives in the X and Y directions of each pixel's gray level in both the template and the query image, along with the combined gradient magnitude at each point. This is more efficient when combined with contour extraction or Canny edge detection and an image pyramid. At the same time, record a datum point Pb among the valid points collected from the template, and each valid point's coordinates P(col - Pb.x, row - Pb.y) relative to that datum point.
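To make the data-collection step concrete, here is a minimal pure-Python sketch (no OpenCV; `sobel_at` and `collect_edge_info` are illustrative names of mine, not from the article) of computing Sobel X/Y derivatives, the inverted gradient magnitude, and datum-relative coordinates for a few edge points:

```python
import math

# 3x3 Sobel kernels for the X and Y first derivatives.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_at(img, row, col, kernel):
    """3x3 convolution at one pixel (callers must avoid border pixels)."""
    return sum(kernel[i][j] * img[row - 1 + i][col - 1 + j]
               for i in range(3) for j in range(3))

def collect_edge_info(img, edge_points):
    """For each (col, row) edge point, record dx, dy, 1/|g|, and the datum-relative position."""
    # The datum point is the top-left corner of the edge points' bounding box.
    datum = (min(p[0] for p in edge_points), min(p[1] for p in edge_points))
    infos = []
    for (col, row) in edge_points:
        dx = sobel_at(img, row, col, SOBEL_X)
        dy = sobel_at(img, row, col, SOBEL_Y)
        mag = math.hypot(dx, dy)
        infos.append({
            "dx": dx, "dy": dy,
            "inv_mag": 1.0 / mag if mag else 0.0,  # stored inverted, as in section 4.1
            "rel_pos": (col - datum[0], row - datum[1]),
        })
    return infos

# A tiny 5x5 image with a vertical step edge between columns 1 and 2.
img = [[0, 0, 255, 255, 255]] * 5
infos = collect_edge_info(img, [(2, 1), (2, 2), (2, 3)])
print(infos[0]["dx"], infos[0]["dy"])  # strong X gradient, zero Y gradient
print(infos[0]["rel_pos"])             # (0, 0) -- first point is the datum
```

Along a vertical edge, only the X derivative fires, which is exactly the directional information the matcher later compares.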

3.2 Data matching

Using the template data collected above, the gradient at each point of the query image is matched against the gradient of each valid point in the template, and a matching score is recorded. The matching algorithm is NCC template matching on gradients; the formula can be found everywhere, so I will just state one form:

score(u, v) = (1/n) · Σᵢ (Txᵢ·Sxᵢ + Tyᵢ·Syᵢ) / (|Tᵢ|·|Sᵢ|)

where T denotes the gradient data collected from the template, S denotes the data collected from the image to be queried at candidate position (u, v), and n is the number of valid template points.
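As a sanity check on the formula, here is a small sketch in Python (the function name `gradient_ncc` is mine) of the per-position score:

```python
import math

def gradient_ncc(template_grads, source_grads):
    """Score in [-1, 1]; 1.0 means every gradient pair is perfectly aligned."""
    total = 0.0
    for (tx, ty), (sx, sy) in zip(template_grads, source_grads):
        t_mag = math.hypot(tx, ty)
        s_mag = math.hypot(sx, sy)
        if t_mag and s_mag:
            # Normalized dot product of one template/source gradient pair.
            total += (tx * sx + ty * sy) / (t_mag * s_mag)
    return total / len(template_grads)

t = [(3.0, 4.0), (0.0, 2.0), (-1.0, 1.0)]
print(gradient_ncc(t, t))                         # ~1.0: identical gradients
print(gradient_ncc(t, [(-x, -y) for x, y in t]))  # ~-1.0: opposite gradients
```

Because each term is normalized by both magnitudes, the score depends only on gradient direction, which is what makes the method robust to grayscale changes.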

3.3 Matching optimization

Since the query image may contain a very large number of pixels, the partial matching score computed at each pixel can be checked against a minimum score threshold Smin to improve efficiency. A plain fixed threshold, however, may discard candidates whose early template points match poorly even though the later points match well. Therefore a greediness parameter is introduced into the threshold computation, balancing matching safety against the efficiency gain.
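A minimal sketch of the early-termination rule, assuming the standard normalized-threshold formulation (`keep_going` and its parameter names are illustrative, not from the article): the running average score after the k-th of n template points must stay above a threshold that starts low and rises toward min_score as k grows.

```python
def keep_going(partial_sum, k, n, min_score, greediness):
    """Return True if matching should continue after the k-th template point."""
    norm_min = min_score / n
    if greediness < 1.0:
        norm_greed = ((1 - greediness * min_score) / (1 - greediness)) / n
        # Two branches: the "safe" ramp and the greedy ramp; take the lower one.
        threshold = min((min_score - 1) + norm_greed * k, norm_min * k)
    else:
        threshold = norm_min * k  # fully greedy: a hard proportional cut
    return partial_sum / k >= threshold

# n=100 points, min_score=0.9, greediness=0.8:
print(keep_going(0.5 * 10, 10, 100, 0.9, 0.8))  # True: weak start is tolerated
print(keep_going(0.3 * 50, 50, 100, 0.9, 0.8))  # False: halfway in, 0.3 cannot recover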

4. Code

4.1 Collecting template valid-point data

The code below obtains the gradient information along the image's precise contours. Finding valid points via contours is far more efficient than computing the gradient of every pixel in the template and then filtering; by comparison, the gradient values along the contours are also more pronounced.

static public List<ImageEdgePtInform> ImageContourEdgeInfomGet(Mat img, int margin, ref Size validSize, bool show = false)
{
    Mat _uImg = img.Clone();
    if (_uImg.Type() != MatType.CV_8UC1)
    {
        Cv2.CvtColor(_uImg, _uImg, ColorConversionCodes.BGR2GRAY);
    }
    int edgePointCount = 0;
    Point relaOrgPt = new Point(int.MaxValue, int.MaxValue);
    List<ImageEdgePtInform> resultEdgeInforms = new List<ImageEdgePtInform>();
    // Pre-processing: blur, then binarize
    Mat _blur = _uImg.GaussianBlur(new Size(3, 3), 0);
    Mat _threImg = _blur.Threshold(120, 255, ThresholdTypes.BinaryInv);
    // Find the contours and order them by area, largest first
    _threImg.FindContours(out Point[][] cnts, out HierarchyIndex[] hids, RetrievalModes.List, ContourApproximationModes.ApproxNone);
    cnts = cnts.Where(cnt => Cv2.ContourArea(cnt) > 3 && Cv2.ArcLength(cnt, false) > 3 && cnt.Length > 3).ToArray();
    cnts = cnts.OrderByDescending(cnt => Cv2.ContourArea(cnt)).ToArray();
    foreach (var cnt in cnts) edgePointCount += cnt.Length;
    // X- and Y-direction gradients of the template image
    Mat xDerivative = _uImg.Sobel(MatType.CV_64FC1, 1, 0, 3);
    Mat yDerivative = _uImg.Sobel(MatType.CV_64FC1, 0, 1, 3);
    // Barycenter orientation of the largest contour (hoisted out of the point loop)
    double barycentOrient = ContourBaryCenterOrientationGet(cnts[0]);
    unsafe
    {
        foreach (var cnt in cnts)
        {
            foreach (var pt in cnt)
            {
                ImageEdgePtInform ptInform = new ImageEdgePtInform();
                int row = pt.Y, col = pt.X;
                // Track the datum point (top-left corner of the contours' bounding box)
                relaOrgPt.X = Math.Min(relaOrgPt.X, col);
                relaOrgPt.Y = Math.Min(relaOrgPt.Y, row);
                // Track the bottom-right corner of the bounding box
                validSize.Width = Math.Max(validSize.Width, col);
                validSize.Height = Math.Max(validSize.Height, row);
                // X/Y gradient of the current point
                double dx = ((double*)xDerivative.Ptr(row))[col];
                double dy = ((double*)yDerivative.Ptr(row))[col];
                double mag = Math.Sqrt(dx * dx + dy * dy);
                ptInform.DerivativeX = dx;
                ptInform.DerivativeY = dy;
                // Magnitude is stored inverted so matching can multiply instead of divide
                ptInform.Magnitude = mag > 0 ? 1.0 / mag : 0;
                ptInform.RelativePos = pt;
                ptInform.BarycentOrient = barycentOrient;
                resultEdgeInforms.Add(ptInform);
            }
        }
        // Convert absolute positions to datum-relative coordinates
        foreach (var inf in resultEdgeInforms)
        {
            inf.RelativePos = new Point(inf.RelativePos.X - relaOrgPt.X, inf.RelativePos.Y - relaOrgPt.Y);
        }
        validSize = new Size(validSize.Width - relaOrgPt.X, validSize.Height - relaOrgPt.Y);
        if (show)
        {
            // Barycenter of the largest contour, via image moments
            Moments m = Cv2.Moments(cnts[0]);
            Point bcenter = new Point((int)(m.M10 / m.M00), (int)(m.M01 / m.M00));
            Mat _sImg = img.Clone();
            _sImg.DrawContours(cnts, -1, Scalar.Green, 1);
            _sImg.Circle(relaOrgPt, 2, Scalar.Red);
            _sImg.Rectangle(new Rect(relaOrgPt, validSize), Scalar.Blue, 1);
            _sImg = ImageBasicLineDrawing(_sImg, bcenter, orientation: ENUMS.IMAGE_PERMUTATION_TYPE.HORIZONTAL);
            _sImg = ImageBasicLineDrawing(_sImg, bcenter, orientation: ENUMS.IMAGE_PERMUTATION_TYPE.VERTICAL);
            _sImg.Circle(bcenter, 5, Scalar.Blue, -1);
            ImageShow("TemplateEdgeInform", _sImg);
        }
    }
    return resultEdgeInforms;
}

4.2 Matching template data with query image

Matching is fairly slow because the query image area is large, but the result is good. Note, however, that only sample regions whose relative position exactly matches the template are found.

static public double ImageEdgeMatch(Mat img, List<ImageEdgePtInform> queryEdgeInforms,
    double minScore, double greediness, Size validSize, out Point conformPoints)
{
    Mat _uImg = img.Clone();
    if (_uImg.Type() == MatType.CV_8UC3)
    {
        Cv2.CvtColor(_uImg, _uImg, ColorConversionCodes.BGR2GRAY);
    }
    // Pre-processing of the query image
    Cv2.GaussianBlur(_uImg, _uImg, new Size(3, 3), 0);
    int Width = _uImg.Width;
    int Height = _uImg.Height;
    int queryCount = queryEdgeInforms.Count;
    double partialScore = 0;
    double resultScore = 0;
    conformPoints = new Point();
    unsafe
    {
        // X/Y first derivatives of the query image (despite the names these hold
        // derivatives; torgMagnitude holds the combined gradient magnitude)
        Mat txMagnitude = _uImg.Sobel(MatType.CV_64FC1, 1, 0, 3);
        Mat tyMagnitude = _uImg.Sobel(MatType.CV_64FC1, 0, 1, 3);
        Mat torgMagnitude = Mat.Zeros(_uImg.Size(), MatType.CV_64FC1);
        // Normalized thresholds for early termination (see section 3.3)
        double normMinScore = minScore / (double)queryCount;
        double normGreediness = ((1 - greediness * minScore) / (1 - greediness)) / queryCount;
        for (int row = 0; row < Height; row++)
        {
            double* xMag = (double*)txMagnitude.Ptr(row);
            double* yMag = (double*)tyMagnitude.Ptr(row);
            double* oMag = (double*)torgMagnitude.Ptr(row);
            for (int col = 0; col < Width; col++)
            {
                double dx = xMag[col], dy = yMag[col];
                oMag[col] = Math.Sqrt(dx * dx + dy * dy);
            }
        }
        // Matching: slide the template's valid points over every pixel
        for (int row = 0; row < Height; row++)
        {
            for (int col = 0; col < Width; col++)
            {
                double sum = 0;
                double corSum = 0;
                partialScore = 0;
                for (int cn = 0; cn < queryCount; cn++)
                {
                    // Query-image position of this template point, relative to the datum
                    int relaX = queryEdgeInforms[cn].RelativePos.X + col;
                    int relaY = queryEdgeInforms[cn].RelativePos.Y + row;
                    if (relaY >= Height || relaX >= Width)
                    {
                        continue;
                    }
                    double txD = ((double*)txMagnitude.Ptr(relaY))[relaX];
                    double tyD = ((double*)tyMagnitude.Ptr(relaY))[relaX];
                    double tMag = ((double*)torgMagnitude.Ptr(relaY))[relaX];
                    double qxD = queryEdgeInforms[cn].DerivativeX;
                    double qyD = queryEdgeInforms[cn].DerivativeY;
                    double qMag = queryEdgeInforms[cn].Magnitude; // stored as 1/|gradient|
                    if ((txD != 0 || tyD != 0) && (qxD != 0 || qyD != 0))
                    {
                        // Normalized dot product of the template and query gradients
                        sum += (txD * qxD + tyD * qyD) * qMag / tMag;
                    }
                    corSum += 1;
                    partialScore = sum / corSum;
                    // Early-termination threshold; both branches scale with the number of
                    // points visited so far, per the standard greediness formulation
                    double curJudge = Math.Min((minScore - 1) + normGreediness * corSum, normMinScore * corSum);
                    // If the running score falls below it, move on to the next pixel
                    if (partialScore < curJudge)
                    {
                        break;
                    }
                }
                if (partialScore > resultScore)
                {
                    resultScore = partialScore;
                    conformPoints = new Point(col, row);
                }
            }
        }
        // Secondary screening: reject matches whose template box leaves the image
        if (resultScore > 0.5)
        {
            if (conformPoints.X + validSize.Width > Width ||
                conformPoints.Y + validSize.Height > Height)
            {
                resultScore = 0;
            }
        }
        return resultScore;
    }
}

4.3 Process optimization

Section 4.2 makes clear that edge gray-gradient matching works well, but two defects remain: only image positions whose relative pose is consistent with the template can be found, and query efficiency is low. To address both problems, the query image can first be segmented into connected regions, each region rotated to the same angle as the template, and data collection plus matching then performed per region; the effect improves significantly.

{
    // Contour-edge matching between the template and the detection image
    Mat _sImg = detectImg.Clone();
    Mat queryImg = patternImg.Clone();
    Mat trainImg = detectImg.Clone();
    Mat queryGray = ImageGrayDetect(queryImg);
    Mat trainGray = ImageGrayDetect(trainImg);
    Mat queBlur = queryGray.GaussianBlur(new Size(3, 3), 0);
    Mat trainBlur = trainGray.GaussianBlur(new Size(3, 3), 0);
    List<Size> templateValidSize = new List<Size>();
    List<List<ImageEdgePtInform>> templatesEdgeInforms = new List<List<ImageEdgePtInform>>();
    // Segment both images into connected regions and take their bounding rectangles
    Rect[] querySubBoundRects = BbProcesser.ImageConnectedFieldSegment(queBlur, 120, true, show: false).ConnectedFieldDatas.FieldRects.ToArray();
    Rect[] trainSubBoundRects = BbProcesser.ImageConnectedFieldSegment(trainBlur, 120, true, show: false).ConnectedFieldDatas.FieldRects.ToArray();
    Mat[] querySubImgs = new Mat[querySubBoundRects.Length];
    Mat[] trainSubImgs = new Mat[trainSubBoundRects.Length];
    List<Rect> matchedConformRegions = new List<Rect>();
    int margin = 5;
    for (int i = 0; i < querySubImgs.Length; i++)
    {
        // Expand each region by `margin` on every side when it stays inside the image
        Rect selectRegion = querySubBoundRects[i];
        if (selectRegion.X - margin >= 0
            && selectRegion.X + selectRegion.Width + margin <= queBlur.Width
            && selectRegion.Y - margin >= 0
            && selectRegion.Y + selectRegion.Height + margin <= queBlur.Height)
        {
            selectRegion.X -= margin;
            selectRegion.Y -= margin;
            selectRegion.Width += 2 * margin;
            selectRegion.Height += 2 * margin;
        }
        querySubImgs[i] = new Mat(patternImg.Clone(), selectRegion);
    }
    for (int i = 0; i < trainSubImgs.Length; i++)
    {
        Rect selectRegion = trainSubBoundRects[i];
        if (selectRegion.X - margin >= 0
            && selectRegion.X + selectRegion.Width + margin <= trainBlur.Width
            && selectRegion.Y - margin >= 0
            && selectRegion.Y + selectRegion.Height + margin <= trainBlur.Height)
        {
            selectRegion.X -= margin;
            selectRegion.Y -= margin;
            selectRegion.Width += 2 * margin;
            selectRegion.Height += 2 * margin;
        }
        trainSubImgs[i] = new Mat(detectImg.Clone(), selectRegion);
    }
    unsafe
    {
        // Collect edge information for every template sub-region
        for (int i = 0; i < querySubImgs.Length; i++)
        {
            Size validSize = new Size();
            List<ImageEdgePtInform> subEdgeInforms = GeoProcesser.ImageContourEdgeInfomGet(querySubImgs[i].Clone(), 3, ref validSize, show: true);
            templateValidSize.Add(validSize);
            templatesEdgeInforms.Add(subEdgeInforms);
        }
        for (int i = 0; i < trainSubImgs.Length; i++)
        {
            for (int j = 0; j < templateValidSize.Count; j++)
            {
                Point outPt = new Point();
                // Rotate the detection sub-image when its orientation differs from the template's
                Mat _tImg = trainSubImgs[i].Clone();
                {
                    _tImg = ImageGrayDetect(_tImg);
                    Cv2.GaussianBlur(_tImg, _tImg, new Size(3, 3), 0);
                    _tImg = _tImg.Threshold(120, 255, ThresholdTypes.BinaryInv);
                    Cv2.FindContours(_tImg, out Point[][] tCnts, out HierarchyIndex[] hidxs, RetrievalModes.External, ContourApproximationModes.ApproxSimple);
                    tCnts = tCnts.OrderByDescending(cnt => Cv2.ContourArea(cnt)).ToArray();
                    double tOrient = GeometricProcessor.ContourBaryCenterOrientationGet(tCnts[0]);
                    // Compare orientations with a tolerance rather than exact equality
                    if (Math.Abs(tOrient - templatesEdgeInforms[j][0].BarycentOrient) > 1e-6)
                    {
                        double rOrient = templatesEdgeInforms[j][0].BarycentOrient - tOrient;
                        _tImg = ImageRotate(trainSubImgs[i].Clone(), new Point2f(_tImg.Width / 2, _tImg.Height / 2), rOrient, show: false);
                    }
                }
                // Match both the original and the rotation-aligned sub-image; keep the better score
                double score1 = GeometricProcessor.ImageEdgeMatch(trainSubImgs[i].Clone(), templatesEdgeInforms[j],
                    0.9, 1, templateValidSize[j], out outPt);
                double score2 = GeometricProcessor.ImageEdgeMatch(_tImg, templatesEdgeInforms[j],
                    0.9, 1, templateValidSize[j], out outPt);
                if (Math.Max(score1, score2) > 0.9)
                {
                    matchedConformRegions.Add(trainSubBoundRects[i]);
                    _sImg.Rectangle(trainSubBoundRects[i], Scalar.Red, 1);
                }
            }
        }
        if (matchedConformRegions.Count != 0)
        {
            Mat _attachedImg = ImagesMerge(querySubImgs, new Mat());
            _sImg = ImagesMerge(new Mat[] { _attachedImg }, _sImg);
            ImageShow("GradientDetectResult", _sImg);
        }
    }
}
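The rotation step above subtracts the region's barycenter orientation from the template's and rotates by the raw difference. One subtlety worth sketching (as a hypothetical helper, not code from the article) is normalizing that difference to the smallest signed rotation, so the image is never rotated the long way around:

```python
def align_angle(template_deg, region_deg):
    """Smallest signed rotation (degrees) taking the region to the template pose."""
    d = (template_deg - region_deg) % 360.0
    # Map the result from [0, 360) into (-180, 180].
    return d - 360.0 if d > 180.0 else d

# Rotating by +40 degrees is shorter than the raw difference of -320 degrees.
print(align_angle(30.0, 350.0))  # 40.0
print(align_angle(10.0, 30.0))   # -20.0
```

Keeping the rotation in (-180, 180] also keeps more of the sub-image inside the rotation canvas, which matters when the region was cropped tightly.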

5. Summary

Many problems remain open during matching: how to handle templates and query objects of inconsistent sizes, and how to match when too large a part of the object is occluded. Take it easy.



Origin blog.csdn.net/JAYLEE900/article/details/131475498