OpenCV image stitching and image fusion technology

Image stitching has a wide range of practical applications, such as drone aerial photography and remote sensing imagery. It is a basic step toward further image understanding: the quality of the stitching directly affects everything built on top of it, so a good stitching algorithm is very important.

Here is an example closer to home. You photograph a scene with your phone, but you cannot capture everything you want in a single shot, so you take several pictures from left to right to cover the whole scene. Can we stitch these images into one big picture? With OpenCV, we can!


For example, consider stitching the two images below.

(the two input photographs are shown in the original post)

As you can see, the two pictures share a large overlapping region, which is the basic requirement for stitching.

So how many steps are needed to achieve image stitching? Simply put, there are the following steps:

  1. Extract feature points from each image
  2. Match the feature points
  3. Perform image registration
  4. Copy an image to a specific location in another image
  5. Special handling for overlapping borders

Now, let's officially start implementing the stitching.

The first step is feature point extraction. The CV field has many definitions of feature points: SIFT, SURF, Harris corners, and ORB are all well-known feature detectors, each with its own strengths, and all of them can be used for image stitching. This article uses SURF and ORB; stitching with the other methods is similar.
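One practical note: the SurfFeatureDetector / SurfDescriptorExtractor classes used below come from the OpenCV 2.x nonfree/legacy modules. If you are on OpenCV 3.x or 4.x, the equivalent setup looks roughly like the following sketch (my own addition, not part of the original article; SURF requires the opencv_contrib xfeatures2d module, and "left.jpg" is a hypothetical file name):

#include <opencv2/imgcodecs.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/xfeatures2d.hpp>  // SURF lives in opencv_contrib
#include <vector>

int main()
{
    cv::Mat img = cv::imread("left.jpg", cv::IMREAD_GRAYSCALE);  // hypothetical input

    // Factory functions replaced the old concrete detector classes
    cv::Ptr<cv::Feature2D> surf = cv::xfeatures2d::SURF::create(2000);  // Hessian threshold
    cv::Ptr<cv::Feature2D> orb = cv::ORB::create(3000);                 // max feature count

    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    surf->detectAndCompute(img, cv::noArray(), keypoints, descriptors); // detect + describe in one call
    return 0;
}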

Image stitching based on SURF

Using SIFT to stitch images is a very common approach, but SIFT is computationally expensive and no longer suitable when speed matters. Its improved successor SURF is significantly faster (roughly three times the speed of SIFT), so it still has plenty of use in image stitching. Although SURF's accuracy and stability fall slightly short of SIFT's, its overall balance is excellent. The main stitching steps are described in detail below.

1. Feature point extraction and matching

// Extract feature points
SurfFeatureDetector Detector(2000);   // Hessian threshold: larger -> fewer, more reliable points
vector<KeyPoint> keyPoint1, keyPoint2;
Detector.detect(image1, keyPoint1);
Detector.detect(image2, keyPoint2);

// Compute descriptors in preparation for feature matching below
SurfDescriptorExtractor Descriptor;
Mat imageDesc1, imageDesc2;
Descriptor.compute(image1, keyPoint1, imageDesc1);
Descriptor.compute(image2, keyPoint2, imageDesc2);

FlannBasedMatcher matcher;
vector<vector<DMatch> > matchePoints;
vector<DMatch> GoodMatchePoints;

vector<Mat> train_desc(1, imageDesc1);
matcher.add(train_desc);
matcher.train();

matcher.knnMatch(imageDesc2, matchePoints, 2);
cout << "total match points: " << matchePoints.size() << endl;

// Lowe's ratio test: keep only distinctive matches
for (int i = 0; i < matchePoints.size(); i++)
{
    if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
    {
        GoodMatchePoints.push_back(matchePoints[i][0]);
    }
}

Mat first_match;
drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
imshow("first_match ", first_match);

2. Image registration

This gives us the matched point sets of the two images to be stitched. Next comes image registration, i.e. transforming the two images into the same coordinate frame. Here we use the findHomography function to obtain the transformation matrix. Note that findHomography expects point sets of type Point2f, so we first convert the GoodMatchePoints we just obtained into Point2f point sets.

vector<Point2f> imagePoints1, imagePoints2;

// queryIdx indexes keyPoint2 (the query set passed to knnMatch),
// trainIdx indexes keyPoint1 (the train set added to the matcher)
for (int i = 0; i < GoodMatchePoints.size(); i++)
{
    imagePoints2.push_back(keyPoint2[GoodMatchePoints[i].queryIdx].pt);
    imagePoints1.push_back(keyPoint1[GoodMatchePoints[i].trainIdx].pt);
}

With imagePoints1 and imagePoints2 we can solve for the transformation matrix and register the images. It is worth noting that we pass CV_RANSAC to findHomography, which selects the RANSAC algorithm to further filter out unreliable matches, making the solved matrix more accurate.

// Obtain the 3x3 projection (homography) matrix mapping image 1 onto image 2
Mat homo = findHomography(imagePoints1, imagePoints2, CV_RANSAC);
// getPerspectiveTransform could also produce a perspective matrix, but it
// accepts exactly 4 point pairs and usually gives worse results:
//Mat homo = getPerspectiveTransform(imagePoints1, imagePoints2);
cout << "Transformation matrix:\n" << homo << endl << endl;

// Image registration (corners is filled in by CalcCorners(homo, image01); see the complete listing below)
Mat imageTransform1, imageTransform2;
warpPerspective(image01, imageTransform1, homo, Size(MAX(corners.right_top.x, corners.right_bottom.x), image02.rows));
//warpPerspective(image01, imageTransform2, adjustMat*homo, Size(image02.cols*1.3, image02.rows*1.8));
imshow("warped by perspective matrix", imageTransform1);
imwrite("trans1.jpg", imageTransform1);

 

3. Image copying

The copying idea is very simple: just copy the left image directly onto the registered (warped) image.

// Create the final panorama; its size must be computed in advance
int dst_width = imageTransform1.cols;  // use the rightmost warped corner as the panorama width
int dst_height = image02.rows;

Mat dst(dst_height, dst_width, CV_8UC3);
dst.setTo(0);

imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
image02.copyTo(dst(Rect(0, 0, image02.cols, image02.rows)));

imshow("b_dst", dst);

4. Image fusion (seam removal)

As the figure above shows, the joint between the two images is not natural. The problem lies at the seam: because of lighting and color differences, the transition across the boundary is poor, so special processing is needed to remove this discontinuity. The idea used here is weighted blending: in the overlapping region, the first image fades gradually into the second, i.e. the pixel values in the overlap are combined with position-dependent weights to synthesize the new image.
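In formula form, for a pixel in column j of an overlap region that starts at column s and is w columns wide, the code below computes

$$\alpha = \frac{w - (j - s)}{w}, \qquad d(j) = \alpha\, p(j) + (1 - \alpha)\, t(j)$$

where p is the left image and t is the warped right image, so the weight alpha falls linearly from 1 at the left edge of the overlap to 0 at its right edge.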

// Blend the junction of the two images so the seam looks natural
void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
{
    int start = MIN(corners.left_top.x, corners.left_bottom.x); // left boundary of the overlap region

    double processWidth = img1.cols - start; // width of the overlap region
    int rows = dst.rows;
    int cols = img1.cols; // pixel columns of img1 (each pixel spans 3 uchar channels, hence the j*3 indexing below)
    double alpha = 1; // weight of the img1 pixel
    for (int i = 0; i < rows; i++)
    {
        uchar* p = img1.ptr<uchar>(i);  // pointer to the start of row i
        uchar* t = trans.ptr<uchar>(i);
        uchar* d = dst.ptr<uchar>(i);
        for (int j = start; j < cols; j++)
        {
            // if trans has no pixel here (pure black), copy img1's data unchanged
            if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
            {
                alpha = 1;
            }
            else
            {
                // weight of the img1 pixel: proportional to the distance from the
                // current column to the right edge of the overlap; in practice this
                // simple linear ramp works well
                alpha = (processWidth - (j - start)) / processWidth;
            }

            d[j * 3] = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
            d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
            d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
        }
    }
}

Let's try a few more photo pairs to verify the stitching effect.

Test one:

(stitched result shown in the original post)

Test two:

(stitched result shown in the original post)

Finally, here is the complete stitching code for the SURF-based implementation.

#include "highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/legacy/legacy.hpp"
#include <iostream>

using namespace cv;
using namespace std;

void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst);

typedef struct
{
    Point2f left_top;
    Point2f left_bottom;
    Point2f right_top;
    Point2f right_bottom;
}four_corners_t;

four_corners_t corners;

void CalcCorners(const Mat& H, const Mat& src)
{
    double v2[] = { 0, 0, 1 };         // top-left corner
    double v1[3];                      // transformed coordinates
    Mat V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    Mat V1 = Mat(3, 1, CV_64FC1, v1);  // column vector

    V1 = H * V2;
    // top-left corner (0,0,1)
    cout << "V2: " << V2 << endl;
    cout << "V1: " << V1 << endl;
    corners.left_top.x = v1[0] / v1[2];
    corners.left_top.y = v1[1] / v1[2];

    // bottom-left corner (0,src.rows,1)
    v2[0] = 0;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.left_bottom.x = v1[0] / v1[2];
    corners.left_bottom.y = v1[1] / v1[2];

    // top-right corner (src.cols,0,1)
    v2[0] = src.cols;
    v2[1] = 0;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.right_top.x = v1[0] / v1[2];
    corners.right_top.y = v1[1] / v1[2];

    // bottom-right corner (src.cols,src.rows,1)
    v2[0] = src.cols;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.right_bottom.x = v1[0] / v1[2];
    corners.right_bottom.y = v1[1] / v1[2];
}

int main(int argc, char *argv[])
{
    Mat image01 = imread("g5.jpg", 1);    // right image
    Mat image02 = imread("g4.jpg", 1);    // left image
    imshow("p2", image01);
    imshow("p1", image02);

    // Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
    Mat image1, image2;
    cvtColor(image01, image1, CV_BGR2GRAY);
    cvtColor(image02, image2, CV_BGR2GRAY);


    // Extract feature points
    SurfFeatureDetector Detector(2000);   // Hessian threshold: larger -> fewer, more reliable points
    vector<KeyPoint> keyPoint1, keyPoint2;
    Detector.detect(image1, keyPoint1);
    Detector.detect(image2, keyPoint2);

    // Compute descriptors in preparation for feature matching below
    SurfDescriptorExtractor Descriptor;
    Mat imageDesc1, imageDesc2;
    Descriptor.compute(image1, keyPoint1, imageDesc1);
    Descriptor.compute(image2, keyPoint2, imageDesc2);

    FlannBasedMatcher matcher;
    vector<vector<DMatch> > matchePoints;
    vector<DMatch> GoodMatchePoints;

    vector<Mat> train_desc(1, imageDesc1);
    matcher.add(train_desc);
    matcher.train();

    matcher.knnMatch(imageDesc2, matchePoints, 2);
    cout << "total match points: " << matchePoints.size() << endl;

    // Lowe's ratio test: keep only distinctive matches
    for (int i = 0; i < matchePoints.size(); i++)
    {
        if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
        {
            GoodMatchePoints.push_back(matchePoints[i][0]);
        }
    }

    Mat first_match;
    drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
    imshow("first_match ", first_match);

    vector<Point2f> imagePoints1, imagePoints2;

    for (int i = 0; i<GoodMatchePoints.size(); i++)
    {
        imagePoints2.push_back(keyPoint2[GoodMatchePoints[i].queryIdx].pt);
        imagePoints1.push_back(keyPoint1[GoodMatchePoints[i].trainIdx].pt);
    }



    // Obtain the 3x3 projection (homography) matrix mapping image 1 onto image 2
    Mat homo = findHomography(imagePoints1, imagePoints2, CV_RANSAC);
    // getPerspectiveTransform could also produce a perspective matrix, but it
    // accepts exactly 4 point pairs and usually gives worse results:
    //Mat homo = getPerspectiveTransform(imagePoints1, imagePoints2);
    cout << "Transformation matrix:\n" << homo << endl << endl;

    // Compute the four warped corner coordinates of the registered image
    CalcCorners(homo, image01);
    cout << "left_top:" << corners.left_top << endl;
    cout << "left_bottom:" << corners.left_bottom << endl;
    cout << "right_top:" << corners.right_top << endl;
    cout << "right_bottom:" << corners.right_bottom << endl;

    // Image registration
    Mat imageTransform1, imageTransform2;
    warpPerspective(image01, imageTransform1, homo, Size(MAX(corners.right_top.x, corners.right_bottom.x), image02.rows));
    //warpPerspective(image01, imageTransform2, adjustMat*homo, Size(image02.cols*1.3, image02.rows*1.8));
    imshow("warped by perspective matrix", imageTransform1);
    imwrite("trans1.jpg", imageTransform1);


    // Create the final panorama; its size must be computed in advance
    int dst_width = imageTransform1.cols;  // use the rightmost warped corner as the panorama width
    int dst_height = image02.rows;

    Mat dst(dst_height, dst_width, CV_8UC3);
    dst.setTo(0);

    imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
    image02.copyTo(dst(Rect(0, 0, image02.cols, image02.rows)));

    imshow("b_dst", dst);


    OptimizeSeam(image02, imageTransform1, dst);


    imshow("dst", dst);
    imwrite("dst.jpg", dst);

    waitKey();

    return 0;
}


// Blend the junction of the two images so the seam looks natural
void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
{
    int start = MIN(corners.left_top.x, corners.left_bottom.x); // left boundary of the overlap region

    double processWidth = img1.cols - start; // width of the overlap region
    int rows = dst.rows;
    int cols = img1.cols; // pixel columns of img1 (each pixel spans 3 uchar channels, hence the j*3 indexing below)
    double alpha = 1; // weight of the img1 pixel
    for (int i = 0; i < rows; i++)
    {
        uchar* p = img1.ptr<uchar>(i);  // pointer to the start of row i
        uchar* t = trans.ptr<uchar>(i);
        uchar* d = dst.ptr<uchar>(i);
        for (int j = start; j < cols; j++)
        {
            // if trans has no pixel here (pure black), copy img1's data unchanged
            if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
            {
                alpha = 1;
            }
            else
            {
                // weight of the img1 pixel: proportional to the distance from the
                // current column to the right edge of the overlap; in practice this
                // simple linear ramp works well
                alpha = (processWidth - (j - start)) / processWidth;
            }

            d[j * 3] = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
            d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
            d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
        }
    }
}


Image stitching based on ORB

The idea of ORB-based stitching is basically the same as above; only the feature extraction and matching steps differ slightly. I won't walk through the idea again; here is the code and its result.
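One detail worth noting before the listing: ORB produces binary descriptors, so the KD-tree FLANN matcher used for SURF does not apply; the code below therefore builds a flann::Index with LSH parameters and Hamming distance. A simpler and equally common alternative (a sketch of my own, not what the article's code uses) is a brute-force matcher with the Hamming norm:

// Sketch: matching binary ORB descriptors with BFMatcher instead of FLANN+LSH.
// imageDesc1/imageDesc2 are the ORB descriptor matrices from the listing below.
BFMatcher bfMatcher(NORM_HAMMING);
vector<vector<DMatch> > knnMatches;
bfMatcher.knnMatch(imageDesc2, imageDesc1, knnMatches, 2);  // query: image 2, train: image 1

vector<DMatch> goodMatches;
for (size_t i = 0; i < knnMatches.size(); i++)
{
    // Lowe's ratio test, same 0.4 threshold as the article uses
    if (knnMatches[i].size() == 2 &&
        knnMatches[i][0].distance < 0.4f * knnMatches[i][1].distance)
        goodMatches.push_back(knnMatches[i][0]);
}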

#include "highgui/highgui.hpp"
#include "opencv2/nonfree/nonfree.hpp"
#include "opencv2/legacy/legacy.hpp"
#include <iostream>

using namespace cv;
using namespace std;

void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst);

typedef struct
{
    Point2f left_top;
    Point2f left_bottom;
    Point2f right_top;
    Point2f right_bottom;
}four_corners_t;

four_corners_t corners;

void CalcCorners(const Mat& H, const Mat& src)
{
    double v2[] = { 0, 0, 1 };         // top-left corner
    double v1[3];                      // transformed coordinates
    Mat V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    Mat V1 = Mat(3, 1, CV_64FC1, v1);  // column vector

    V1 = H * V2;
    // top-left corner (0,0,1)
    cout << "V2: " << V2 << endl;
    cout << "V1: " << V1 << endl;
    corners.left_top.x = v1[0] / v1[2];
    corners.left_top.y = v1[1] / v1[2];

    // bottom-left corner (0,src.rows,1)
    v2[0] = 0;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.left_bottom.x = v1[0] / v1[2];
    corners.left_bottom.y = v1[1] / v1[2];

    // top-right corner (src.cols,0,1)
    v2[0] = src.cols;
    v2[1] = 0;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.right_top.x = v1[0] / v1[2];
    corners.right_top.y = v1[1] / v1[2];

    // bottom-right corner (src.cols,src.rows,1)
    v2[0] = src.cols;
    v2[1] = src.rows;
    v2[2] = 1;
    V2 = Mat(3, 1, CV_64FC1, v2);  // column vector
    V1 = Mat(3, 1, CV_64FC1, v1);  // column vector
    V1 = H * V2;
    corners.right_bottom.x = v1[0] / v1[2];
    corners.right_bottom.y = v1[1] / v1[2];
}

int main(int argc, char *argv[])
{
    Mat image01 = imread("t1.jpg", 1);    // right image
    Mat image02 = imread("t2.jpg", 1);    // left image
    imshow("p2", image01);
    imshow("p1", image02);

    // Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
    Mat image1, image2;
    cvtColor(image01, image1, CV_BGR2GRAY);
    cvtColor(image02, image2, CV_BGR2GRAY);


    // Extract feature points
    OrbFeatureDetector orbDetector(3000);   // maximum number of features to retain
    vector<KeyPoint> keyPoint1, keyPoint2;
    orbDetector.detect(image1, keyPoint1);
    orbDetector.detect(image2, keyPoint2);

    // Compute descriptors in preparation for feature matching below
    OrbDescriptorExtractor orbDescriptor;
    Mat imageDesc1, imageDesc2;
    orbDescriptor.compute(image1, keyPoint1, imageDesc1);
    orbDescriptor.compute(image2, keyPoint2, imageDesc2);

    // ORB descriptors are binary, so match with an LSH-based FLANN index and Hamming distance
    flann::Index flannIndex(imageDesc1, flann::LshIndexParams(12, 20, 2), cvflann::FLANN_DIST_HAMMING);

    vector<DMatch> GoodMatchePoints;

    Mat matchIndex(imageDesc2.rows, 2, CV_32SC1), matchDistance(imageDesc2.rows, 2, CV_32FC1);
    flannIndex.knnSearch(imageDesc2, matchIndex, matchDistance, 2, flann::SearchParams());

    // Lowe's ratio test: keep only distinctive matches
    for (int i = 0; i < matchDistance.rows; i++)
    {
        if (matchDistance.at<float>(i, 0) < 0.4 * matchDistance.at<float>(i, 1))
        {
            DMatch dmatches(i, matchIndex.at<int>(i, 0), matchDistance.at<float>(i, 0));
            GoodMatchePoints.push_back(dmatches);
        }
    }

    Mat first_match;
    drawMatches(image02, keyPoint2, image01, keyPoint1, GoodMatchePoints, first_match);
    imshow("first_match ", first_match);

    vector<Point2f> imagePoints1, imagePoints2;

    for (int i = 0; i<GoodMatchePoints.size(); i++)
    {
        imagePoints2.push_back(keyPoint2[GoodMatchePoints[i].queryIdx].pt);
        imagePoints1.push_back(keyPoint1[GoodMatchePoints[i].trainIdx].pt);
    }



    // Obtain the 3x3 projection (homography) matrix mapping image 1 onto image 2
    Mat homo = findHomography(imagePoints1, imagePoints2, CV_RANSAC);
    // getPerspectiveTransform could also produce a perspective matrix, but it
    // accepts exactly 4 point pairs and usually gives worse results:
    //Mat homo = getPerspectiveTransform(imagePoints1, imagePoints2);
    cout << "Transformation matrix:\n" << homo << endl << endl;

    // Compute the four warped corner coordinates of the registered image
    CalcCorners(homo, image01);
    cout << "left_top:" << corners.left_top << endl;
    cout << "left_bottom:" << corners.left_bottom << endl;
    cout << "right_top:" << corners.right_top << endl;
    cout << "right_bottom:" << corners.right_bottom << endl;

    // Image registration
    Mat imageTransform1, imageTransform2;
    warpPerspective(image01, imageTransform1, homo, Size(MAX(corners.right_top.x, corners.right_bottom.x), image02.rows));
    //warpPerspective(image01, imageTransform2, adjustMat*homo, Size(image02.cols*1.3, image02.rows*1.8));
    imshow("warped by perspective matrix", imageTransform1);
    imwrite("trans1.jpg", imageTransform1);


    // Create the final panorama; its size must be computed in advance
    int dst_width = imageTransform1.cols;  // use the rightmost warped corner as the panorama width
    int dst_height = image02.rows;

    Mat dst(dst_height, dst_width, CV_8UC3);
    dst.setTo(0);

    imageTransform1.copyTo(dst(Rect(0, 0, imageTransform1.cols, imageTransform1.rows)));
    image02.copyTo(dst(Rect(0, 0, image02.cols, image02.rows)));

    imshow("b_dst", dst);


    OptimizeSeam(image02, imageTransform1, dst);


    imshow("dst", dst);
    imwrite("dst.jpg", dst);

    waitKey();

    return 0;
}


// Blend the junction of the two images so the seam looks natural
void OptimizeSeam(Mat& img1, Mat& trans, Mat& dst)
{
    int start = MIN(corners.left_top.x, corners.left_bottom.x); // left boundary of the overlap region

    double processWidth = img1.cols - start; // width of the overlap region
    int rows = dst.rows;
    int cols = img1.cols; // pixel columns of img1 (each pixel spans 3 uchar channels, hence the j*3 indexing below)
    double alpha = 1; // weight of the img1 pixel
    for (int i = 0; i < rows; i++)
    {
        uchar* p = img1.ptr<uchar>(i);  // pointer to the start of row i
        uchar* t = trans.ptr<uchar>(i);
        uchar* d = dst.ptr<uchar>(i);
        for (int j = start; j < cols; j++)
        {
            // if trans has no pixel here (pure black), copy img1's data unchanged
            if (t[j * 3] == 0 && t[j * 3 + 1] == 0 && t[j * 3 + 2] == 0)
            {
                alpha = 1;
            }
            else
            {
                // weight of the img1 pixel: proportional to the distance from the
                // current column to the right edge of the overlap; in practice this
                // simple linear ramp works well
                alpha = (processWidth - (j - start)) / processWidth;
            }

            d[j * 3] = p[j * 3] * alpha + t[j * 3] * (1 - alpha);
            d[j * 3 + 1] = p[j * 3 + 1] * alpha + t[j * 3 + 1] * (1 - alpha);
            d[j * 3 + 2] = p[j * 3 + 2] * alpha + t[j * 3 + 2] * (1 - alpha);
        }
    }
}

Looking at the stitching results, I think they are still quite good.

(stitched results shown in the original post)

Now take a look at this next group of pictures: it produces ghosting. Why? Because the people in the two photos moved between shots! So when stitching, try to shoot static scenes and avoid adding dynamic elements that interfere with the result.

(ghosting example shown in the original post)

The stitch algorithm that comes with OpenCV

In fact, OpenCV ships with its own image stitching algorithm, and its results are quite good. But its implementation is very complex and the code base is large, so for small applications it can feel like overkill. While reading the stitch source code recently, I found several interesting things.

1. The feature detector chosen by OpenCV's stitcher

I had always been curious which algorithm OpenCV's stitcher uses for feature detection: ORB, SIFT, or SURF? Reading the source finally gave the answer.

#ifdef HAVE_OPENCV_NONFREE
        stitcher.setFeaturesFinder(new detail::SurfFeaturesFinder());
#else
        stitcher.setFeaturesFinder(new detail::OrbFeaturesFinder());
#endif

In the createDefault function (the default configuration), the first choice is SURF, with ORB as the fallback (used only when the NONFREE module is unavailable). Since the OpenCV developers made this choice after weighing the trade-offs, SURF presumably gives better results for image stitching.

2. How OpenCV's stitcher obtains matching points

The following code is the feature matching part of the OpenCV stitch source. The author matches in both directions: first matching image 1's features against image 2 and filtering (1->2), then matching image 2 against image 1 (2->1). This differs from the usual single-pass approach; the two-pass idea yields more matching points, and the more matching points, the more accurate the transformation found by findHomography. It is an idea worth borrowing.

matches_info.matches.clear();

Ptr<flann::IndexParams> indexParams = new flann::KDTreeIndexParams();
Ptr<flann::SearchParams> searchParams = new flann::SearchParams();

if (features2.descriptors.depth() == CV_8U)
{
    indexParams->setAlgorithm(cvflann::FLANN_INDEX_LSH);
    searchParams->setAlgorithm(cvflann::FLANN_INDEX_LSH);
}

FlannBasedMatcher matcher(indexParams, searchParams);
vector< vector<DMatch> > pair_matches;
MatchesSet matches;

// Find 1->2 matches
matcher.knnMatch(features1.descriptors, features2.descriptors, pair_matches, 2);
for (size_t i = 0; i < pair_matches.size(); ++i)
{
    if (pair_matches[i].size() < 2)
        continue;
    const DMatch& m0 = pair_matches[i][0];
    const DMatch& m1 = pair_matches[i][1];
    if (m0.distance < (1.f - match_conf_) * m1.distance)
    {
        matches_info.matches.push_back(m0);
        matches.insert(make_pair(m0.queryIdx, m0.trainIdx));
    }
}
LOG("\n1->2 matches: " << matches_info.matches.size() << endl);

// Find 2->1 matches
pair_matches.clear();
matcher.knnMatch(features2.descriptors, features1.descriptors, pair_matches, 2);
for (size_t i = 0; i < pair_matches.size(); ++i)
{
    if (pair_matches[i].size() < 2)
        continue;
    const DMatch& m0 = pair_matches[i][0];
    const DMatch& m1 = pair_matches[i][1];
    if (m0.distance < (1.f - match_conf_) * m1.distance)
        if (matches.find(make_pair(m0.trainIdx, m0.queryIdx)) == matches.end())
            matches_info.matches.push_back(DMatch(m0.trainIdx, m0.queryIdx, m0.distance));
}
LOG("1->2 & 2->1 matches: " << matches_info.matches.size() << endl);

Here I rewrote my original stitching code following the two-pass matching idea from the OpenCV source. Experiments confirm that this indeed yields more matching points than a single pass.

// Extract feature points
SiftFeatureDetector Detector(1000);   // number of features to retain (the original
                                      // comment called this a Hessian threshold,
                                      // which only applies to SURF)
vector<KeyPoint> keyPoint1, keyPoint2;
Detector.detect(image1, keyPoint1);
Detector.detect(image2, keyPoint2);

// Compute descriptors in preparation for feature matching below
SiftDescriptorExtractor Descriptor;
Mat imageDesc1, imageDesc2;
Descriptor.compute(image1, keyPoint1, imageDesc1);
Descriptor.compute(image2, keyPoint2, imageDesc2);

FlannBasedMatcher matcher;
vector<vector<DMatch> > matchePoints;
vector<DMatch> GoodMatchePoints;

typedef set<pair<int, int> > MatchesSet;  // (queryIdx, trainIdx) pairs already accepted
MatchesSet matches;

vector<Mat> train_desc(1, imageDesc1);
matcher.add(train_desc);
matcher.train();

matcher.knnMatch(imageDesc2, matchePoints, 2);

// Lowe's ratio test: keep only distinctive matches (1->2)
for (int i = 0; i < matchePoints.size(); i++)
{
    if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
    {
        GoodMatchePoints.push_back(matchePoints[i][0]);
        matches.insert(make_pair(matchePoints[i][0].queryIdx, matchePoints[i][0].trainIdx));
    }
}
cout<<"\n1->2 matches: " << GoodMatchePoints.size() << endl;

#if 1

FlannBasedMatcher matcher2;
matchePoints.clear();
vector<Mat> train_desc2(1, imageDesc2);
matcher2.add(train_desc2);
matcher2.train();

matcher2.knnMatch(imageDesc1, matchePoints, 2);
// Lowe's ratio test on the reverse direction (2->1); skip pairs already found
for (int i = 0; i < matchePoints.size(); i++)
{
    if (matchePoints[i][0].distance < 0.4 * matchePoints[i][1].distance)
    {
        if (matches.find(make_pair(matchePoints[i][0].trainIdx, matchePoints[i][0].queryIdx)) == matches.end())
        {
            GoodMatchePoints.push_back(DMatch(matchePoints[i][0].trainIdx, matchePoints[i][0].queryIdx, matchePoints[i][0].distance));
        }

    }
}
cout<<"1->2 & 2->1 matches: " << GoodMatchePoints.size() << endl;
#endif

Finally, let's take a look at the stitching result of OpenCV's stitch. Although it is relatively slow, the result is very good.

#include <iostream>
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/stitching/stitcher.hpp>
using namespace std;
using namespace cv;
bool try_use_gpu = false;
vector<Mat> imgs;
string result_name = "dst1.jpg";
int main(int argc, char * argv[])
{
    Mat img1 = imread("34.jpg");
    Mat img2 = imread("35.jpg");

    imshow("p1", img1);
    imshow("p2", img2);

    if (img1.empty() || img2.empty())
    {
        cout << "Can't read image" << endl;
        return -1;
    }
    imgs.push_back(img1);
    imgs.push_back(img2);


    Stitcher stitcher = Stitcher::createDefault(try_use_gpu);
    // Stitch the images with the stitch() function
    Mat pano;
    Stitcher::Status status = stitcher.stitch(imgs, pano);
    if (status != Stitcher::OK)
    {
        cout << "Can't stitch images, error code = " << int(status) << endl;
        return -1;
    }
    imwrite(result_name, pano);
    Mat pano2 = pano.clone();
    // Display the result image
    imshow("panorama", pano);
    if (waitKey() == 27)
        return 0;
}
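As a side note, the Stitcher factory changed in OpenCV 3.x/4.x. A minimal sketch, assuming OpenCV 4 (the article's code targets the 2.x API):

#include <opencv2/stitching.hpp>

// createDefault(bool) was replaced by create(mode), which returns a cv::Ptr
cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::PANORAMA);
cv::Mat pano;
cv::Stitcher::Status status = stitcher->stitch(imgs, pano);  // imgs: the same vector<cv::Mat> as above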

 


Origin: blog.csdn.net/m0_73443478/article/details/132062355