[OpenCV] Chapter 25: Image search, image stitching

Once we have detected the feature points of two images and matched them, we can perform image search, image stitching, and similar tasks.
Image search means finding where image A appears inside image B; once located, the region can be replaced directly, achieving a cut-out effect without manual masking.
Image stitching aligns the matched feature points of two images so they can be joined into a panorama.
This chapter uses two cases to illustrate the implementation process.

However, both image search and image stitching rely on the homography matrix, so this matrix must be explained first. It is in fact the perspective transformation matrix from the geometric transformations of Chapter 5, so it is worth revisiting the perspective transformation material there.

  • Intuitively, what is a homography matrix?
    Look at the figure below. Suppose there are two cameras, shown as the two points at the upper left and upper right, both photographing the same plane. Consider the point X: the left camera maps X to the point x in image1, and the right camera maps X to the point x' in image2. So for a point on one plane, cameras at different angles map it to different positions in their photos.
    How, then, do we match the corresponding points one by one? With the homography matrix: H in the figure below is that matrix. Applying H to a point of image1 gives the position of the corresponding point in image2. Likewise, a homography relates the points of image1 to their real-scene positions, and another relates the points of image2 to their real-scene positions.

    Note that the "real scene" here is itself a two-dimensional plane, so everything involved is a plane-to-plane mapping. If the scene being photographed is three-dimensional, the correspondence is no longer such a simple one-to-one mapping; details such as the camera centers and the depth of the 3D scene come into play. That material belongs to graphics and multiple-view geometry and is not discussed here.
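    The point mapping described above can be sketched numerically. Below, a toy homography H (a pure translation, with made-up values) is applied to a point x of image1 in homogeneous coordinates to obtain the corresponding point x' of image2; dividing by the last component returns to pixel coordinates:

```python
import numpy as np

# A toy homography H: here just a translation by (10, 20) (hypothetical values)
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0, 1.0]])

# A point x on image1, written in homogeneous coordinates (x, y, 1)
x = np.array([5.0, 5.0, 1.0])

# x' = H @ x, then divide by the last component to get back to pixel coordinates
xp = H @ x
xp = xp[:2] / xp[2]
print(xp)  # the corresponding point x' on image2: [15. 25.]
```

    OpenCV's cv2.perspectiveTransform performs exactly this homogeneous multiply-and-normalize for whole point sets.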

  • What is the homography matrix used for?
    1. Correcting the image angle
    As shown in the picture below, a credit card photographed at an angle must first be straightened before the card number can be extracted reliably. The homography matrix does exactly this job.

    2. Partial replacement
    The picture on the left shows a billboard. Suppose we want to change its content to the image on the right. Using the homography matrix, a mapping transformation can paste any advertisement we like into the billboard, with no manual cut-out required.

  • API
    OpenCV provides a dedicated API for estimating the homography matrix. We pass the matched point sets of the source image image1 and the destination image image2 as parameters and get the matrix back:
    srcPts = np.float32([kp_template[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dstPts = np.float32([kp_orig[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(srcPts, dstPts, cv2.RANSAC, 5.0)
    good is a list whose elements are the DMatch objects kept after FLANN matching; each DMatch carries three attributes: queryIdx, trainIdx, and distance.
    We take the objects out of good one by one; each one is m.
    m.queryIdx returns the index of a feature point in the template image, and with that index we look up the template image's keypoint object, kp_template[m.queryIdx].
    A keypoint object has a .pt attribute, which returns the keypoint's (x, y) coordinates in the template image.
    We then reshape those coordinates to (-1, 1, 2) to obtain the template image's point set, and pass it into the homography API as srcPts.
    dstPts is built the same way, from m.trainIdx and kp_orig.

  • # Example 25.1: an image search case
    import cv2
    import numpy as np
    import matplotlib.pyplot as plt
    
    img_template = cv2.imread(r'C:\Users\25584\Desktop\opencv_search.png')   # template  (120, 70, 3)
    img_template_gray = cv2.cvtColor(img_template, cv2.COLOR_BGR2GRAY)  
    img_orig = cv2.imread(r'C:\Users\25584\Desktop\opencv_orig.png')  # original image  (600, 868, 3)
    img_orig_gray = cv2.cvtColor(img_orig, cv2.COLOR_BGR2GRAY)  
    
    sift = cv2.xfeatures2d.SIFT_create()    # create the SIFT feature detector (cv2.SIFT_create() in OpenCV >= 4.4)
    kp_template, des_template = sift.detectAndCompute(img_template_gray, None)        # detect keypoints and compute descriptors
    kp_orig, des_orig = sift.detectAndCompute(img_orig_gray, None)    
    
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))       # create the FLANN matcher
    matches = flann.knnMatch(des_template, des_orig, k=2)  # knnMatch with k=2: the two nearest neighbors per descriptor
    
    good = []   # filter the matches with Lowe's ratio test
    for m, n in matches:
        if m.distance < 0.7 * n.distance:
            good.append(m)
    
    
    #------ Use the filtered matches to estimate the homography matrix, then locate the target image via its perspective transform ------------
    if len(good) >= 4:
        srcPts = np.float32([kp_template[m.queryIdx].pt for m in good]).reshape(-1,1,2)
        dstPts = np.float32([kp_orig[m.trainIdx].pt for m in good]).reshape(-1,1,2)
        
        H, _ = cv2.findHomography(srcPts, dstPts, cv2.RANSAC, 5.0)
        h,w = img_template.shape[:2]
        pts = np.float32([[0,0],[0,h-1], [w-1,h-1], [w-1,0]]).reshape(-1,1,2)
        dst = cv2.perspectiveTransform(pts, H)
        img = img_orig.copy()    
        cv2.polylines(img, [np.int32(dst)], True, (0,0,255))
    
    else:
        print('fewer than 4 good matches; cannot estimate a homography.')
        exit()
    
    img_matchs = cv2.drawMatchesKnn(img_template, kp_template, img_orig, kp_orig, [good], None)  # outImg=None lets OpenCV allocate the output; passing img here would clobber the search result
    
    #--------------- Visualization ----------------------------------
    Fig=plt.figure(figsize=(16,14))
    Grid=plt.GridSpec(3,8)
    axes1=Fig.add_subplot(Grid[1,0]), plt.imshow(img_template[:,:,::-1]), plt.box(), plt.xticks([]), plt.yticks([]), plt.title('template')
    axes2=Fig.add_subplot(Grid[0:4,1:3]), plt.imshow(img_orig[:,:,::-1]), plt.box(), plt.xticks([]), plt.yticks([]), plt.title('orig img')
    axes3=Fig.add_subplot(Grid[0:4,3:6]), plt.imshow(img_matchs[:,:,::-1]), plt.box(), plt.xticks([]), plt.yticks([]), plt.title('match img')
    axes4=Fig.add_subplot(Grid[0:4,6:8]), plt.imshow(img[:,:,::-1]), plt.box(), plt.xticks([]), plt.yticks([]), plt.title('search img')

    Printing matches shows a sequence of pairs (because k=2), each pair holding the best and second-best DMatch for one template descriptor:

    ((<DMatch 0000022FEA3EAFD0>, <DMatch 0000022FEA8B0210>),
     (<DMatch 0000022FEAA37530>, <DMatch 0000022FEAA37430>),
     ...)

Origin blog.csdn.net/friday1203/article/details/134847212