Computer Vision: Image Stitching

1. Introduction

Image stitching is a technique that combines real-world photographs into a panoramic space: multiple images are stitched into one large-scale image or a 360-degree panorama. Stitching can be regarded as a special case of scene reconstruction in which the images are related only by a planar homography. Image stitching has wide applications in machine vision fields such as motion detection and tracking, augmented reality, resolution enhancement, video compression, and image stabilization. The output of image stitching is the union of the two input images. Four steps are usually used:
(1) Feature extraction: detect feature points in the input images.
(2) Image registration: establish the geometric correspondence between the images so that they can be transformed, compared, and analyzed in a common reference frame.
(3) Image warping: reproject one of the images and place it on a larger canvas.
(4) Image blending: achieve a smooth transition between the images by changing the gray levels near the boundary, removing the seams to create a blended image; blend modes are used to merge the two layers.
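(As an aside, recent versions of OpenCV also ship a high-level Stitcher that performs all four steps internally; a minimal sketch, assuming the two photographs are p1.jpg and p2.jpg as in the experiments below:)

import cv2

# OpenCV's built-in stitching pipeline: feature extraction, registration, warping, blending
stitcher = cv2.Stitcher_create()
status, pano = stitcher.stitch([cv2.imread("p1.jpg"), cv2.imread("p2.jpg")])
if status == 0:  # 0 means success (cv2.Stitcher_OK)
    cv2.imwrite("pano.jpg", pano)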

2. Implementation method

(1) Use SIFT to extract feature points from the images and compute a descriptor for the region around each keypoint. The SURF method is faster than SIFT, but with my (latest) OpenCV version, instantiating it with surf = cv2.xfeatures2d.SURF_create() raises an error, apparently because of patents or some similar reason. It is said online that downgrading OpenCV works, but I did not try that here, so I used sift = cv2.SIFT_create() instead.
(2) After extracting the keypoints and feature descriptors of the two pictures, use them to match the pictures. Plain kNN matching works for stitching, but the FLANN fast matching library is faster; for picture stitching, the FLANN matches are then used to estimate a homography.
(3) Once the matches are filtered, the 3x3 perspective transformation matrix H can be estimated. Applying its inverse to the second picture warps it into the same viewpoint as the first picture, preparing it for the stitching step.
(4) After the perspective transform, the pictures can be stitched directly: with NumPy, the first picture is copied onto the left side of the warped image, overwriting the overlapping part. However, this leaves an obvious seam in the middle of the stitched picture. The seam can be blended by taking a weighted average over a band on both sides of the boundary, which is fast but looks unnatural; feathering or Laplacian pyramid fusion gives the best results. The weighted-average scheme works like this: stack the first picture on the left, then weight the two pictures inside the overlap, so that pixels closer to the left take more of the left image's value and pixels closer to the right take more of the warped right image's value; adding the two makes the transition smooth. It looks better but is slower. A sketch of this scheme follows.
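A minimal sketch of this weighted-average blend, under the assumption that imgLeft and imgWarped are already placed on canvases of the same size and that left and right mark the first and last columns of the overlap (all four names are mine, not the program's):

import numpy as np

def linear_blend(imgLeft, imgWarped, left, right):
    # Start from the warped right image and copy the pure left part over it
    result = imgWarped.copy()
    result[:, :left] = imgLeft[:, :left]
    # Inside the overlap, ramp the weight of the left image from 1 down to 0
    for col in range(left, right + 1):
        alpha = (right - col) / max(right - left, 1)
        result[:, col] = (imgLeft[:, col] * alpha
                          + imgWarped[:, col] * (1 - alpha)).astype(result.dtype)
    return result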

3. Experimental images

[The two input photographs, p1.jpg and p2.jpg]

4. Experiment

4.1 Direct stitching

The code is as follows:

# Import libraries
import cv2
import numpy as np
import sys

# Image display helper
def show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

# Read the input images
ima = cv2.imread("p2.jpg")
imb = cv2.imread("p1.jpg")
A = ima.copy()
B = imb.copy()
imageA = cv2.resize(A, (0, 0), fx=0.2, fy=0.2)
imageB = cv2.resize(B, (0, 0), fx=0.2, fy=0.2)

# Detect SIFT keypoints and compute feature descriptors
def detectAndDescribe(image):
    # Create the SIFT detector
    sift = cv2.SIFT_create()
    # Detect keypoints and compute their descriptors
    (kps, features) = sift.detectAndCompute(image, None)
    # Convert the keypoint coordinates to a NumPy array
    kps = np.float32([kp.pt for kp in kps])
    # Return the keypoints and their descriptors
    return (kps, features)

# Detect keypoints and compute descriptors for images A and B
kpsA, featuresA = detectAndDescribe(imageA)
kpsB, featuresB = detectAndDescribe(imageB)
# Create a brute-force matcher
bf = cv2.BFMatcher()
# kNN-match the SIFT descriptors of A and B with k=2
matches = bf.knnMatch(featuresA, featuresB, 2)
good = []
for m in matches:
    # Keep a match when the nearest distance is below 0.75 times the second-nearest
    if len(m) == 2 and m[0].distance < m[1].distance * 0.75:
        # Store the indices of the two points in featuresA and featuresB
        good.append((m[0].trainIdx, m[0].queryIdx))

# When more than 4 filtered matches remain, estimate the perspective transform
M = None
if len(good) > 4:
    # Gather the coordinates of the matched points
    ptsA = np.float32([kpsA[i] for (_, i) in good])
    ptsB = np.float32([kpsB[i] for (i, _) in good])
    # Estimate the homography with RANSAC
    H, status = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 4.0)
    M = (matches, H, status)

# If M is None, not enough feature points were matched; exit
if M is None:
    print("No matches found")
    sys.exit()
# Otherwise unpack the result; H is the 3x3 perspective transform matrix
(matches, H, status) = M
# Warp image A; result is the transformed image
result = cv2.warpPerspective(imageA, H, (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
# Paste image B into the left end of result
result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
show('res', result)
print(result.shape)

The results obtained directly are as follows:

[Direct stitching result]
We can see an obvious seam in the stitched image.

4.2 Multi-band Blending to remove the seam

The code is as follows:

import cv2
import numpy as np
import sys

def show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

MIN = 10
FLANN_INDEX_KDTREE = 1  # kd-tree index (0 would select linear search)
img1 = cv2.imread(r'D:\software\pycharm\PycharmProjects\computer-version\data\p1.jpg') #query
img2 = cv2.imread(r'D:\software\pycharm\PycharmProjects\computer-version\data\p2.jpg') #train
imageA = cv2.resize(img1, (0, 0), fx=0.2, fy=0.2)
imageB = cv2.resize(img2, (0, 0), fx=0.2, fy=0.2)
sift = cv2.SIFT_create()
kp1, descrip1 = sift.detectAndCompute(imageA, None)
kp2, descrip2 = sift.detectAndCompute(imageB, None)
# FLANN parameters
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
searchParams = dict(checks=50)
flann = cv2.FlannBasedMatcher(indexParams, searchParams)
match = flann.knnMatch(descrip1, descrip2, k=2)
good = []
# Filter the matches with the ratio test
for m, n in match:
    if m.distance < 0.75 * n.distance:
        good.append(m)

# When more than MIN filtered matches remain, estimate the perspective transform
if len(good) > MIN:
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    ano_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    M, mask = cv2.findHomography(src_pts, ano_pts, cv2.RANSAC, 5.0)
    # M maps imageA into imageB's frame, so warp imageB by the inverse into imageA's view
    warpImg = cv2.warpPerspective(imageB, np.linalg.inv(M), (imageA.shape[1] + imageB.shape[1], imageB.shape[0]))
    direct = warpImg.copy()
    direct[0:imageA.shape[0], 0:imageA.shape[1]] = imageA
else:
    print("Not enough matches")
    sys.exit()

show('res', warpImg)
rows, cols = imageA.shape[:2]
print(rows)
print(cols)
for col in range(0, cols):
    # Leftmost column of the overlap region
    if imageA[:, col].any() and warpImg[:, col].any():
        left = col
        print(left)
        break

for col in range(cols - 1, 0, -1):
    # Rightmost column of the overlap region
    if imageA[:, col].any() and warpImg[:, col].any():
        right = col
        print(right)
        break

# Multi-band Blending algorithm
levels = 6

# Pad imageA onto a canvas the same size as warpImg so both pyramids share one shape
srcImg = np.zeros_like(warpImg)
srcImg[0:imageA.shape[0], 0:imageA.shape[1]] = imageA

# Gaussian pyramids of both images
gaussian_pyramid_imageA = [srcImg.astype(np.float32)]
gaussian_pyramid_imageB = [warpImg.astype(np.float32)]
for i in range(levels):
    gaussian_pyramid_imageA.append(cv2.pyrDown(gaussian_pyramid_imageA[i]))
    gaussian_pyramid_imageB.append(cv2.pyrDown(gaussian_pyramid_imageB[i]))

# Laplacian pyramids; the coarsest Gaussian level serves as the base
laplacian_pyramid_imageA = [gaussian_pyramid_imageA[levels]]
laplacian_pyramid_imageB = [gaussian_pyramid_imageB[levels]]
for i in range(levels, 0, -1):
    # dstsize is (width, height), i.e. the reverse of shape[:2]
    size = (gaussian_pyramid_imageA[i - 1].shape[1], gaussian_pyramid_imageA[i - 1].shape[0])
    laplacian_pyramid_imageA.append(cv2.subtract(gaussian_pyramid_imageA[i - 1], cv2.pyrUp(gaussian_pyramid_imageA[i], dstsize=size)))
    laplacian_pyramid_imageB.append(cv2.subtract(gaussian_pyramid_imageB[i - 1], cv2.pyrUp(gaussian_pyramid_imageB[i], dstsize=size)))

# Gaussian pyramid of the blend mask: 1 left of the overlap centre (use srcImg), 0 to the right (use warpImg)
mask = np.zeros(warpImg.shape[:2], np.float32)
mask[:, :(left + right) // 2] = 1.0
gaussian_pyramid_mask = [mask]
for i in range(levels):
    gaussian_pyramid_mask.append(cv2.pyrDown(gaussian_pyramid_mask[i]))

# Blend each pyramid level with its mask (masks reversed so the coarsest level comes first)
laplacian_pyramid = []
for laplacian_imageA, laplacian_imageB, gaussian_mask in zip(laplacian_pyramid_imageA, laplacian_pyramid_imageB, gaussian_pyramid_mask[::-1]):
    gm = cv2.merge([gaussian_mask, gaussian_mask, gaussian_mask])
    laplacian_pyramid.append(laplacian_imageA * gm + laplacian_imageB * (1.0 - gm))

# Reconstruct the image from the blended pyramid
image_reconstruct = laplacian_pyramid[0]
for i in range(1, levels + 1):
    size = (laplacian_pyramid[i].shape[1], laplacian_pyramid[i].shape[0])
    image_reconstruct = cv2.add(cv2.pyrUp(image_reconstruct, dstsize=size), laplacian_pyramid[i])
image_reconstruct = np.clip(image_reconstruct, 0, 255).astype(np.uint8)

cv2.imshow('result', image_reconstruct)
cv2.waitKey(0)
cv2.destroyAllWindows()

The result is as follows:

[Stitching result after Multi-band Blending]

Analysis: As the figure shows, after smoothing with Multi-band Blending the seam at the junction is much improved. But for some reason, part of the image on the left came out black. From what I could find, a black band next to the picture appears when the warped picture cannot fill the canvas completely; changing the shooting angle, or trying a different blending scheme, may give better results, but my tests did not resolve it.
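One idea I did not try in the tests above (purely an assumption, reusing the image_reconstruct result from the script) is to crop the panorama to the bounding box of its non-black pixels, which at least removes black bands along the borders:

# Hypothetical cleanup: crop the result to its non-black bounding box
gray = cv2.cvtColor(image_reconstruct, cv2.COLOR_BGR2GRAY)
coords = cv2.findNonZero((gray > 0).astype(np.uint8))
x, y, w, h = cv2.boundingRect(coords)
cropped = image_reconstruct[y:y + h, x:x + w]
show('cropped', cropped)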

Original post: blog.csdn.net/qq_44896301/article/details/130490819