Using Python for image stitching

Main reference: "Using Python to stitch multiple images" (blog post)

This article mainly follows the blog post above, adding a few notes on the small problems encountered while reproducing it.

Stitching procedure:

from pylab import *
from numpy import *
from PIL import Image

# If you have PCV installed, these imports should work
from PCV.geometry import homography, warp
from PCV.localdescriptors import sift

"""
This is the panorama example from section 3.3.
"""


featname = ['D:/pythonCode/test/data/testimages/' + str(i + 1) + '.sift' for i in range(5)]   # change the path and the range to match your own image directory and number of images
imname = ['D:/pythonCode/test/data/testimages/' + str(i + 1) + '.jpg' for i in range(5)]
# extract features and match
l = {}
d = {}
for i in range(5):    # number of iterations = number of images
    sift.process_image(imname[i], featname[i])
    l[i], d[i] = sift.read_features_from_file(featname[i])

matches = {}
for i in range(4):    # number of iterations = number of images - 1
    matches[i] = sift.match(d[i + 1], d[i])

# visualize the matches (Figure 3-11 in the book)
for i in range(4):    # number of iterations = number of images - 1
    im1 = array(Image.open(imname[i]))
    im2 = array(Image.open(imname[i + 1]))
    figure()
    sift.plot_matches(im2, im1, l[i + 1], l[i], matches[i], show_below=True)


# function to convert the matches to homogeneous points
def convert_points(j):
    ndx = matches[j].nonzero()[0]
    fp = homography.make_homog(l[j + 1][ndx, :2].T)
    ndx2 = [int(matches[j][i]) for i in ndx]
    tp = homography.make_homog(l[j][ndx2, :2].T)

    # switch x and y - TODO this should move elsewhere
    fp = vstack([fp[1], fp[0], fp[2]])
    tp = vstack([tp[1], tp[0], tp[2]])
    return fp, tp


# estimate the homographies
model = homography.RansacModel()

fp, tp = convert_points(1)
H_12 = homography.H_from_ransac(fp, tp, model)[0]  # im 1 to 2

fp, tp = convert_points(0)
H_01 = homography.H_from_ransac(fp, tp, model)[0]  # im 0 to 1

tp, fp = convert_points(2)  # NB: reverse order
H_32 = homography.H_from_ransac(fp, tp, model)[0]  # im 3 to 2

tp, fp = convert_points(3)  # NB: reverse order
H_43 = homography.H_from_ransac(fp, tp, model)[0]  # im 4 to 3

# warp the images
delta = 100  # for padding and translation

im1 = array(Image.open(imname[1]), "uint8")
im2 = array(Image.open(imname[2]), "uint8")
im_12 = warp.panorama(H_12, im1, im2, delta, delta)

im1 = array(Image.open(imname[0]), "f")
im_02 = warp.panorama(dot(H_12, H_01), im1, im_12, delta, delta)

im1 = array(Image.open(imname[3]), "f")
im_32 = warp.panorama(H_32, im1, im_02, delta, delta)

im1 = array(Image.open(imname[4]), "f")
im_42 = warp.panorama(dot(H_32, H_43), im1, im_32, delta, 2 * delta)

figure()
imshow(array(im_42, "uint8"))
axis('off')
show()

First understand the relevant stitching theory, then copy the code into Python to reproduce it, and solve the problems encountered one by one.

1. The PCV package needs to be installed

Refer to a step-by-step guide on installing PCV for Python.
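
For reference, PCV is installed from source (e.g. by cloning the jesolem/PCV repository and running python setup.py install); a quick import check afterwards confirms that the package is visible to your interpreter:

# Quick sanity check that PCV is importable after installation.
# Run it from a directory that does not itself contain a PCV/ folder,
# so Python picks up the installed package rather than a source tree.
from PCV.geometry import homography, warp
from PCV.localdescriptors import sift

print('PCV imported:', homography.__name__, warp.__name__, sift.__name__)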

2. The code reads the images, but there is no .sift file

imname refers to the original images we want to stitch.

featname is the SIFT feature file, which has to be generated from each original image.

Example code for generating a SIFT file:

# -*- coding: utf-8 -*-
from PIL import Image
from pylab import *
from PCV.localdescriptors import sift
from PCV.localdescriptors import harris
 
# add Chinese font support
from matplotlib.font_manager import FontProperties
font = FontProperties(fname=r"c:/windows/fonts/SimSun.ttc", size=14)
 
imname = 'D:/ComputerVision_code/img/sdl11.jpg'   # path of the image to be stitched
im = array(Image.open(imname).convert('L'))
sift.process_image(imname, 'empire.sift')        # generate a SIFT file named empire.sift
l1, d1 = sift.read_features_from_file('empire.sift')
 
figure()
gray()
subplot(121)
sift.plot_features(im, l1, circle=False)
title(u'SIFT特征',fontproperties=font)
 
# detect Harris corners
harrisim = harris.compute_harris_response(im)
 
subplot(122)
filtered_coords = harris.get_harris_points(harrisim, 6, 0.1)
imshow(im)
plot([p[1] for p in filtered_coords], [p[0] for p in filtered_coords], '*')
axis('off')
title(u'Harris角点',fontproperties=font)
 
show()

Error: empire.sift not found

Reason: the process_image function in sift.py inside the PCV package has not been modified to point to the VLFeat sift executable.

Solution: download VLFeat 0.9.20, copy the three related files into your own project environment, and then update the executable path in PCV's sift.py under your Anaconda installation.
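
As a rough sketch of the kind of change (the VLFeat path below is hypothetical; adjust it to wherever you copied the files), the line to edit is the command string that process_image() in PCV's sift.py passes to os.system():

# In PCV/localdescriptors/sift.py, process_image() originally builds the command as
#   cmmd = str("sift " + imagename + " --output=" + resultname + " " + params)
# Replace the bare "sift" with the full path to the VLFeat sift executable you copied,
# for example (hypothetical path):
cmmd = str(r"D:/vlfeat-0.9.20/bin/win64/sift.exe " + imagename +
           " --output=" + resultname + " " + params)
os.system(cmmd)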

Main reference: [python] OSError: sift not found problem solving_Lin-CT's Blog-CSDN Blog

That blogger covers it in great detail, so I won't repeat it here.

3. The SIFT files are generated successfully and the stitching program runs, but it reports the error "did not meet fit acceptance criteria"

Reason: a problem with the pictures. The input image is too dark or too blurry, so feature extraction fails or too few feature points can be matched (sometimes this happens even when the picture looks normal; in any case, changing the pictures solves it).
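
If you would rather have the script skip a weak pair than abort, a minimal sketch (assuming the message comes from the ValueError that PCV's RANSAC raises when too few inliers are found) is to wrap the estimate in a try/except:

# Guard one homography estimate: if RANSAC finds too few inliers and raises
# "did not meet fit acceptance criteria", report it and leave H_12 as None
# instead of crashing the whole run.
fp, tp = convert_points(1)
try:
    H_12 = homography.H_from_ransac(fp, tp, model)[0]
except ValueError as e:
    print('pair 1-2 skipped, too few reliable matches:', e)
    H_12 = None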

Main reference: Python computer vision common errors and solutions (continuously updated)_Lin-CT's Blog-CSDN Blog

Personal impression: this method needs images with plenty of distinct feature points to achieve a good stitching result; otherwise the program fails because there are too few feature points, or the stitched result is poor.

My images have few feature points, and the displacement between them is basically known, so I plan to build the mosaic by simple translation and overlay, as sketched below. I am debating whether to just use Photoshop directly (honestly tempting) or to find another image processing approach [can't help laughing and crying].
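
For that simple case, a minimal translation-and-overlay sketch with PIL could look like this (file names and offsets are hypothetical placeholders, and each image simply overwrites the overlap region of the previous one):

# Minimal sketch of translation-only stitching with PIL, assuming the
# horizontal offsets between neighbouring images are already known.
from PIL import Image

paths = ['1.jpg', '2.jpg', '3.jpg']      # images to place, left to right
offsets = [0, 400, 800]                  # known x-offset of each image on the canvas
images = [Image.open(p) for p in paths]

canvas_w = offsets[-1] + images[-1].width
canvas_h = max(im.height for im in images)
canvas = Image.new('RGB', (canvas_w, canvas_h))

for im, dx in zip(images, offsets):
    canvas.paste(im, (dx, 0))            # later images overwrite the overlap

canvas.save('mosaic.jpg')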

Take a look at my beautiful stitching results.

A few more reference blog posts:

SIFT feature extraction + matching_Deer dolphin's blog-CSDN blog_sift feature extraction and matching

Experiment 4 - Image Stitching_cyh_first's Blog-CSDN Blog_Image Stitching Test Picture

OpenCV image processing--common image stitching methods_C Jun Moxiao's Blog-CSDN Blog_opencv image stitching

Thanks to the bloggers above for their help in solving these problems. If this post helps you, please give it a thumbs up~~
