Table of contents
Brute-Force matching
Random sample consensus (RANSAC)
Practical exercise: image stitching
Brute-Force matching
The extracted feature vectors are compared one by one; the two feature vectors separated by the smallest distance are taken as the most similar.
kp1, des1 = sift.detectAndCompute(img1, None)
The function has two return values: the first is the keypoints (the coordinates of the feature points) and the second is the feature vectors (descriptors).
Parameters of cv2.BFMatcher():
- The first parameter selects the distance measure. Here the default, NORM_L2, is used, which computes the Euclidean distance between the descriptor arrays.
- The second parameter, crossCheck, is a Boolean that defaults to False. In this example crossCheck is True, which means the matches must be mutual: for example, the i-th feature point in A has the j-th feature point in B as its nearest neighbour, and the j-th feature point in B also has the i-th feature point in A as its nearest neighbour.
import cv2
import numpy as np
import matplotlib.pyplot as plt

img1 = cv2.imread('E:/OpenCV/image/1shu.png', 0)  # read as grayscale
img2 = cv2.imread('E:/OpenCV/image/2shu.png', 0)  # read as grayscale

def cv_show(name, img):
    cv2.imshow(name, img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

cv_show('img1', img1)
cv_show('img2', img2)

sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)  # detect keypoints and compute feature vectors (descriptors)
kp2, des2 = sift.detectAndCompute(img2, None)

# crossCheck=True means the matches must be mutual: the i-th feature point in A
# has the j-th feature point in B as its nearest neighbour, and the j-th feature
# point in B also has the i-th feature point in A as its nearest neighbour.
# NORM_L2: Euclidean distance between the (normalized) descriptor arrays;
# other feature descriptors may require a different distance measure.
bf = cv2.BFMatcher(crossCheck=True)  # BF: short for Brute-Force matcher
1-to-1 matching
- distance: the Euclidean distance between a pair of matched feature points; the smaller the value, the more similar the two feature points.
cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2):
Draws lines connecting the matched keypoints of the two images.
matches = bf.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)  # sort by distance: closest first, then second closest, ...
img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=2)  # draw lines between the matched keypoints
cv_show('img3', img3)
k-best matching
bf = cv2.BFMatcher()  # feature matcher
matches = bf.knnMatch(des1, des2, k=2)  # each point in the first image is matched to its two nearest feature points in the second image
good = []
for m, n in matches:
    # ratio test: m and n are the two candidate matches; keep the match only
    # if the nearest distance is less than 0.75 times the second-nearest one
    if m.distance < 0.75 * n.distance:
        good.append([m])
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=2)  # draw lines between the matched keypoints
cv_show('img3', img3)
If you need the matching to run faster, you can try cv2.FlannBasedMatcher, as sketched below.
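A minimal sketch of FLANN-based matching, reusing des1 and des2 from the code above (the index and search parameter values are typical choices, not from the original post):

import cv2

FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)  # kd-tree index for float descriptors such as SIFT
search_params = dict(checks=50)  # more checks = higher accuracy, slower matching
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)
# same ratio test as above to filter the matches
good = [[m] for m, n in matches if m.distance < 0.75 * n.distance]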
Random sample consensus (RANSAC)
Select an initial set of sample points and fit a model to them, given a tolerance range, then keep iterating.
After each fit, count how many data points fall within the tolerance range (the inliers); the fit with the largest number of inliers is taken as the final result.
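A minimal, self-contained sketch of this idea, here fitting a line y = a*x + b to noisy 2D points (all names and thresholds are illustrative, not from the original post):

import numpy as np

def ransac_line(points, n_iters=100, tol=1.0):
    best_inliers, best_model = 0, None
    for _ in range(n_iters):
        # 1. randomly sample the minimal set: two points define a line
        i, j = np.random.choice(len(points), 2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # skip vertical lines for this simple sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        # 2. count the points within the tolerance range (the inliers)
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = np.sum(residuals < tol)
        # 3. keep the model with the largest number of inliers
        if inliers > best_inliers:
            best_inliers, best_model = inliers, (a, b)
    return best_model

# usage: 50 points on y = 2x + 1 plus 20 random outliers
xs = np.linspace(0, 10, 50)
pts = np.vstack([np.column_stack([xs, 2 * xs + 1]),
                 np.random.uniform(0, 10, (20, 2))])
print(ransac_line(pts))  # should recover roughly (2, 1)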
homography matrix
- Describes a projective transformation between images.
- The last element is set to one, because that makes normalization easy.
- The remaining 8 unknowns need 8 equations, i.e., four pairs of points, since each point pair (x, y) yields two equations.
- To avoid fitting on wrong correspondences, the matched points are filtered with RANSAC first (a sketch follows this list).
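A minimal, self-contained sketch of estimating H with RANSAC filtering in OpenCV (the point values below are dummy placeholders; in practice they come from the matched keypoints, as in the Stitcher code later in this post):

import numpy as np
import cv2

# four or more matched point pairs (dummy values for illustration)
ptsA = np.float32([[0, 0], [1, 0], [1, 1], [0, 1], [2, 2]])
ptsB = np.float32([[10, 10], [12, 10], [12, 12], [10, 12], [14, 14]])
# estimate H with RANSAC; 4.0 is the reprojection-error threshold in pixels
H, status = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, 4.0)
# status is a mask marking which point pairs RANSAC kept as inliers
print(H)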
Practical exercise: image stitching
- Extract image features: detect keypoints and compute descriptors (SIFT).
- Estimate the homography matrix H for one of the images and warp it accordingly.
- Splice the images together.
Code to run in PyCharm
ImageStiching.py
from Stitcher import Stitcher
import cv2

def resize(img):
    height, width = img.shape[:2]
    size = (int(width * 0.4), int(height * 0.4))
    img_resize = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    return img_resize

# read the images to be stitched
imageA = cv2.imread("bag_1.jpg")
imageB = cv2.imread("bag_2.jpg")
a = resize(imageA)
b = resize(imageB)

# stitch the images into a panorama
stitcher = Stitcher()
(result, vis) = stitcher.stitch([a, b], showMatches=True)

# show all images
cv2.imshow("Image A", a)
cv2.imshow("Image B", b)
cv2.imshow("Keypoint Matches", vis)
cv2.imshow("Result", result)
cv2.waitKey(0)
cv2.destroyAllWindows()
Stitcher.py
import numpy as np
import cv2

class Stitcher:
    # stitching function
    def stitch(self, images, ratio=0.75, reprojThresh=4.0, showMatches=False):
        # unpack the input images
        (imageB, imageA) = images
        # detect SIFT keypoints in A and B and compute their feature descriptors
        (kpsA, featuresA) = self.detectAndDescribe(imageA)
        (kpsB, featuresB) = self.detectAndDescribe(imageB)
        print("kpsA, featuresA", (kpsA, featuresA))
        # match all feature points of the two images and return the result
        M = self.matchKeypoints(kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh)
        print("M", M)
        # if the result is None, no feature points were matched; abort
        if M is None:
            return None
        # otherwise, unpack the matching result
        # H is the 3x3 perspective transformation (homography) matrix
        (matches, H, status) = M
        # warp image A; result is the transformed image
        result = cv2.warpPerspective(imageA, H, (imageA.shape[1] + imageB.shape[1], imageA.shape[0]))
        self.cv_show('result', result)
        # place image B at the far left of the result image
        result[0:imageB.shape[0], 0:imageB.shape[1]] = imageB
        self.cv_show('result', result)
        # check whether the keypoint matches should be visualized
        if showMatches:
            # generate the match visualization
            vis = self.drawMatches(imageA, imageB, kpsA, kpsB, matches, status)
            # return the stitched result and the visualization
            return (result, vis)
        # return the stitched result
        return result

    def cv_show(self, name, img):
        cv2.imshow(name, img)
        cv2.waitKey(0)
        cv2.destroyAllWindows()

    def detectAndDescribe(self, image):
        # convert the color image to grayscale
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # create the SIFT detector
        descriptor = cv2.xfeatures2d.SIFT_create()
        # detect SIFT keypoints and compute the descriptors
        (kps, features) = descriptor.detectAndCompute(image, None)
        # convert the keypoints to a NumPy array of point coordinates
        kps = np.float32([kp.pt for kp in kps])
        # return the keypoints and their corresponding descriptors
        return (kps, features)

    def matchKeypoints(self, kpsA, kpsB, featuresA, featuresB, ratio, reprojThresh):
        # create the brute-force matcher
        matcher = cv2.BFMatcher()
        # use KNN (k=2) to match the SIFT features of images A and B
        rawMatches = matcher.knnMatch(featuresA, featuresB, 2)
        matches = []
        for m in rawMatches:
            # keep the match when the nearest distance is less than
            # ratio times the second-nearest distance (the ratio test)
            if len(m) == 2 and m[0].distance < m[1].distance * ratio:
                # store the indices of the two points in featuresA and featuresB
                matches.append((m[0].trainIdx, m[0].queryIdx))
        # compute the perspective transformation when more than 4 matches survive the filter
        if len(matches) > 4:
            # gather the coordinates of the matched points
            ptsA = np.float32([kpsA[i] for (_, i) in matches])
            ptsB = np.float32([kpsB[i] for (i, _) in matches])
            # estimate the perspective transformation (homography) matrix
            (H, status) = cv2.findHomography(ptsA, ptsB, cv2.RANSAC, reprojThresh)
            # return the matches, the homography and the inlier mask
            return (matches, H, status)
        # return None when there are fewer matches
        return None

    def drawMatches(self, imageA, imageB, kpsA, kpsB, matches, status):
        # initialize the visualization image, placing A and B side by side
        (hA, wA) = imageA.shape[:2]
        (hB, wB) = imageB.shape[:2]
        vis = np.zeros((max(hA, hB), wA + wB, 3), dtype="uint8")
        vis[0:hA, 0:wA] = imageA
        vis[0:hB, wA:] = imageB
        # iterate over the matches and draw each pair
        for ((trainIdx, queryIdx), s) in zip(matches, status):
            # draw the match only when it is a successful (inlier) pair
            if s == 1:
                ptA = (int(kpsA[queryIdx][0]), int(kpsA[queryIdx][1]))
                ptB = (int(kpsB[trainIdx][0]) + wA, int(kpsB[trainIdx][1]))
                cv2.line(vis, ptA, ptB, (0, 234, 0), 1)
        # return the visualization
        return vis
For a detailed walkthrough of each step, see the CSDN blog post: panorama stitching feature matching with code (shuyeah's blog).
Error 1
Importing a custom .py file as a module raises ModuleNotFoundError.
The error occurs when running the following code:
from Stitcher import Stitcher
import cv2
The error is as follows:
ModuleNotFoundError: No module named 'Stitcher'
Solution:
Normally, when a module is imported with the import statement, Python searches for the module file in the following order:
- the current directory, i.e., the directory of the currently executing program file;
- each directory listed in PYTHONPATH (an environment variable);
- Python's default installation directories.
Accordingly, there are 3 ways to fix "Python cannot find the specified module":
- temporarily append the full path of the directory containing the module file to sys.path;
- put the module in a directory that is already on the sys.path search path;
- add the path to the PYTHONPATH system environment variable.
For details, see: 3 ways to import modules in Python (super detailed).
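A minimal sketch of the first option, assuming Stitcher.py lives in a hypothetical directory such as E:/OpenCV/scripts:

import sys
# placeholder path; replace with the directory that actually contains Stitcher.py
sys.path.append('E:/OpenCV/scripts')
from Stitcher import Stitcher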
Error 2
Likely cause: a NoneType error usually means the image was not read (cv2.imread returned None); check whether the image path is correct.
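A quick guard for this, reusing the image path from earlier in the post (cv2.imread returns None instead of raising when the path is wrong):

import cv2

img = cv2.imread('E:/OpenCV/image/1shu.png')
if img is None:
    raise FileNotFoundError('image not loaded - check the path')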