Raspberry Pi TensorFlow & OpenCV Study Notes for Beginners (9)

K-Nearest Neighbors (KNN) Matching

KNN is one of the simplest of all machine-learning algorithms, and it is also available within the ORB matching framework. The difference from the `match` call in the earlier ORB example is that `match` returns only the single best match per descriptor, while `knnMatch` returns the k best matches, which can then be filtered further.
Code:

import numpy as np 
import cv2
import matplotlib.pyplot as plt

img1 = cv2.imread('football.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('shoot.jpg', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.knnMatch(des1, des2, k=1)  # k best matches per descriptor (here k=1)
# pass None for the output image; drawMatchesKnn allocates it itself
img3 = cv2.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, flags=2)
plt.imshow(img3)
plt.show()
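With `k=2`, the pairs returned by `knnMatch` can be filtered with Lowe's ratio test, which the FLANN examples below apply inline. A minimal sketch of that filter as a standalone function follows; `FakeMatch` is just a hypothetical stand-in for `cv2.DMatch` used to demonstrate it without images.

```python
def ratio_filter(knn_matches, ratio=0.75):
    """Keep a match only when its distance clearly beats the second-best."""
    good = []
    for pair in knn_matches:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

# tiny stand-in for cv2.DMatch, just to demonstrate the filter
class FakeMatch:
    def __init__(self, distance):
        self.distance = distance

pairs = [[FakeMatch(10), FakeMatch(50)],   # kept: 10 < 0.75 * 50
         [FakeMatch(40), FakeMatch(45)]]   # dropped: ambiguous match
print(len(ratio_filter(pairs)))  # → 1
```

Lowering `ratio` discards more matches; 0.75 is the commonly cited default, while the listings below use a stricter 0.5.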


FLANN Matching

FLANN (the Fast Library for Approximate Nearest Neighbors) has an internal mechanism that picks the most suitable algorithm for a dataset based on the data itself.
Code:

import numpy as np 
import cv2
import matplotlib.pyplot as plt 

queryImage = cv2.imread("chess.jpg")
trainImage = cv2.imread("chess1.jpg")

# create SIFT and detect/compute (newer OpenCV builds expose this as cv2.SIFT_create())
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(queryImage, None)
kp2, des2 = sift.detectAndCompute(trainImage, None)

# FLANN match
FLANN_INDEX_KDTREE = 1  # kd-tree index for SIFT's float descriptors
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
searchParams = dict(checks=50)

flann = cv2.FlannBasedMatcher(indexParams, searchParams)

matches = flann.knnMatch(des1, des2, k=2)

# prepare an empty mask to draw good matches
matchesMask = [[0, 0] for _ in range(len(matches))]

# Lowe's ratio test: keep a match only if it clearly beats the runner-up
for i, (m, n) in enumerate(matches):
    if m.distance < 0.5 * n.distance:  # 0.5 is the ratio threshold; smaller discards more
        matchesMask[i] = [1, 0]

drawParams = dict(matchColor=(0, 0, 255), singlePointColor=(255, 0, 0),
                  matchesMask=matchesMask, flags=0)

resultImg = cv2.drawMatchesKnn(queryImage, kp1, trainImage, kp2, matches, None, **drawParams)
plt.imshow(resultImg)
plt.show()
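The kd-tree index above is only appropriate for float descriptors like SIFT's. For binary descriptors such as ORB's, FLANN instead needs a multi-probe LSH index. A sketch of both parameter sets, using the commonly cited values from the OpenCV documentation:

```python
# FLANN index choice depends on the descriptor type:
# kd-tree for float descriptors (SIFT), LSH for binary descriptors (ORB).
FLANN_INDEX_KDTREE = 1
FLANN_INDEX_LSH = 6

# float descriptors (SIFT): a forest of 5 kd-trees
kdtree_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)

# binary descriptors (ORB): multi-probe LSH
lsh_params = dict(algorithm=FLANN_INDEX_LSH,
                  table_number=6,      # number of hash tables
                  key_size=12,         # bits per hash key
                  multi_probe_level=1) # neighboring buckets probed per query

search_params = dict(checks=50)  # higher = more accurate but slower search
# usage: flann = cv2.FlannBasedMatcher(lsh_params, search_params)
```

Either parameter dict is passed as the first argument to `cv2.FlannBasedMatcher`; the rest of the matching code stays the same.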

FLANN Homography Matching

A homography means that even when one of the two images contains projective distortion, they can still be matched.
Compared with plain FLANN matching, the changes are mainly in the post-match processing.
Code:

import numpy as np 
import cv2
import matplotlib.pyplot as plt 

MIN_MATCH_COUNT = 10

img1 = cv2.imread("chess1.jpg",cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("chess.jpg",cv2.IMREAD_GRAYSCALE)

# create SIFT and detect/compute (newer OpenCV builds expose this as cv2.SIFT_create())
sift = cv2.xfeatures2d.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN match
FLANN_INDEX_KDTREE = 1  # kd-tree index for SIFT's float descriptors
indexParams = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
searchParams = dict(checks=50)

flann = cv2.FlannBasedMatcher(indexParams, searchParams)

matches = flann.knnMatch(des1, des2, k=2)

# record all good matches (Lowe's ratio test)
good = []
for m, n in matches:
    if m.distance < 0.5 * n.distance:
        good.append(m)

if len(good) > MIN_MATCH_COUNT:  # enough good matches above the threshold
    # key-point coordinates in the query image
    src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    # key-point coordinates in the train image
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # estimate the homography with RANSAC (5.0 px reprojection threshold)
    M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    matchmask = mask.ravel().tolist()
    # project the query image's corners into the train image and draw the outline
    h, w = img1.shape
    pts = np.float32([[0, 0], [0, h-1], [w-1, h-1], [w-1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, M)
    print(dst)
    img2 = cv2.polylines(img2, [np.int32(dst)], True, (0, 255, 0), 3, cv2.LINE_AA)

else:
    print("No match")
    matchmask = None

drawParams = dict(matchColor=(0, 0, 255), singlePointColor=(255, 0, 0),
                  matchesMask=matchmask, flags=2)

resultImg = cv2.drawMatches(img1, kp1, img2, kp2, good, None, **drawParams)
plt.imshow(resultImg)
plt.show()
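The corner-projection step above relies on `cv2.perspectiveTransform`. A quick numpy sketch of the math it performs: each point is lifted to homogeneous coordinates, multiplied by the 3x3 homography `M`, then divided by the resulting `w` component. The translation-only `M` here is a made-up example to make the result easy to check.

```python
import numpy as np

# What cv2.perspectiveTransform does under the hood: apply the 3x3 homography
# M to each (x, y) point in homogeneous coordinates, then divide by w.
def apply_homography(M, pts):
    pts = np.asarray(pts, dtype=np.float64)   # shape (N, 2)
    ones = np.ones((pts.shape[0], 1))
    homog = np.hstack([pts, ones]) @ M.T      # shape (N, 3): rows are (x', y', w)
    return homog[:, :2] / homog[:, 2:3]       # dehomogenize: (x'/w, y'/w)

# a pure translation by (10, 20) written as a homography
M = np.array([[1, 0, 10],
              [0, 1, 20],
              [0, 0,  1]], dtype=np.float64)
corners = [[0, 0], [0, 99], [99, 99], [99, 0]]
print(apply_homography(M, corners))
```

For a general homography the bottom row is not `(0, 0, 1)`, which is exactly what lets straight rectangles map to the skewed quadrilateral drawn by `cv2.polylines` above.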

With feature detection covered, we can move on to object detection and recognition.


Reposted from blog.csdn.net/weixin_43874764/article/details/104349288