OpenCV45: Epipolar Geometry

Goal

In this section, you will learn

  • The basics of multi-view geometry
  • What epipoles, epipolar lines, and the epipolar constraint are

Basic Concepts

When an image is taken with a pinhole camera, an important piece of information is lost: the depth of the image, that is, how far each point in the image is from the camera, since the projection is a 3D-to-2D conversion. So an important question is whether depth information can be recovered using these cameras. The answer is to use more than one camera. Using two cameras, similar to how our two eyes work, is called stereopsis. Let's see what OpenCV offers in this field.

Before diving into images, first understand some basic concepts in multi-view geometry. This section discusses epipolar geometry. The image below shows a basic setup for capturing images of the same scene with two cameras.

epipolar.jpg

If you use only the left camera, you cannot find the 3D point corresponding to a point x in the image, because every point on the line OX projects to the same point on the image plane. But consider the right image as well: different points on the line OX project to different points (x') in the right plane. So with these two images, the correct 3D point can be triangulated. That is the whole idea.
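
As a minimal sketch of this triangulation idea (not part of this section's code; the projection matrices P1 and P2 below are illustrative placeholders, not calibration results), OpenCV's cv2.triangulatePoints can recover the 3D point from a pair of matched image points:

import cv2
import numpy as np

# Hypothetical 3x4 projection matrices P = K[R|t]; identity intrinsics
# and a unit baseline are assumptions for illustration only
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                 # left camera
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])  # right camera

# one matched point per image, as 2xN float arrays
x_left = np.array([[0.5], [0.2]])
x_right = np.array([[1.5], [0.2]])

pts4d = cv2.triangulatePoints(P1, P2, x_left, x_right)  # 4xN homogeneous
pts3d = (pts4d[:3] / pts4d[3]).T                        # dehomogenize to Nx3
print(pts3d)  # -> [[0.5, 0.2, 1.0]]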

The projections of the different points on OX form a line (l') on the right plane, called the epiline corresponding to the point x. This means that to find the point x on the right image, you only need to search along this epiline: it must lie somewhere on this line. Think of it this way: to find the matching point in the other image, you need not search the whole image, only along the epiline, which gives better performance and accuracy. This is called the epipolar constraint. Similarly, every point has its corresponding epiline in the other image. The plane XOO' is called the epipolar plane.

O and O' are the camera centers. From the setup given above, you can see that the projection of the right camera O' is seen on the left image at a point e, called the epipole. The epipole is the point where the line through the two camera centers intersects the image plane. Similarly, e' is the epipole of the left camera. In some cases you will not be able to find the epipoles in the image; they may lie outside the image (meaning one camera cannot see the other).

All epilines pass through their epipole. So, to find the location of the epipole, find many epilines and find their point of intersection.

In this section, the focus is on finding epipolar lines and epipoles. But before that, two more concepts are needed: the Fundamental Matrix (F) and the Essential Matrix (E). The Essential Matrix contains the information about translation and rotation, which describes the position of the second camera relative to the first in global coordinates. See the image below:

essential_matrix.jpg

But in practice, measurements are usually made in pixel coordinates. The Fundamental Matrix contains the same information as the Essential Matrix, plus information about the intrinsics of both cameras, so it relates the two cameras in pixel coordinates. (If rectified images are used and the points are normalized by dividing by the focal lengths, then F = E.) In short, the Fundamental Matrix F maps a point in one image to a line (an epiline) in the other image. It is computed from matching points between the two images; at least 8 such points are needed to find the fundamental matrix (when using the 8-point algorithm).
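
In symbols (standard multi-view geometry notation, added here for reference rather than taken from the original text), the epipolar constraint and the mappings just described read:

x'^T F x = 0    (epipolar constraint for matched points x, x' in pixel coordinates)
l' = F x        (epiline in the second image for a point x in the first)
E = K'^T F K    (relation between E and F, with K, K' the camera intrinsic matrices)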

Code

First, find as many matches as possible between the two images in order to estimate the fundamental matrix. For this, SIFT descriptors are used together with a FLANN-based matcher and a ratio test.

import cv2
import numpy as np
from matplotlib import pyplot as plt

img1 = cv2.imread('left.jpg', 0)   # query image (left)
img2 = cv2.imread('right.jpg', 0)  # train image (right)

sift = cv2.xfeatures2d.SIFT_create()  # in OpenCV >= 4.4, use cv2.SIFT_create()

# find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1

index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)

flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)


good = []
pts1 = []
pts2 = []

# ratio test as per Lowe's paper
for m, n in matches:
    if m.distance < 0.8 * n.distance:
        good.append(m)
        pts2.append(kp2[m.trainIdx].pt)
        pts1.append(kp1[m.queryIdx].pt)

Now that a list of the best matches from both images has been obtained, the fundamental matrix can be computed from it.

pts1 = np.int32(pts1)
pts2 = np.int32(pts2)
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_LMEDS)

# we select only inlier points
pts1 = pts1[mask.ravel() == 1]
pts2 = pts2[mask.ravel() == 1]
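
The call above uses LMedS. As an alternative sketch (the threshold and confidence values below are illustrative, not from this tutorial), RANSAC can be more robust when the fraction of outliers is high:

# alternative: RANSAC-based estimation; the 3.0 px reprojection threshold
# and 0.99 confidence are illustrative defaults, not tutorial values
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                 ransacReprojThreshold=3.0, confidence=0.99)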

Next, find the corresponding epilines. The epilines corresponding to the points in the first image are drawn on the second image, and vice versa. So a helper function is defined to draw these lines on the images.

def drawlines(img1, img2, lines, pts1, pts2):
    '''img1 - image on which we draw the epilines for the points in img2
       lines - corresponding epilines'''
    r, c = img1.shape
    img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
    img2 = cv2.cvtColor(img2, cv2.COLOR_GRAY2BGR)

    for line, pt1, pt2 in zip(lines, pts1, pts2):
        color = tuple(np.random.randint(0, 255, 3).tolist())
        # each epiline is (a, b, c) with ax + by + c = 0; intersect it
        # with the left (x = 0) and right (x = c) image borders
        x0, y0 = map(int, [0, -line[2] / line[1]])
        x1, y1 = map(int, [c, -(line[2] + line[0] * c) / line[1]])
        img1 = cv2.line(img1, (x0, y0), (x1, y1), color, 1)
        img1 = cv2.circle(img1, tuple(pt1), 5, color, -1)
        img2 = cv2.circle(img2, tuple(pt2), 5, color, -1)
    return img1, img2

Now, find the epilines in both of the images and draw them.

# Find epilines corresponding to points in the right image (second image)
# and draw the lines on the left image
lines1 = cv2.computeCorrespondEpilines(pts2.reshape(-1, 1, 2), 2, F)
lines1 = lines1.reshape(-1, 3)
img5, img6 = drawlines(img1, img2, lines1, pts1, pts2)
# Find epilines corresponding to points in the left image (first image)
# and draw the lines on the right image
lines2 = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F)
lines2 = lines2.reshape(-1, 3)
img3, img4 = drawlines(img2, img1, lines2, pts2, pts1)
plt.subplot(121)
plt.imshow(img5)
plt.subplot(122)
plt.imshow(img3)
plt.show()

Here is the result:

(Result image: epilines drawn on the left and right images)

You can see in the left image that all the epilines converge at a point outside the image on the right side; that meeting point is the epipole. For better results, images with good resolution and many non-planar points should be used.
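
Instead of locating the epipole by eye as the intersection of the drawn epilines, it can also be read directly off F: every epiline passes through the epipole, so F e = 0 for the epipole e in the first image and F^T e' = 0 for the epipole e' in the second. A minimal sketch using the SVD (assuming the F computed above and finite epipoles):

# epipoles as the null vectors of F (sketch; assumes finite epipoles)
U, S, Vt = np.linalg.svd(F)
e1 = Vt[-1]         # right null vector: epipole in image 1 (homogeneous)
e1 = e1 / e1[2]     # normalize so that e1 = (x, y, 1)
e2 = U[:, -1]       # left null vector: epipole in image 2 (homogeneous)
e2 = e2 / e2[2]
print('epipole in image 1:', e1[:2])
print('epipole in image 2:', e2[:2])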

Fundamental matrix estimation is sensitive to the quality of the matches, to outliers, and so on, and it degrades when all the selected matches lie on the same plane.
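
One cheap sanity check (an illustrative heuristic, not from the original tutorial) is the inlier fraction in the mask returned by cv2.findFundamentalMat; a low ratio suggests the estimate should not be trusted:

# illustrative heuristic: the 0.5 cutoff is an assumption, not a rule
inlier_ratio = float(mask.ravel().mean())
print(f'inlier ratio: {inlier_ratio:.2f}')
if inlier_ratio < 0.5:
    print('warning: few inliers - F may be poorly estimated')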

Additional Resources
