Optical-center offset correction for fisheye cameras based on a plane assumption

This article analyzes and summarizes the principles behind the functions plane_transfer_to_camera() and plane_transfer_from_camera() in libpano/math.c, part of the open-source software Hugin.

In the article "Panorama Correction of Fisheye Image", we described in detail how to map a fisheye image to the corresponding 360-degree panoramic image; the basic model is shown in Figure 1. The three-dimensional coordinate frame $OXYZ$ represents the physical space we are in, where the $Y$ axis points in the direction we are facing, the $Z$ axis points toward the sky, and the $X$ axis points to our right-hand side. The sphere centered at the origin $O$ can be regarded as the fisheye lens used for photography, with point $O$ as the optical center of the lens. Unrolling the sphere into a plane yields the panoramic image we need. The two-dimensional plane $O'X_fY_f$ below is the imaging plane of the camera. When shooting, a ray starting from point $P$ travels along the line $OP$ to the optical center $O$, where refraction occurs. The refraction characteristics are determined by the design model and manufacturing process of the lens; a linear model is usually used, that is, the refraction angle is linearly proportional to the incident angle. The refracted ray then travels along the line $OP_1$ to the imaging plane, where the image is upside down (rotated 180 degrees). For convenience, instead of $P_1$ we usually use $P_2$, the point symmetric to $P_1$ about the origin. All such points $P_2$ constitute the fisheye image we obtain. The main task of correcting the fisheye image is to compute, for each spherical point $P$, the corresponding coordinates $P_2$ on the fisheye image, and then obtain the pixel value of $P$ by interpolation.

Figure 1 Schematic diagram of fisheye projection

As can be seen from Figure 1, the mapping from the spherical surface to the imaging plane has an inherent limitation: in reality we cannot manufacture a lens covering the complete sphere, so we cannot obtain a full 360-degree view in one shot. However, it is not difficult to make a fisheye lens with an angle of view greater than 180 degrees. With such a lens, we can take one image facing the positive direction of the $Y$ axis, rotate 180 degrees around the $Z$ axis, and take another image facing the negative direction of the $Y$ axis. In theory, we then obtain images whose combined 360-degree view covers the entire sphere. This is what multi-eye fisheye image stitching does. But this method has a fatal flaw: we cannot capture panoramic images of moving scenes, because the images from different perspectives come from different moments. One naturally thinks of using two fisheye cameras with the same parameters, mounted facing the positive and negative directions of the $Y$ axis; after synchronization, the content of the 360-degree view can apparently be captured at the same instant. But a new problem arises: the optical centers of the two fisheye lenses are not at the same point in physical space. We call this the optical-center offset. In effect there are two spheres at different positions in Figure 1. When rays from a point in space reach the two spheres along the lines through their respective centers, the latitude-longitude coordinates of that point relative to the two sphere centers are different. Therefore we cannot directly copy the content of one sphere onto the other; otherwise there will be a gap at the seam between the images, the so-called parallax. This problem is especially obvious in close-up shooting.
It is often difficult, and in most cases even impossible, to solve the parallax problem perfectly, because the mapping from three-dimensional physical space to a two-dimensional image plane inherently loses information. Therefore we can only solve certain specific cases. For example, what if everything we are shooting lies on a single plane?

Figure 2 Schematic diagram of optical center offset correction

Figure 2 shows the process of optical-center offset correction of the fisheye lens based on the plane assumption, mainly following the method used by the open-source software Hugin, with its principle roughly summarized from reading the code. For ease of understanding it is simplified to the two-dimensional case; the principle applies equally in three-dimensional space. Point $O$ is the optical center of the main camera, that is, the origin of the physical coordinate system in Figure 1. Point $P_1$ is the optical center of the secondary camera's lens, and $OP_1$ is the optical-center offset between the two lenses. Note that the coordinate frames established at points $O$ and $P_1$ have the same orientation: the yaw, pitch and roll of the lens can be handled in other steps, so only the translation of the optical center is involved here. All point coordinates in Figure 2 are expressed with point $O$ as the origin.

Suppose there is a control point (that is, a point used for parameter optimization) on the fisheye image of the secondary camera, and its spherical coordinate obtained by inverse-mapping the fisheye projection model is $P_2$; then the actual perspective-projection point corresponding to the control point is $P_3$. We only know that the real spatial point corresponding to the control point lies on the ray from $P_1$ through $P_3$, but not at what distance. At this point we need an assumption, for example that there is a plane at point $P_0$, such as a wall, perpendicular to the line $OP_0$, where the latitude-longitude coordinates of $P_0$ on the sphere centered at $O$ are known. Note that we do not need to know the true distance of the plane. When no foreground object occludes the plane, we know that $P_3$ actually comes from $P_5$, the intersection of the line $P_1P_3$ with the plane, where the angle between the line $P_1P_3$ and $OP_0$ must be less than 90 degrees. In this way we can obtain point $P_5$, and then the latitude-longitude coordinates of $P$, the intersection of the line $OP_5$ with the sphere centered at $O$.

       Based on the similarity of triangles, we have

$$ P_1A = \frac{\overrightarrow{P_1P_3}^T \cdot \overrightarrow{P_1P_4}}{P_1P_4} = \frac{\overrightarrow{OP_2}^T \cdot \overrightarrow{OP_0}}{OP_0}, \tag{1} $$

$$ P_1B = \frac{\overrightarrow{P_1P_0}^T \cdot \overrightarrow{P_1P_4}}{P_1P_4} = \frac{\left(\overrightarrow{OP_0} - \overrightarrow{OP_1}\right)^T \cdot \overrightarrow{OP_0}}{OP_0}, \tag{2} $$

$$ \frac{P_1P_3}{P_1P_5} = \frac{P_1A}{P_1B} \Rightarrow \overrightarrow{P_1P_5} = \frac{P_1B}{P_1A} \cdot \overrightarrow{P_1P_3}. \tag{3} $$

So,

$$ \overrightarrow{OP_5} = \overrightarrow{OP_1} + \overrightarrow{P_1P_5} = \overrightarrow{OP_1} + \frac{P_1B}{P_1A} \cdot \overrightarrow{OP_2} = \overrightarrow{OP_1} + \frac{\left(\overrightarrow{OP_0} - \overrightarrow{OP_1}\right)^T \cdot \overrightarrow{OP_0}}{\overrightarrow{OP_2}^T \cdot \overrightarrow{OP_0}} \cdot \overrightarrow{OP_2}. \tag{4} $$
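Equation (4) can be sketched as a small C routine. This is an illustrative sketch under the assumptions above; the vector type, helper functions and signature are my own and do not reproduce libpano's actual plane_transfer_* code:

```c
#include <math.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 add(Vec3 a, Vec3 b) { Vec3 r = { a.x + b.x, a.y + b.y, a.z + b.z }; return r; }
static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }
static Vec3 scale(double s, Vec3 a) { Vec3 r = { s * a.x, s * a.y, s * a.z }; return r; }

/* Equation (4): given the viewing direction OP2 of the secondary camera
 * (whose optical center is at OP1) and the foot point OP0 of the assumed
 * plane, return OP5, the point where the ray from P1 along OP2 meets the
 * plane. All coordinates are in the main camera's frame with origin O. */
Vec3 plane_point_from_secondary(Vec3 op2, Vec3 op0, Vec3 op1)
{
    double p1b = dot(sub(op0, op1), op0);   /* (OP0 - OP1)^T . OP0, eq. (2) */
    double p1a = dot(op2, op0);             /*  OP2^T . OP0,        eq. (1) */
    return add(op1, scale(p1b / p1a, op2)); /* OP5 = OP1 + (P1B/P1A) . OP2  */
}
```

Normalizing the returned $\overrightarrow{OP_5}$ then gives the latitude-longitude direction of $P$ on the sphere centered at $O$.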

Similarly, if we know the latitude-longitude coordinates of $P$ on the sphere centered at $O$, then the coordinates, relative to point $P_1$, of $P_3$, the perspective projection of point $P_5$ onto the sphere centered at $P_1$, are

$$ \frac{OP_5}{OP} = \frac{OP_0}{\overrightarrow{OP}^T \overrightarrow{OP_0} / OP_0} \Rightarrow \overrightarrow{OP_5} = \frac{OP_0^2}{\overrightarrow{OP}^T \overrightarrow{OP_0}} \overrightarrow{OP}, \tag{5} $$

$$ \overrightarrow{P_1P_5} = \overrightarrow{OP_5} - \overrightarrow{OP_1} \Rightarrow \overrightarrow{OP_2} = \overrightarrow{P_1P_3} = \overrightarrow{P_1P_5} / \left\| \overrightarrow{P_1P_5} \right\|. \tag{6} $$

The premise of the plane assumption is that all points come from the same plane, which is why we do not need to know the real distance to the plane (though we do need to know its orientation). However, this is not always true; on the contrary, in many applications no single plane can cover most of the field of view. This method is therefore better suited to multi-camera stitching of large flat scenes such as wall graffiti. It is also well suited to estimating the optical-center distance of a multi-eye fisheye camera, because the control points appear in the overlapping fields of view of the lenses, and this overlap is relatively small, so a checkerboard can provide a large enough plane. However, even with a fairly accurate estimate of the optical-center distance, in actual fisheye image correction it is still difficult to avoid parallax if the required plane cannot be provided, because the spatial points come from many different depths.


Origin blog.csdn.net/qq_33552519/article/details/121039023