Fast Global Registration



This article comes from the CSDN author Dianyunxia.

1. Algorithm process

  The Fast Global Registration algorithm uses quaternary (tuple) constraints to eliminate wrong correspondence pairs from the matching relation set, and then registers the two point clouds by optimizing an objective function defined over that relation set. The algorithm consists of three steps: generation of the relation set under quaternary constraints, construction of the objective function from the relation set, and optimization of the objective function.
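  For reference, the whole pipeline is available in Open3D. The sketch below shows how the three steps map onto that library, assuming a recent Open3D release; the entry point has been renamed across versions (older releases call it `registration_fast_based_on_feature_matching`), and the voxel size and search radii are illustrative values, not prescribed by the algorithm.

```python
import open3d as o3d

def fast_global_registration(source_path, target_path, voxel_size=0.05):
    """Sketch of the three-step pipeline using Open3D's built-in FGR."""
    source = o3d.io.read_point_cloud(source_path)
    target = o3d.io.read_point_cloud(target_path)

    # Downsample, estimate normals, and compute FPFH features for one cloud
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel_size)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel_size * 5, max_nn=100))
        return down, fpfh

    source_down, source_fpfh = preprocess(source)
    target_down, target_fpfh = preprocess(target)

    # Feature matching with constraints and objective optimization run inside
    result = o3d.pipelines.registration.registration_fgr_based_on_feature_matching(
        source_down, target_down, source_fpfh, target_fpfh,
        o3d.pipelines.registration.FastGlobalRegistrationOption(
            maximum_correspondence_distance=voxel_size * 1.5))
    return result.transformation
```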

2. Generation of quaternary constraint relationship sets

  The generation of the relation set under quaternary constraints is divided into four stages: computation of the FPFH feature descriptors of the point sets, generation of the initial relation set, the mutual nearest neighbor constraint on the relation set, and the quaternary constraint on the relation set.

  1. Calculation of the FPFH feature descriptors of the point sets: the Fast Point Feature Histogram (FPFH) is used to describe the feature information of each point. This descriptor is chosen because it can be computed quickly and still captures rich feature information from the neighborhood around each point.
  2. Generation of the initial relation set: for each point $p \in P$ of point cloud $P$, find the nearest neighbor of $F(p)$ in the feature set $F(Q)$; likewise, for each point $q \in Q$ of point cloud $Q$, find the nearest neighbor of $F(q)$ in $F(P)$. The relation pairs produced by these two searches are stored in the set $K_1$. This relation set $K_1$ could already be used as input to the optimization of the objective function; in practice, however, $K_1$ contains a very large number of false matching pairs (i.e., a high proportion of outliers). The following constraints are therefore applied to $K_1$ to remove false relation pairs and reduce the number of outliers.
  3. Mutual nearest neighbor constraint on the relation set: a relation pair $(p, q)$ from $K_1$ is kept if and only if $F(p)$ is the nearest neighbor of $F(q)$ in $F(P)$ and, at the same time, $F(q)$ is the nearest neighbor of $F(p)$ in $F(Q)$. The relation pairs that satisfy this mutual nearest neighbor constraint are stored in the set $K_2$.
  4. Quaternary constraint on the relation set: to further reduce the impact of the wrong matching pairs remaining in $K_2$, an additional constraint is imposed. Four relation pairs $(p_1, q_1), (p_2, q_2), (p_3, q_3), (p_4, q_4)$ are randomly selected from $K_2$, and the compatibility of the quadruples $(p_1, p_2, p_3, p_4)$ and $(q_1, q_2, q_3, q_4)$ is checked. Specifically, the four relation pairs must satisfy the following condition:
$$\tau < \frac{\lVert p_i - p_j \rVert}{\lVert q_i - q_j \rVert} < \frac{1}{\tau}, \qquad i \neq j \tag{1}$$

where $\tau = 0.95$. Intuitively, this constraint verifies that the four relation pairs are mutually compatible. The tuples of relation pairs that satisfy the condition are stored in the set $K_3$. Figure 1 illustrates the quaternary constraint.
[Figure 1: illustration of the quaternary constraint]
After the two constraints above have been applied, the relation set used for optimization is $K = K_3$; the sketch below illustrates stages 2–4 of this pipeline.
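To make stages 2–4 concrete, here is a minimal NumPy/SciPy sketch that builds $K_1$, $K_2$ and $K_3$ from precomputed FPFH descriptor arrays. The helper name `build_relation_set`, the sampling budget `num_tuples`, and the input layout are illustrative assumptions, not part of the original implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_relation_set(P, Q, F_P, F_Q, tau=0.95, num_tuples=1000, seed=0):
    """Stages 2-4 of the relation-set generation.
    P, Q     : (N, 3) and (M, 3) point coordinates
    F_P, F_Q : per-point FPFH descriptors (e.g. 33-dimensional rows)
    """
    tree_P, tree_Q = cKDTree(F_P), cKDTree(F_Q)

    # Stage 2: nearest neighbour in feature space, in both directions -> K1
    _, nn_in_Q = tree_Q.query(F_P)   # for every p, its closest feature in F(Q)
    _, nn_in_P = tree_P.query(F_Q)   # for every q, its closest feature in F(P)

    # Stage 3: keep only mutual nearest neighbours -> K2
    K2 = np.array([(i, nn_in_Q[i]) for i in range(len(P))
                   if nn_in_P[nn_in_Q[i]] == i])
    if len(K2) < 4:
        return []

    # Stage 4: quaternary constraint -> K3. Randomly draw 4 pairs and keep
    # them only if the intra-P and intra-Q distances are compatible,
    # i.e. tau < ||p_i - p_j|| / ||q_i - q_j|| < 1/tau for all i != j.
    rng = np.random.default_rng(seed)
    K3 = set()
    for _ in range(num_tuples):
        idx = rng.choice(len(K2), size=4, replace=False)
        pi, qi = K2[idx, 0], K2[idx, 1]
        compatible = True
        for a in range(4):
            for b in range(a + 1, 4):
                dp = np.linalg.norm(P[pi[a]] - P[pi[b]])
                dq = np.linalg.norm(Q[qi[a]] - Q[qi[b]])
                ratio = dp / max(dq, 1e-12)
                if not (tau < ratio < 1.0 / tau):
                    compatible = False
        if compatible:
            K3.update(map(tuple, K2[idx]))
    return sorted(K3)
```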

3. Construction of objective function based on relational set

  For the registration of point clouds $P$ and $Q$, the goal is to find an optimal rigid transformation matrix $T$ that aligns point cloud $Q$ with $P$. The algorithm optimizes a robust objective function based on the correspondences between $P$ and $Q$. The relation set is first established by fast feature matching and the relation constraints described above. During the optimization of the objective function the relation set does not need to be recomputed, which gives the algorithm a significant advantage when registering dense and complex point clouds.
  Let $K = \{(p, q)\}$ be the relation set generated from the matching points of $P$ and $Q$. The objective function is built from the penalized errors between corresponding pairs in the relation set, and has the following form:
$$E(T) = \sum_{(p,q) \in K} \rho\left(\lVert p - T q \rVert\right) \tag{2}$$
  Here $\rho(\cdot)$ is a robust penalty function. The proper choice of robust penalty is very important: many of the error terms in objective (2) come from incorrect matching relations, and the robust penalty reduces the impact of these wrong relations on the objective, thereby achieving better registration accuracy. At the same time, to keep the computation fast, it is undesirable to perform additional operations such as downsampling and verification during the optimization. The paper therefore uses an estimator $\rho$ that performs this verification automatically without incurring extra computational cost. Its specific expression is:
$$\rho(x) = \frac{\mu\, x^2}{\mu + x^2} \tag{3}$$
[Figure 2: the scaled Geman-McClure estimator for different values of μ]

  Figure 2 shows the scaled Geman-McClure estimator for different values of $\mu$. As can be seen from Figure 2, small residuals are penalized in a least-squares manner, while the rapid flattening of the estimator neutralizes the influence of outliers in the relation set. The parameter $\mu$ controls the range over which residuals have a significant impact on the objective function.
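  For concreteness, the scaled Geman-McClure penalty of equation (3) and the resulting robust objective of equation (2) can be written as the following small sketch (the function names are illustrative):

```python
import numpy as np

def geman_mcclure(x, mu):
    """Scaled Geman-McClure penalty rho(x) = mu * x**2 / (mu + x**2), equation (3)."""
    return mu * x ** 2 / (mu + x ** 2)

def objective(T, K, P, Q, mu):
    """Robust objective E(T) of equation (2): sum of penalized pair distances."""
    E = 0.0
    for i, j in K:                            # relation pair (p, q)
        q_h = np.append(Q[j], 1.0)            # homogeneous coordinates of q
        r = np.linalg.norm(P[i] - (T @ q_h)[:3])
        E += geman_mcclure(r, mu)
    return E
```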
  Because equation (2) is not easy to optimize directly, a line process is introduced. Specifically, let $L = \{l_{p,q}\}$ with $0 < l_{p,q} < 1$; the line process variable indicates whether there is a discontinuity between the relation pair $p$ and $q$: when $l_{p,q} \to 0$ a discontinuity exists (the pair is treated as an outlier), and when $l_{p,q} \to 1$ no discontinuity exists. A joint objective function over $T$ and $L$ is then optimized, with the following form:
$$E(T, L) = \sum_{(p,q) \in K} l_{p,q}\, \lVert p - T q \rVert^2 + \sum_{(p,q) \in K} \Psi(l_{p,q}) \tag{4}$$

  Here $\Psi(l_{p,q})$ is a prior term: a penalty function representing the cost of declaring a discontinuity between $p$ and $q$. It has the form:
$$\Psi(l_{p,q}) = \mu \left(\sqrt{l_{p,q}} - 1\right)^2 \tag{5}$$
To minimize $E(T, L)$, the partial derivative with respect to $l_{p,q}$ is taken and set to zero, which gives the following formula:
$$\frac{\partial E}{\partial l_{p,q}} = \lVert p - T q \rVert^2 + \mu\, \frac{\sqrt{l_{p,q}} - 1}{\sqrt{l_{p,q}}} = 0 \tag{6}$$

Solving for $l_{p,q}$ gives:
$$l_{p,q} = \left(\frac{\mu}{\mu + \lVert p - T q \rVert^2}\right)^2 \tag{7}$$

Finally, substituting this $l_{p,q}$ back into $E(T, L)$ transforms equation (4) into equation (2). Therefore, the transformation matrix $T$ obtained by optimizing objective (4) is also the optimal solution of the original objective (2).
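The closed-form update (7) is a one-liner; the sketch below assumes `residual` is the current distance $\lVert p - Tq \rVert$ (function name illustrative):

```python
def line_process_weight(residual, mu):
    """Closed-form line process value l_{p,q} of equation (7)."""
    return (mu / (mu + residual ** 2)) ** 2
```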

  Equation (4) is non-convex, and its shape is controlled by the parameter $\mu$ of the penalty function (equation (3)). To set $\mu$ and reduce the influence of local minima, a graduated non-convexity scheme is used. From equation (4) it can be seen that $\mu$ balances the prior term and the alignment term. A larger $\mu$ makes the objective function smoother and allows more relation pairs, including some false ones, to participate in the optimization even when they cannot be tightly aligned by the transformation $T$. The optimization of objective (4) therefore starts with a very large value $\mu = D^2$, where $D$ is the diameter of the largest surface. The parameter $\mu$ is then decreased during the optimization until $\mu < \delta^2$, where $\delta$ is the distance threshold for ground-truth correspondences, at which point the optimization stops.
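  A minimal sketch of this graduated non-convexity schedule is shown below; the division factor and the update period are assumptions chosen for illustration, not values prescribed by the paper.

```python
def mu_schedule(D, delta, division_factor=2.0, update_every=4, max_iterations=64):
    """Yield (iteration, mu), starting from mu = D**2 and dividing mu by a
    constant factor every few iterations until mu drops below delta**2.
    The factor and period here are illustrative, not the paper's values."""
    mu = D ** 2
    it = 0
    while mu >= delta ** 2 and it < max_iterations:
        yield it, mu          # run one optimization step at this mu
        it += 1
        if it % update_every == 0:
            mu /= division_factor
```

Each yielded value of $\mu$ would drive one Gauss-Newton iteration of the kind described in the next section.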

4. Optimization of the objective function

  The transformation matrix $T$ is locally linearized using a 6-dimensional vector $\xi = (\omega, t) = (\alpha, \beta, \gamma, a, b, c)$, which contains the rotation component $\omega$ and the translation component $t$. $T$ is approximated as a linear function of $\xi$:
$$T \approx \begin{pmatrix} 1 & -\gamma & \beta & a \\ \gamma & 1 & -\alpha & b \\ -\beta & \alpha & 1 & c \\ 0 & 0 & 0 & 1 \end{pmatrix} T^{k} \tag{8}$$

  Here $T^k$ is the transformation matrix estimated in the previous iteration. Equation (4) then becomes a least-squares objective in $\xi$. Using the Gauss-Newton method to compute $\xi$ yields:
$$\xi = -\left(J_r^{\top} J_r\right)^{-1} J_r^{\top} r \tag{9}$$
  Here $r$ is the residual vector and $J_r$ is its Jacobian matrix. The value of $\xi$ obtained from equation (9) and the previous estimate $T^k$ are used to update $T$ through equation (8). Both steps optimize the same objective function (equation (4)), so the optimization process is guaranteed to converge. The overall flow of the algorithm is shown in Table 1.
[Table 1: overall flow of the Fast Global Registration algorithm]
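To complement Table 1, here is an illustrative sketch of a single iteration combining equations (7)-(9): each relation pair is weighted by its closed-form line-process value, the residual and Jacobian in $\xi$ are accumulated, and $T$ is updated with the linearized matrix of equation (8). The sign conventions and bookkeeping are assumptions of this sketch; a production implementation would also re-orthonormalize the rotation, since the linearized update is only approximately rigid.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def gauss_newton_step(T_k, pairs, P, Q, mu):
    """One update of equations (7)-(9): weight each pair with the closed-form
    line process, accumulate the normal equations for xi, and update T."""
    JTJ = np.zeros((6, 6))
    JTr = np.zeros(6)
    for i, j in pairs:                              # relation pair (p, q)
        q_k = (T_k @ np.append(Q[j], 1.0))[:3]      # q under the previous estimate
        e = P[i] - q_k                              # residual at xi = 0
        l = (mu / (mu + e @ e)) ** 2                # line process weight, equation (7)
        # r(xi) = p - (q_k + omega x q_k + t), so dr/d_omega = [q_k]_x, dr/dt = -I
        J = np.hstack([skew(q_k), -np.eye(3)])
        w = np.sqrt(l)
        JTJ += (w * J).T @ (w * J)
        JTr += (w * J).T @ (w * e)
    xi = -np.linalg.solve(JTJ, JTr)                 # equation (9)
    alpha, beta, gamma, a, b, c = xi
    dT = np.array([[1.0,  -gamma, beta,  a],        # linearized update, equation (8)
                   [gamma, 1.0,  -alpha, b],
                   [-beta, alpha, 1.0,   c],
                   [0.0,   0.0,   0.0,  1.0]])
    return dT @ T_k                                 # only approximately rigid
```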


Source: blog.csdn.net/qq_36686437/article/details/131768327