RF2O 2D laser odometry: source code analysis and detailed derivation of the related formulas


  I read this paper a long time ago, but back then I was still working on multi-line lidar. Recently I have returned to 2D laser odometry, so I re-read this part of the code and took notes along the way!
For the theory, please refer to the earlier post: Principles of the RF2O laser odometry method.
Some of the theoretical background is missing from this paper; it can be found in another article, DI-FODO - Fast Visual Odometry for 3D Range Sensors ~

$R(t, \alpha)$ is the range scan, where $t$ is time, $\alpha \in [0, N) \subset \mathbb{R}$ is the scan coordinate, and $N$ is the size of the scan. The position of any point $P$ relative to the local reference frame attached to the sensor is given by its polar coordinates $(r, \theta)$. If point $P$ is observed by the lidar, then $\alpha$ is given by the formula below, where $FOV$ is the scanner's field of view:
$$\alpha = \frac{N-1}{FOV}\,\theta + \frac{N-1}{2}$$
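As a quick illustration of this mapping (a minimal sketch, not taken from the RF2O sources; the helper names are hypothetical):

// alpha = k_a * theta + (N - 1) / 2, with k_a = (N - 1) / FOV
inline float angleToScanCoord(float theta, float fov, unsigned int N)
{
    const float k_a = float(N - 1) / fov;      // beams per radian
    return k_a * theta + 0.5f * float(N - 1);  // alpha in [0, N)
}

// inverse mapping: scan coordinate back to a beam angle in [-FOV/2, FOV/2]
inline float scanCoordToAngle(float alpha, float fov, unsigned int N)
{
    const float k_a = float(N - 1) / fov;
    return (alpha - 0.5f * float(N - 1)) / k_a;
}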

1. Taking off with CMakeLists

Let's start with the CMakeLists ~

find_package(catkin REQUIRED COMPONENTS  
  roscpp
  rospy
  nav_msgs
  sensor_msgs
  std_msgs
  tf  
)
find_package(Boost REQUIRED COMPONENTS system)
find_package(cmake_modules REQUIRED)
find_package(Eigen REQUIRED)
find_package(MRPT REQUIRED base obs maps slam)

  Since I plan to work with ROS later, I pay more attention to the project's dependencies. Apart from tf there is nothing unusual, which should make things easier. There is also an unfamiliar library, MRPT, which the official website introduces as follows:

  The Mobile Robot Programming Toolkit (MRPT) is a broad, cross-platform, open-source C++ library designed to help robotics researchers design and implement applications, primarily in simultaneous localization and mapping (SLAM), computer vision, and motion planning (obstacle avoidance). Its strengths are code efficiency and reusability.
  The libraries provide easy-to-use classes for 3D (6D) geometry, probability density functions (PDFs) over many predefined variables (points and poses, landmarks, maps), Bayesian estimation (Kalman filters, particle filters), image processing, path planning and obstacle avoidance, 3D visualization of all kinds of maps (points, grid maps, landmarks, ...), and more.
  Efficiently collecting, manipulating, and inspecting very large robotic datasets (Rawlogs) is another goal of MRPT, supported by several classes and applications.

  How it is used here can only be learned by reading the code. All the code sits in a single cpp file; nice!

add_executable(rf2o_laser_odometry_node src/CLaserOdometry2D.cpp)

int main(int argc, char** argv)
{
    ros::init(argc, argv, "RF2O_LaserOdom");
    CLaserOdometry2D myLaserOdom;
    ROS_INFO("initialization complete...Looping");
    ros::Rate loop_rate(myLaserOdom.freq);
    while (ros::ok()) {
        ros::spinOnce();        //Check for new laser scans
        if( myLaserOdom.is_initialized() && myLaserOdom.scan_available() ) {
            //Process odometry estimation
            myLaserOdom.odometryCalculation();
        }
        else {
            ROS_WARN("Waiting for laser_scans....") ;
        }
        loop_rate.sleep();
    }
    return(0);
}

2. The main function

First, a laser odometry object is created. The constructor of this class defines the laser data topic name /laser_scan, the published tf frames /base_link and /odom, the publishing frequency freq, and a few other basic parameters. Then the subscribed and published topics are set up.
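The constructor is not reproduced in this post; a minimal sketch of what it sets up, based on the description above (member and parameter names are my own guesses, not the repository's), might look like this:

// Sketch of the constructor's setup, assuming the members described above exist.
CLaserOdometry2D::CLaserOdometry2D()
{
    ros::NodeHandle nh;
    ros::NodeHandle pnh("~");

    // Basic parameters: input topic, tf frames, and publishing frequency.
    pnh.param<std::string>("laser_scan_topic", laser_scan_topic, "/laser_scan");
    pnh.param<std::string>("base_frame_id",    base_frame_id,    "/base_link");
    pnh.param<std::string>("odom_frame_id",    odom_frame_id,    "/odom");
    pnh.param<double>("freq", freq, 10.0);

    // Receive laser scans, publish the estimated odometry.
    laser_sub = nh.subscribe(laser_scan_topic, 1,
                             &CLaserOdometry2D::LaserCallBack, this);
    odom_pub  = nh.advertise<nav_msgs::Odometry>("odom_rf2o", 5);

    first_laser_scan   = true;
    module_initialized = false;
}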

In the main loop of the main function, before the laser odometry is computed, we check whether initialization (Init()) has been performed and whether new laser data is available. It effectively runs like two threads: one receives data through the message callback, and when new data arrives, the main loop calls the laser odometry to compute the pose.

3. Callback function and data initialization


void CLaserOdometry2D::LaserCallBack(const sensor_msgs::LaserScan::ConstPtr& new_scan) {
    //Keep in memory the last received laser_scan
    last_scan = *new_scan;
    //Initialize module on first scan
    if (first_laser_scan) {
        Init();
        first_laser_scan = false;
    }
    else {
        //copy laser scan to internal variable
        for (unsigned int i = 0; i<width; i++)
            range_wf(i) = new_scan->ranges[i];
        new_scan_available = true;
    }
}

The callback LaserCallBack itself does almost no processing: the first frame triggers Init(), and each subsequent scan is stored in range_wf for the pose computation.

void CLaserOdometry2D::Init()
{
    // Got the initial laser scan; obtain its parameters
    ROS_INFO("Got first Laser Scan .... Configuring node");
    width = last_scan.ranges.size();    // Num of samples (size) of the scan: the number of laser beams
    cols = width;						// Max resolution. Should be similar to the width parameter
    fovh = fabs(last_scan.angle_max - last_scan.angle_min); // Horizontal FOV of the lidar
    ctf_levels = 5;                     // Coarse-to-fine levels: one transformation matrix per level
    iter_irls = 5;                      // Num iterations to solve iterative reweighted least squares

    // Set laser pose on the robot (through tf)
    // This allows estimation of the odometry with respect to the robot base reference system,
    // i.e. the resulting odometry is expressed in the robot frame.
    mrpt::poses::CPose3D LaserPoseOnTheRobot;
    tf::StampedTransform transform;
    try {
        // get the transform from the laser frame to the robot base frame
        tf_listener.lookupTransform("/base_link", last_scan.header.frame_id, ros::Time(0), transform);
    }
    catch (tf::TransformException &ex) {
        ROS_ERROR("%s",ex.what());
        ros::Duration(1.0).sleep();
    }

    //TF:transform -> mrpt::CPose3D (see mrpt-ros-bridge)
    const tf::Vector3 &t = transform.getOrigin(); // translation
    LaserPoseOnTheRobot.x() = t[0];
    LaserPoseOnTheRobot.y() = t[1];
    LaserPoseOnTheRobot.z() = t[2];
    const tf::Matrix3x3 &basis = transform.getBasis(); // rotation
    mrpt::math::CMatrixDouble33 R;
    for(int r = 0; r < 3; r++)
        for(int c = 0; c < 3; c++)
            R(r,c) = basis[r][c];
    LaserPoseOnTheRobot.setRotationMatrix(R);

    //Set the initial pose: the laser pose in the robot frame
    laser_pose = LaserPoseOnTheRobot;
    laser_oldpose = LaserPoseOnTheRobot;
    
    // Init module
    //-------------
    range_wf.setSize(1, width);

    //Resize vectors according to levels: the coarse-to-fine transformation matrices
    transformations.resize(ctf_levels);
    for (unsigned int i = 0; i < ctf_levels; i++)
        transformations[i].resize(3, 3);

    //Resize the pyramid
    unsigned int s, cols_i;
    const unsigned int pyr_levels = round(log2(round(float(width) / float(cols)))) + ctf_levels; // number of pyramid levels
    range.resize(pyr_levels);
    range_old.resize(pyr_levels);
    range_inter.resize(pyr_levels);
    xx.resize(pyr_levels);
    xx_inter.resize(pyr_levels);
    xx_old.resize(pyr_levels);
    yy.resize(pyr_levels);
    yy_inter.resize(pyr_levels);
    yy_old.resize(pyr_levels);
    range_warped.resize(pyr_levels);
    xx_warped.resize(pyr_levels);
    yy_warped.resize(pyr_levels);

    for (unsigned int i = 0; i < pyr_levels; i++) {
        s = pow(2.f, int(i));
        // scale of each level: 1 - 0.5 - 0.25 - 0.125 - 0.0625
        cols_i = ceil(float(width) / float(s)); 
        
        range[i].resize(1, cols_i);
        range_inter[i].resize(1, cols_i);
        range_old[i].resize(1, cols_i);
        range[i].assign(0.0f);
        range_old[i].assign(0.0f);
        xx[i].resize(1, cols_i);
        xx_inter[i].resize(1, cols_i);
        xx_old[i].resize(1, cols_i);
        xx[i].assign(0.0f);
        xx_old[i].assign(0.0f);
        yy[i].resize(1, cols_i);
        yy_inter[i].resize(1, cols_i);
        yy_old[i].resize(1, cols_i);
        yy[i].assign(0.f);
        yy_old[i].assign(0.f);

        if (cols_i <= cols) {
            range_warped[i].resize(1, cols_i);
            xx_warped[i].resize(1, cols_i);
            yy_warped[i].resize(1, cols_i);
        }
    }

    dt.resize(1, cols);
    dtita.resize(1, cols);
    normx.resize(1, cols);
    normy.resize(1, cols);
    norm_ang.resize(1, cols);
    weights.setSize(1, cols);
    null.setSize(1, cols);
    null.assign(0);
    cov_odo.assign(0.f);

    fps = 1.f;		//In Hz
    num_valid_range = 0; // number of valid range readings

    // Compute Gaussian mask [1/16, 1/4, 6/16, 1/4, 1/16], used for the weighted means
    g_mask[0] = 1.f / 16.f; g_mask[1] = 0.25f; g_mask[2] = 6.f / 16.f; g_mask[3] = g_mask[1]; g_mask[4] = g_mask[0];
	
    kai_abs.assign(0.f);
    kai_loc_old.assign(0.f);
 
    module_initialized = true; // module initialization complete
    last_odom_time = ros::Time::now();
}

In short, initialization loads the parameters of the laser data and sets up the containers. Now that the parameters, the containers needed for the computation, and the laser data are all lined up like lambs for the slaughter, it's time to get to work.
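As a concrete example of the container sizes (assuming a scanner with width = 1081 beams, e.g. a Hokuyo UTM-30LX): since cols = width, pyr_levels = round(log2(1)) + ctf_levels = 0 + 5 = 5, and the per-level sizes cols_i = ceil(width / 2^i) come out to 1081, 541, 271, 136 and 68 samples.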


4. Main loop for calculating laser odometry

What follows is the function odometryCalculation, called from main, which computes the laser odometry. The comments in the original code are also very clear.

4.1 Create image pyramid createImagePyramid

The first step is to create the image pyramid with createImagePyramid.

Each filtered value is a weighted mean; at a boundary, only the part of the mask that falls inside the scan contributes (the formula below is reconstructed from the code):
$$R_{filt}(\alpha) = \frac{\sum_{l=-2}^{2} g_l\,w_l\,R(\alpha+l)}{\sum_{l=-2}^{2} g_l\,w_l}, \qquad w_l = \max\!\big(0,\ \Delta r_{max} - |R(\alpha+l) - R(\alpha)|\big)$$

void CLaserOdometry2D::createImagePyramid() {
	const float max_range_dif = 0.3f; // maximum allowed range difference
	//Push the frames back: update the data
	range_old.swap(range);
	xx_old.swap(xx);
	yy_old.swap(yy);

  // The number of pyramid levels can differ from the number used for the odometry
  // (because we sometimes want to work at a lower resolution)
  unsigned int pyr_levels = round(log2(round(float(width)/float(cols)))) + ctf_levels;

  //Generate levels: start building the pyramid
  for (unsigned int i = 0; i<pyr_levels; i++) {
    unsigned int s = pow(2.f,int(i)); // downsample by 2 at each level
    cols_i = ceil(float(width)/float(s));
    const unsigned int i_1 = i-1; // previous level

    //First level -> Filter (not downsampling)
    if (i == 0) {
      // i = 0: filter the first level
      for (unsigned int u = 0; u < cols_i; u++) {
        const float dcenter = range_wf(u);
        //Inner pixels: u runs from the third column to the third-to-last column
        if ((u>1)&&(u<cols_i-2)) {
          if (dcenter > 0.f) {
            // current range is positive (usually the case)
            float sum = 0.f;    // accumulator for the weighted mean
            float weight = 0.f; // total weight
            for (int l=-2; l<3; l++) {
              // take the five nearby points
              const float abs_dif = abs(range_wf(u+l)-dcenter); // absolute range difference to the center point
              if (abs_dif < max_range_dif) {
                // difference within threshold
                const float aux_w = g_mask[2+l]*(max_range_dif - abs_dif); // points closer in range get larger weight in the mean
                weight += aux_w;
                sum += aux_w*range_wf(u+l);
              }
            }
            range[i](u) = sum/weight;
          }
          else // otherwise set the filtered range to 0
            range[i](u) = 0.f;
        }

        //Boundary: handling of the border pixels
        else {
          if (dcenter > 0.f){
            // check that the range reading is valid
            float sum = 0.f;
            float weight = 0.f;
            for (int l=-2; l<3; l++)	{
              // same weighted mean, but guard against indices out of bounds
              const int indu = u+l;
              if ((indu>=0)&&(indu<cols_i)){
                const float abs_dif = abs(range_wf(indu)-dcenter);
                if (abs_dif < max_range_dif){
                  const float aux_w = g_mask[2+l]*(max_range_dif - abs_dif);
                  weight += aux_w;
                  sum += aux_w*range_wf(indu);
                }
              }
            }
            range[i](u) = sum/weight;
          }
          else
            range[i](u) = 0.f;
        }
      }
    }
    //-----------------------------------------------------------------------------
    // From the second level on, downsample. Inner and boundary points are handled
    // as on the first level, again via a weighted mean. Note, however, that from the
    // second level on the input is the previous (downsampled) level, range[i_1],
    // so each level shrinks by a factor of 2.
    // ......
  }
}

Above, we have computed the point positions contained in each level of the pyramid. An image pyramid makes the effect easy to picture: each level halves the resolution, and the "representative" value of several nearby scan points is computed through the weighted average of their ranges. Next, compute the coordinates of the points at each level:
$$\alpha = \frac{N-1}{FOV}\,\theta + \frac{N-1}{2} = k_\alpha\,\theta + \frac{N-1}{2}$$

//Calculate coordinates "xy" of the points
for (unsigned int u = 0; u < cols_i; u++) {
    if (range_1[i](u) > 0.f) {
        const float tita = -0.5f*fovh + (float(u) + 0.5f)*fovh/float(cols_i);
        xx_1[i](u) = range_1[i](u)*cos(tita);
        yy_1[i](u) = range_1[i](u)*sin(tita);
    }
    else {
        xx_1[i](u) = 0.f;
        yy_1[i](u) = 0.f;
    }
}

4.2 Eight steps of pyramid processing

The processing of each pyramid level is then divided into eight steps, which form the core of the method. The whole flow is shown first, and each function is described in detail afterwards.

void CLaserOdometry2D::odometryCalculation()
{
    m_clock.Tic();
    createImagePyramid();  // build the pyramid

    //Coarse-to-fine scheme: start from the top of the pyramid
    for (unsigned int i=0; i<ctf_levels; i++) {
        //Previous computations: initialize the transformation to identity
        transformations[i].setIdentity();

        level = i; // current level; the previous level's result seeds the next one (coarse-to-fine iteration)
        unsigned int s = pow(2.f,int(ctf_levels-(i+1)));  // we start from the top of the pyramid, so compute the scale of the coarsest level first
        cols_i = ceil(float(cols)/float(s));  // data length after scaling
        image_level = ctf_levels - i + round(log2(round(float(width)/float(cols)))) - 1; // pyramid level to process now

        //1. Perform warping 
        if (i == 0) {
            range_warped[image_level] = range[image_level];
            xx_warped[image_level] = xx[image_level];
            yy_warped[image_level] = yy[image_level];
        }
        else
            performWarping(); 

        //2. Calculate inter coords  
        calculateCoord();

        //3. Find null points 
        findNullPoints();

        //4. Compute derivatives 
        calculaterangeDerivativesSurface();

        //5. Compute normals 
        //computeNormals();

        //6. Compute weights 
        computeWeights();

        //7. Solve odometry 
        if (num_valid_range > 3) {
            solveSystemNonLinear();
            //solveSystemOneLevel();    //without robust-function
        }

        //8. Filter solution 
        filterLevelSolution(); 
    }

    m_runtime = 1000*m_clock.Tac();
    ROS_INFO("Time odometry (ms): %f", m_runtime);

    //Update poses
    PoseUpdate();
    new_scan_available = false;     //avoids the possibility to run twice on the same laser scan
}
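To make the index bookkeeping concrete (a worked example under the same assumption cols = width, so round(log2(width/cols)) = 0, and ctf_levels = 5): image_level = 4 - i and s = 2^(4-i), so iteration i = 0 processes the coarsest level 4 with s = 16, i = 1 processes level 3 with s = 8, and so on until i = 4 processes level 0 at full resolution (s = 1, cols_i = cols).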

4.2.1 calculateCoord

We follow the processing order. At the top level no warping is performed, so the coordinates are computed directly.

void CLaserOdometry2D::calculateCoord()
{
	for (unsigned int u = 0; u < cols_i; u++) // iterate over all data of this level
	{
		// skip if either the previous frame's value at this level, or the current (warped) value, is 0
		if ((range_old[image_level](u) == 0.f) || (range_warped[image_level](u) == 0.f))
		{
			range_inter[image_level](u) = 0.f;
			xx_inter[image_level](u) = 0.f;
			yy_inter[image_level](u) = 0.f;
		}
		else // otherwise take the mean of the two
		{
			range_inter[image_level](u) = 0.5f*(range_old[image_level](u) + range_warped[image_level](u));
			xx_inter[image_level](u) = 0.5f*(xx_old[image_level](u) + xx_warped[image_level](u));
			yy_inter[image_level](u) = 0.5f*(yy_old[image_level](u) + yy_warped[image_level](u));
		}
	}
}

4.2.2 findNullPoints

  Find the points with zero range: during the pyramid construction above, invalid readings (zero or too-small ranges) were set to 0.

void CLaserOdometry2D::findNullPoints()
{
	//Size of null matrix is set to its maximum size (constructor)
	// the null matrix has the maximum data size (same as the bottom pyramid level), so no out-of-bounds access
	num_valid_range = 0;
	for (unsigned int u = 1; u < cols_i-1; u++)
	{
		if (range_inter[image_level](u) == 0.f)
			null(u) = 1;
		else
		{
			num_valid_range++; // count the valid laser points
			null(u) = 0;
		}
	}
}

4.2.3 Compute derivatives

  The algorithm pays special attention to the computation of the range gradients. Typically, a fixed discrete formula is used to approximate scan or image gradients. With range data, this strategy yields very high gradient values at object boundaries, which do not represent the true gradients on those objects. As an alternative, an adaptive formulation based on the environment geometry is used: the forward and backward derivatives along the scan are weighted by the 2D distance between consecutive observations (points):
$$d(\alpha) = \left\|\big(x(\alpha) - x(\alpha-1),\; y(\alpha) - y(\alpha-1)\big)\right\|$$

// distance between each pair of adjacent points
for (unsigned int u = 0; u < cols_i-1; u++)
{
	const float dist = square(xx_inter[image_level](u+1) - xx_inter[image_level](u))
						+ square(yy_inter[image_level](u+1) - yy_inter[image_level](u));
	if (dist  > 0.f)
		rtita(u) = sqrt(dist);
}

$$R_\alpha^-(\alpha) = R(\alpha) - R(\alpha-1), \qquad R_\alpha^+(\alpha) = R(\alpha+1) - R(\alpha)$$
$$R_\alpha(\alpha) = \frac{d(\alpha+1)\,R_\alpha^-(\alpha) + d(\alpha)\,R_\alpha^+(\alpha)}{d(\alpha+1) + d(\alpha)}$$


//Spatial derivatives
for (unsigned int u = 1; u < cols_i-1; u++)
	dtita(u) = (rtita(u-1)*(range_inter[image_level](u+1)-range_inter[image_level](u)) + rtita(u)*
	    (range_inter[image_level](u) - range_inter[image_level](u-1)))/(rtita(u)+rtita(u-1));
// the first and last points are assigned separately
dtita(0) = dtita(1);
dtita(cols_i-1) = dtita(cols_i-2);

Therefore, the nearest neighbor always contributes more to the gradient computation, while distant points hardly affect it. When the two neighbors are approximately equidistant, the proposed formula reduces to the central difference method, as the following check shows.
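Indeed, setting $d(\alpha+1) = d(\alpha) = d$ in the adaptive formula gives
$$R_\alpha(\alpha) = \frac{d\,R_\alpha^-(\alpha) + d\,R_\alpha^+(\alpha)}{2d} = \frac{R(\alpha+1) - R(\alpha-1)}{2},$$
which is exactly the central difference.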

	//Temporal derivative: differentiating w.r.t. time gives the range velocity
	for (unsigned int u = 0; u < cols_i; u++)
		dt(u) = fps*(range_warped[image_level](u) - range_old[image_level](u));

4.2.4 computeNormals

void CLaserOdometry2D::computeNormals()
{
	normx.assign(0.f);
	normy.assign(0.f);
	norm_ang.assign(0.f);

	const float incr_tita = fovh/float(cols_i-1);  
	for (unsigned int u=0; u<cols_i; u++)
	{
		if (null(u) == 0.f)
		{
			const float tita = -0.5f*fovh + float(u)*incr_tita; 
			const float alfa = -atan2(2.f*dtita(u), 2.f*range[image_level](u)*incr_tita);
			norm_ang(u) = tita + alfa;
			if (norm_ang(u) < -M_PI)
				norm_ang(u) += 2.f*M_PI;
			else if (norm_ang(u) < 0.f)
				norm_ang(u) += M_PI;
			else if (norm_ang(u) > M_PI)
				norm_ang(u) -= M_PI;

			normx(u) = cos(tita + alfa);
			normy(u) = sin(tita + alfa);
		}
	}
}
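A short justification of the angle alfa used above (my reconstruction, not from the paper): a scan point is $p(\theta) = R(\theta)\,(\cos\theta, \sin\theta)$, so its tangent, expressed in the radial/tangential basis, has components $(R_\theta, R)$. Rotating the tangent by $-90°$ yields a normal that makes the angle $\beta = -\arctan(R_\theta / R)$ with the radial direction, so the absolute normal angle is $\theta + \beta$. In the code, $R_\theta$ is approximated by dtita(u)/incr_tita, which gives exactly alfa = -atan2(dtita(u), range(u)*incr_tita) (the factors of 2 cancel).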

4.2.5 computeWeights

  Following the code, we can recover the formula. With $k_\alpha = \frac{N-1}{FOV}$, the scan coordinate of the beam at angle $\theta$ is:
$$\alpha = \frac{N-1}{FOV}\,\theta + \frac{N-1}{2} = k_\alpha\,\theta + \frac{N-1}{2} \tag{1}$$

const float kdtita = float(cols_i-1)/fovh;

  Violations of the rigidity assumption and deviations from the linear approximation can lead to inaccurate constraints being imposed on the sensor motion by the scan points. Although the Cauchy M-estimator can mitigate their impact on the overall motion estimate, it does not eliminate it completely: it is difficult to detect the presence of moving objects before solving for the motion, so the Cauchy M-estimator can only down-weight them during the minimization. Deviations from the linear approximation, on the other hand, can be detected in advance, which speeds up convergence of the optimization and leads to more accurate results.

  To this end, a pre-weighting strategy is used to down-weight the residuals at those points where the range function is nonlinear or even non-differentiable. It is called "pre-weighting" because it is applied before solving the minimization problem. To quantify the error associated with the linearization, the Taylor series is extended to second order:
$$\dot r = R_t + R_\alpha\,\dot\alpha + R_{2o}(\Delta t, \dot\alpha) + O(\Delta t^2, \dot\alpha)$$
$$R_{2o}(\Delta t, \dot\alpha) = \frac{\Delta t}{2}\left(R_{tt} + R_{t\alpha}\,\dot\alpha + R_{\alpha\alpha}\,\dot\alpha^2\right) \tag{13}$$

Note that, ignoring higher-order terms, $R_{2o}(\Delta t, \dot\alpha)$ contains the second derivatives of the range function. A special case is the second derivative with respect to time $(R_{tt})$: it cannot be computed in a coarse-to-fine scheme, because the warped image is not associated with any time instant, so the concept of a second-order temporal derivative is meaningless there. Furthermore, it is important to detect those scan regions where the range function is not only nonlinear but also non-differentiable. These regions are mainly the observed edges of different objects and are usually characterized by very high first derivatives $(R_t, R_\alpha)$. To penalize both effects, nonlinearity and discontinuity, the following pre-weighting function is defined for each scan point:
$$\overline w = \frac{1}{\sqrt{\varepsilon + R_\alpha^2 + \Delta t^2 R_t^2 + K_d\left(R_{\alpha\alpha}^2 + \Delta t^2 R_{t\alpha}^2\right)}}$$

const float ini_dtita = range_old[image_level](u+1) - range_old[image_level](u-1);  // central range difference in the previous frame
const float final_dtita = range_warped[image_level](u+1) - range_warped[image_level](u-1); // and in the warped current frame

const float dtitat = ini_dtita - final_dtita; // approximates the mixed derivative R_ta
const float dtita2 = dtita(u+1) - dtita(u-1); // approximates the second spatial derivative R_aa

const float w_der = kdt*square(dt(u)) + kdtita*square(dtita(u)) + k2d*(abs(dtitat) + abs(dtita2));

The parameter $K_d$ sets the relative importance of the first and second derivatives, and $\varepsilon$ is a small constant that avoids singular cases.
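The snippet above stops at w_der. Consistent with the expression for $\overline w$, the final per-point weight would then be computed roughly as follows (a sketch; the exact constant and normalization in the repository may differ):

// Sketch: turning w_der into the pre-weight of the formula above.
const float epsilon = 1e-4f;                  // small guard constant (assumed value)
weights(u) = sqrtf(1.f / (epsilon + w_der));  // w = 1 / sqrt(eps + R_a^2 + ...)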


4.2.6 solveSystemNonLinear

  In practice, the lidar motion cannot be estimated from just three independent constraints because, in general, (8) is inexact due to noise in the range measurements, errors arising from the linear approximation (3), or the presence of moving objects (a non-static environment). Therefore, a dense formulation is used in which every scanned point contributes to the motion estimate. For each point, the geometric residual $\rho(\xi)$ is defined as the evaluation of (8) for a given twist $\xi$:
$$\rho(\xi) = R_t + \left(x\sin\theta - y\cos\theta - R_\alpha k_\alpha\right)\omega + \left(\cos\theta + \frac{R_\alpha k_\alpha \sin\theta}{r}\right)v_x + \left(\sin\theta - \frac{R_\alpha k_\alpha \cos\theta}{r}\right)v_y \tag{9}$$
with $k_\alpha = \frac{N-1}{FOV}$. Setting $\rho(\xi) = 0$ and moving $R_t$ to the right-hand side gives one linear constraint per point,
$$\begin{pmatrix} \cos\theta + \frac{R_\alpha k_\alpha \sin\theta}{r} & \sin\theta - \frac{R_\alpha k_\alpha \cos\theta}{r} & x\sin\theta - y\cos\theta - R_\alpha k_\alpha \end{pmatrix} \begin{pmatrix} v_x \\ v_y \\ \omega_z \end{pmatrix} = -R_t,$$
and stacking all valid points yields the linear system $Ax = B$.

for (unsigned int u = 1; u < cols_i-1; u++)
    if (null(u) == 0)
    {
        // Precomputed expressions
        const float tw = weights(u);
        const float tita = -0.5*fovh + u/kdtita;
        //Fill the matrix A
        A(cont, 0) = tw*(cos(tita) + dtita(u)*kdtita*sin(tita)/range_inter[image_level](u));
        A(cont, 1) = tw*(sin(tita) - dtita(u)*kdtita*cos(tita)/range_inter[image_level](u));
        A(cont, 2) = tw*(-yy[image_level](u)*cos(tita) + xx[image_level](u)*sin(tita) - dtita(u)*kdtita);
        B(cont,0) = tw*(-dt(u));
        cont++;
    }
//Solve the linear system of equations using least squares
MatrixXf AtA, AtB;
AtA.multiply_AtA(A);
AtB.multiply_AtB(A,B);
Var = AtA.ldlt().solve(AtB);
//Covariance matrix calculation 	Cov Order -> vx,vy,wz
MatrixXf res(num_valid_range,1);
res = A*Var - B;
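For readers who do not use MRPT: multiply_AtA and multiply_AtB simply build the normal equations. An equivalent pure-Eigen sketch of the same weighted least-squares solve (function name hypothetical) is:

#include <Eigen/Dense>

// Solve (A^T A) x = A^T B for x = (vx, vy, wz), mirroring
// AtA.multiply_AtA(A); AtB.multiply_AtB(A, B); Var = AtA.ldlt().solve(AtB);
Eigen::Vector3f solveWeightedLS(const Eigen::MatrixXf& A,  // rows already scaled by tw
                                const Eigen::VectorXf& B)
{
    const Eigen::Matrix3f AtA = A.transpose() * A;
    const Eigen::Vector3f AtB = A.transpose() * B;
    return AtA.ldlt().solve(AtB);
}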

  To better handle outliers (such as moving objects in the environment), the robust Cauchy M-estimator kernel is used instead of the L2 norm. The function $F$ is the Cauchy M-estimator, and $k$ is a tunable parameter.
$$F(\rho) = \frac{k^2}{2}\ln\left(1 + \left(\frac{\rho}{k}\right)^2\right) \tag{11}$$

float aver_dt = 0.f, aver_res = 0.f; unsigned int ind = 0;
for (unsigned int u = 1; u < cols_i-1; u++)
    if (null(u) == 0)
    {
        aver_dt += fabsf(dt(u));
        aver_res += fabsf(res(ind++));
    }
aver_dt /= cont; aver_res /= cont;
const float k = 10.f/aver_dt; //200
//float energy = 0.f;
//for (unsigned int i=0; i<res.rows(); i++)
//	energy += log(1.f + square(k*res(i)));
//printf("\n\nEnergy(0) = %f", energy);

  In contrast to the more common L2 or L1 norms, this function reduces the influence of points with very high residuals and is an efficient, automatic way to handle outliers. The optimization problem is solved with iteratively reweighted least squares (IRLS), where the weights associated with the Cauchy M-estimator are:
$$w(\rho) = \frac{1}{1 + \left(\frac{\rho}{k}\right)^2} \tag{12}$$

const float res_weight = sqrt(1.f/(1.f + square(k*res(cont))));
//Fill the matrix Aw
Aw(cont,0) = res_weight*A(cont,0);
Aw(cont,1) = res_weight*A(cont,1);
Aw(cont,2) = res_weight*A(cont,2);
Bw(cont) = res_weight*B(cont);

  To obtain an accurate estimate, the sensor motion is computed by minimizing all geometric residuals under the robust function $F$:
$$\xi_M = \underset{\xi}{\operatorname{argmin}} \sum_{i=1}^{N} F(\rho_i(\xi)) \tag{10}$$
With IRLS (iteratively reweighted least squares), the system is solved by recomputing the residuals and the corresponding weights until convergence.

//Solve the linear system of equations using a minimum least squares method
AtA.multiply_AtA(Aw);
AtB.multiply_AtB(Aw,Bw);
Var = AtA.ldlt().solve(AtB);
res = A*Var - B;
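Putting (10)-(12) together, the overall IRLS iteration (a compressed sketch assembled from the fragments above; Init() set iter_irls = 5) looks roughly like this:

// Compressed IRLS sketch: reweight, re-solve, recompute residuals.
for (unsigned int it = 0; it < iter_irls; ++it)
{
    // Cauchy weight of eq. (12); the sqrt is taken because the weight
    // multiplies the rows of A and B, so it enters the cost squared.
    for (unsigned int c = 0; c < cont; ++c)
    {
        const float res_weight = sqrtf(1.f/(1.f + square(k*res(c))));
        Aw(c,0) = res_weight*A(c,0);
        Aw(c,1) = res_weight*A(c,1);
        Aw(c,2) = res_weight*A(c,2);
        Bw(c)   = res_weight*B(c);
    }
    // Weighted normal equations of eq. (10), then updated residuals.
    AtA.multiply_AtA(Aw);
    AtB.multiply_AtB(Aw, Bw);
    Var = AtA.ldlt().solve(AtB);
    res = A*Var - B;
}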

  If the matrix $M$ (here $(A^w)^T A^w$) is positive definite, the scene points provide enough geometric constraints to estimate the 3 DOF of the motion. Conversely, if $M$ is rank-deficient, some linear or angular velocity components cannot be estimated, and the solver's output for those variables is meaningless. This can be tested by analyzing the covariance matrix $\Sigma_\xi \in \mathbb{R}^{3\times 3}$:
$$\Sigma_\xi = \frac{1}{N-3}\sum_{i=1}^{N} r_i^2\,\big((A^w)^T A^w\big)^{-1}$$
where $r_i$ is the residual of the least-squares solution.

cov_odo = (1.f/float(num_valid_range-3))*AtA.inverse()*res.squaredNorm();
kai_loc_level = Var;

4.2.7 filterLevelSolution

  The most common degenerate situation is a lidar that only observes a wall. Motion parallel to the wall is then undetermined, so the solver assigns it an arbitrary value (this affects not only this method but any purely geometry-based one). To alleviate the problem, a low-pass filter is applied in the eigenspace of the velocity $\xi$; it works as follows.
  First, the eigenvalues of the covariance matrix $\Sigma \in \mathbb{R}^{3\times3}$ are analyzed to detect which motions (or combinations of motions) are under-determined and which are fully constrained. In the eigenvector basis, the new velocity $\xi_M^t$ is weighted against the velocity $\xi^{t-1}$ of the previous time interval, yielding the filtered velocity:
$$\big[(1 + k_l)I + k_e E\big]\,\xi^t = \xi_M^t + (k_l I + k_e E)\,\xi^{t-1} \tag{20}$$

where $E$ is a diagonal matrix containing the eigenvalues, and $k_l$ and $k_e$ are the filter parameters. Specifically, $k_l$ applies a constant weighting between the solver's solution and the previous estimate, while $k_e$ defines how the eigenvalues affect the final estimate:
$$k_l = 0.05\,e^{-(l-1)}, \qquad k_e = 15\times10^3\,e^{-(l-1)} \tag{21}$$

where $l$ is the pyramid level, ranging from 1 (coarsest) to the number of levels considered.
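A quick check against the code below: the code uses a 0-based level, so $l = \text{level} + 1$ and $e^{-(l-1)} = e^{-\text{level}}$; hence df = 0.05f*expf(-level) is $k_l$ and cf = 15e3f*expf(-level) is $k_e$, matching (21) exactly.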

// compute the eigenvalues and eigenvectors
SelfAdjointEigenSolver<MatrixXf> eigensolver(cov_odo);

// first, express the new linear and angular velocities in the eigenvector basis
Matrix<float,3,3> Bii;
Matrix<float,3,1> kai_b;
Bii = eigensolver.eigenvectors();  // the eigenvectors
kai_b = Bii.colPivHouseholderQr().solve(kai_loc_level); // solve Bii*x = kai_loc_level with Eigen

// second, the old linear and angular velocities must also be expressed in the eigenvector basis
CMatrixFloat31 kai_loc_sub;

// important: the motion already estimated by the previous levels must be subtracted
Matrix3f acu_trans;
acu_trans.setIdentity();
// to get the true change at this level, accumulate the previous changes
for (unsigned int i=0; i<level; i++)
	acu_trans = transformations[i]*acu_trans;

kai_loc_sub(0) = -fps*acu_trans(0,2);
kai_loc_sub(1) = -fps*acu_trans(1,2);
if (acu_trans(0,0) > 1.f)
	kai_loc_sub(2) = 0.f;
else
	kai_loc_sub(2) = -fps*acos(acu_trans(0,0))*sign(acu_trans(1,0));
kai_loc_sub += kai_loc_old;

Matrix<float,3,1> kai_b_old;
kai_b_old = Bii.colPivHouseholderQr().solve(kai_loc_sub);

//Filter speed
const float cf = 15e3f*expf(-int(level)), df = 0.05f*expf(-int(level)); //  cf = k_e, df = k_l

  The velocity estimate must be filtered at every level of the pyramid, because any level may lack geometric features. However, the final velocity estimate of the previous frame cannot be used directly to filter each level's solution, since the previous levels have already estimated part of the motion, which must be subtracted first. The filter performs the following sequential steps at each pyramid level:

  1. Accumulate the velocity estimates of all previous levels into the total estimate $\xi_i^{t,acu}$ for the current level $i$.
  2. Subtract $\xi_i^{t,acu}$ from the final velocity estimate $\xi^{t-1}$ of the previous frame to obtain $\xi_i^{t,sub}$.
  3. Compute the covariance matrix $\Sigma_{\xi,i}^t$.
  4. Compute its eigenvalues $D_i^t$ and eigenvectors $E_i^t$.
  5. Express the least-squares solution $\xi_i^{t,s}$ and $\xi_i^{t,sub}$ in the basis $E_i^t$.
  6. Apply (20) with $\xi_{i,E}^{t,s}$ and $\xi_{i,E}^{t,sub}$ as inputs to obtain $\xi_{i,E}^t$.
  7. Transform $\xi_{i,E}^t$ back to the sensor coordinate frame.
$$\xi^t = \big[(1 + k_l)I + k_e E\big]^{-1}\left(\xi_M^t + (k_l I + k_e E)\,\xi^{t-1}\right) \tag{20}$$
Matrix<float,3,1> kai_b_fil;
for (unsigned int i=0; i<3; i++)
{
		kai_b_fil(i,0) = (kai_b(i,0) + (cf*eigensolver.eigenvalues()(i,0) + df)*kai_b_old(i,0))/(1.f + 
		    cf*eigensolver.eigenvalues()(i,0) + df);
		//kai_b_fil_f(i,0) = (1.f*kai_b(i,0) + 0.f*kai_b_old_f(i,0))/(1.0f + 0.f);
}

// transform the filtered velocity back to the local reference frame and compute the transformation
Matrix<float,3,1> kai_loc_fil = Bii.inverse().colPivHouseholderQr().solve(kai_b_fil);

4.2.8 performWarping

  The linearization is only valid for small displacements between consecutive scans, or for constant range gradients (which, for a lidar, arise only in very unusual geometries: Archimedean spirals). To overcome this limitation, the motion is estimated in a coarse-to-fine scheme, where coarser levels provide rough estimates that are subsequently refined at finer levels.

  Let $R_0$ and $R_1$ be two consecutive laser scans. Initially, two Gaussian pyramids are created by successively downsampling the original scans $R_0$ and $R_1$ (typically by a factor of 2). Normally a Gaussian mask is used to downsample RGB or grayscale images, but for range data a standard Gaussian filter is not the best choice, since it produces artifacts in the filtered scan. As an alternative, a bilateral filter is employed, which does not mix distant points that may belong to different objects in the scene. Once the pyramids are built, the velocity estimation problem is solved iteratively from the coarsest to the finest level. At every transition to a finer level, one of the two input scans must be warped against the other according to the overall velocity $\xi_p$ estimated at the previous levels.

  This warping is always applied in two steps; in this formulation, it is applied to the second scan $R_1$. First, the rigid-body motion associated with the twist $\xi_p$ is applied to every point $P$ observed in $R_1$:
$$\begin{pmatrix} x^w \\ y^w \\ 1 \end{pmatrix} = e^{\hat\xi_p} \begin{pmatrix} x \\ y \\ 1 \end{pmatrix}, \qquad \hat\xi_p = \begin{pmatrix} 0 & -\omega_p & v_{x,p} \\ \omega_p & 0 & v_{y,p} \\ 0 & 0 & 0 \end{pmatrix} \tag{16}$$

  Second, the transformed points must be reprojected onto $R_1$ to build the warped scan $R_1^w$:
$$R_1^w(\alpha^w) = \sqrt{(x^w)^2 + (y^w)^2} \tag{17}$$
$$\alpha^w = k_\alpha \arctan\!\left(\frac{y^w}{x^w}\right) + \frac{N-1}{2} = \frac{N-1}{FOV}\arctan\!\left(\frac{y^w}{x^w}\right) + \frac{N-1}{2} \tag{18}$$

//Transform point to the warped reference frame
const float x_w = acu_trans(0,0)*xx[image_level](j) + acu_trans(0,1)*yy[image_level](j) + acu_trans(0,2);
const float y_w = acu_trans(1,0)*xx[image_level](j) + acu_trans(1,1)*yy[image_level](j) + acu_trans(1,2);
const float tita_w = atan2(y_w, x_w);
const float range_w = sqrt(x_w*x_w + y_w*y_w);

//Calculate warping
const float uwarp = kdtita*(tita_w + 0.5*fovh);

  Several points may be warped to the same coordinate $\alpha^w$. In that case, either:

  1. the closest point is kept (the others would be occluded), or
  2. a weighted average is formed (the option taken in the code below).
// a warped pixel (generally not an integer) contributes to the surrounding pixels
if (( uwarp >= 0.f)&&( uwarp < cols_lim)){
	const int uwarp_l = uwarp;
	const int uwarp_r = uwarp_l + 1;
	const float delta_r = float(uwarp_r) - uwarp;
	const float delta_l = uwarp - float(uwarp_l);

	//1. if it is very close to an integer, accumulate it directly at that position
	if (abs(round(uwarp) - uwarp) < 0.05f){
		range_warped[image_level](round(uwarp)) += range_w;
		wacu(round(uwarp)) += 1.f; // weight of 1
	}
	else{
		// 2. otherwise spread it over the two neighboring pixels with weights
		const float w_r = square(delta_l);
		range_warped[image_level](uwarp_r) += w_r*range_w;
		wacu(uwarp_r) += w_r; // right pixel: weighted by the squared distance from the left integer

		const float w_l = square(delta_r);
		range_warped[image_level](uwarp_l) += w_l*range_w;
		wacu(uwarp_l) += w_l; // left pixel: weighted by the squared distance from the right integer
	}
}

  If $\xi_p$ converges to the true velocity, the warped scan $R_1^w$ will be closer to the first scan $R_0$ than the original $R_1$ is, which allows the linear approximation (2) to be applied at finer resolution.


4.3 PoseUpdate

void CLaserOdometry2D::PoseUpdate()
{
	//First, compute the overall transformation
	//---------------------------------------------------
	Matrix3f acu_trans;
	acu_trans.setIdentity();
	for (unsigned int i=1; i<=ctf_levels; i++)
		acu_trans = transformations[i-1]*acu_trans;


	//				Compute kai_loc and kai_abs
	//--------------------------------------------------------
	kai_loc(0) = fps*acu_trans(0,2);
	kai_loc(1) = fps*acu_trans(1,2);
	if (acu_trans(0,0) > 1.f)
		kai_loc(2) = 0.f;
	else
		kai_loc(2) = fps*acos(acu_trans(0,0))*sign(acu_trans(1,0));

	//cout << endl << "Arc cos (incr tita): " << kai_loc(2);

	float phi = laser_pose.yaw();

	kai_abs(0) = kai_loc(0)*cos(phi) - kai_loc(1)*sin(phi);
	kai_abs(1) = kai_loc(0)*sin(phi) + kai_loc(1)*cos(phi);
	kai_abs(2) = kai_loc(2);


	//						Update poses
	//-------------------------------------------------------
	laser_oldpose = laser_pose;
	math::CMatrixDouble33 aux_acu = acu_trans;
	poses::CPose2D pose_aux_2D(acu_trans(0,2), acu_trans(1,2), kai_loc(2)/fps);
    laser_pose = laser_pose + pose_aux_2D;



	//				Compute kai_loc_old
	//-------------------------------------------------------
	phi = laser_pose.yaw();
	kai_loc_old(0) = kai_abs(0)*cos(phi) + kai_abs(1)*sin(phi);
	kai_loc_old(1) = -kai_abs(0)*sin(phi) + kai_abs(1)*cos(phi);
	kai_loc_old(2) = kai_abs(2);


    ROS_INFO("LASERodom = [%f %f %f]",laser_pose.x(),laser_pose.y(),laser_pose.yaw());


    // GET ROBOT POSE from LASER POSE
    //------------------------------  
    mrpt::poses::CPose3D LaserPoseOnTheRobot_inv;
    tf::StampedTransform transform;
    try
    {
        tf_listener.lookupTransform(last_scan.header.frame_id, "/base_link", ros::Time(0), transform);
    }
    catch (tf::TransformException &ex)
    {
        ROS_ERROR("%s",ex.what());
        ros::Duration(1.0).sleep();
    }

    //TF:transform -> mrpt::CPose3D (see mrpt-ros-bridge)
    const tf::Vector3 &t = transform.getOrigin();
    LaserPoseOnTheRobot_inv.x() = t[0];
    LaserPoseOnTheRobot_inv.y() = t[1];
    LaserPoseOnTheRobot_inv.z() = t[2];
    const tf::Matrix3x3 &basis = transform.getBasis();
    mrpt::math::CMatrixDouble33 R;
    for(int r = 0; r < 3; r++)
        for(int c = 0; c < 3; c++)
            R(r,c) = basis[r][c];
    LaserPoseOnTheRobot_inv.setRotationMatrix(R);

    //Compose Transformations
    robot_pose = laser_pose + LaserPoseOnTheRobot_inv;
    ROS_INFO("BASEodom = [%f %f %f]",robot_pose.x(),robot_pose.y(),robot_pose.yaw());


    // Estimate linear/angular speeds (mandatory for base_local_planner)
    // last_scan -> the last scan received
    // last_odom_time -> the time of the previous laser scan used to estimate the pose
    //-------------------------------------------------------------------------------------
    double time_inc_sec = (last_scan.header.stamp - last_odom_time).toSec();
    last_odom_time = last_scan.header.stamp;
    double lin_speed = acu_trans(0,2) / time_inc_sec;
    //double lin_speed = sqrt( square(robot_oldpose.x()-robot_pose.x()) + square(robot_oldpose.y()-robot_pose.y()) )/time_inc_sec;
    double ang_inc = robot_pose.yaw() - robot_oldpose.yaw();
    if (ang_inc > 3.14159)
        ang_inc -= 2*3.14159;
    if (ang_inc < -3.14159)
        ang_inc += 2*3.14159;
    double ang_speed = ang_inc/time_inc_sec;
    robot_oldpose = robot_pose;
}
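PoseUpdate computes robot_pose, lin_speed and ang_speed, but the publishing code is not shown in this post. A minimal sketch of how the result could be published as nav_msgs/Odometry (topic and member names are assumptions, not the repository's) follows:

// Sketch: publish the estimated pose and velocities as nav_msgs/Odometry,
// assuming an odom_pub advertised in the constructor as sketched earlier.
nav_msgs::Odometry odom;
odom.header.stamp    = last_scan.header.stamp;
odom.header.frame_id = "/odom";
odom.child_frame_id  = "/base_link";
// Pose: 2D pose lifted to 3D (z = 0, yaw-only quaternion).
odom.pose.pose.position.x  = robot_pose.x();
odom.pose.pose.position.y  = robot_pose.y();
odom.pose.pose.orientation = tf::createQuaternionMsgFromYaw(robot_pose.yaw());
// Twist in the robot frame (what base_local_planner expects).
odom.twist.twist.linear.x  = lin_speed;
odom.twist.twist.angular.z = ang_speed;
odom_pub.publish(odom);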

If you have any questions, feel free to discuss them with me. Writing this up was not easy; please credit the source when reposting! More SLAM posts will follow~

Code:
RF2O:
Github: Estimation of 2D odometry based on planar laser scans. Useful for mobile robots with innacurate base odometry.

SRF is a continuation of RF2O:
Github: Robust Planar Odometry Based on Symmetric Range Flow and Multi-Scan Alignment

Original post: blog.csdn.net/qq_39266065/article/details/123585525