Dissecting the Visual SLAM 14 Lectures ch13 code by hand (5): stereo initialization StereoInit()

In the previous article, we analyzed the Frontend::AddFrame() function, which runs one of three functions, StereoInit(), Track(), or Reset(), depending on the front-end status variable FrontendStatus. The first one to run is the initialization function StereoInit():
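As a quick recap of the previous article, the dispatch in Frontend::AddFrame() looks roughly like this (abridged from the book's code):

bool Frontend::AddFrame(Frame::Ptr frame) {
    current_frame_ = frame;
    switch (status_) {
        case FrontendStatus::INITING:
            StereoInit();   // not yet initialized: try stereo initialization
            break;
        case FrontendStatus::TRACKING_GOOD:
        case FrontendStatus::TRACKING_BAD:
            Track();        // initialized: track the new frame
            break;
        case FrontendStatus::LOST:
            Reset();        // tracking lost: reset
            break;
    }
    last_frame_ = current_frame_;
    return true;
}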

The steps of stereo initialization are quite simple: based on optical flow matching between the left and right images, find map points that can be triangulated, and build an initial map if there are enough of them:

  • First, use the DetectFeatures() function to extract GFTT feature points from the left image.
  • Then use the FindFeaturesInRight() function to run LK optical flow from the left image to the right image: starting from the position of each left-image feature point, find the position of the corresponding feature point in the right image.
  • If the number of matched feature points is at least num_features_init_ (default 100), use the BuildInitMap() function to initialize the map. The matched feature points are triangulated using the relative pose of the two cameras, which yields the 3D position of each point.
  • After the map is initialized successfully, the front-end status is changed to TRACKING_GOOD, and the current frame and the map points are sent to the viewer_ thread for display.
// Stereo initialization:
// based on optical flow matching between the left and right images, look for
// map points that can be triangulated, and build the initial map on success.
bool Frontend::StereoInit() {
    // Extract GFTT feature points from the left image (returns their number).
    int num_features_left = DetectFeatures();
    // Track them into the right image with optical flow.
    int num_coor_features = FindFeaturesInRight();

    // If fewer feature points were matched than num_features_init_ (default 100), return false.
    if (num_coor_features < num_features_init_) {
        return false;
    }

    // Initialize the map.
    bool build_map_success = BuildInitMap();

    // If map initialization succeeded, set the front-end status to TRACKING_GOOD
    // and pass the current frame and map points to the viewer_ thread.
    if (build_map_success) {
        status_ = FrontendStatus::TRACKING_GOOD;
        // On the visualization side: add the current frame and refresh the whole map.
        if (viewer_) {
            viewer_->AddCurrentFrame(current_frame_);
            viewer_->UpdateMap();
        }
        return true;  // map initialization succeeded
    }
    return false;  // map initialization failed
}

Next, let's take a closer look at each function's implementation:

DetectFeatures() function:

  • The first step is to make feature extraction more uniform with a mask: a filled 20x20 rectangle is drawn around each existing feature point, and no new feature points are extracted inside these regions;
  • In the second step, new feature points are extracted under the mask from the first step;
  • Finally, push_back these feature points into the current frame's feature list: the current frame current_frame_ is associated with the pixel position kp of each newly detected feature point, and the number of newly detected feature points is counted and returned.
/* Extract GFTT feature points from the left image (returns their number).
   In OpenCV, a mask defines the region of interest, i.e. the region to be processed.
   A mask is usually created in two steps: first create an all-zero (black) image of type
   CV_8UC1 or CV_8UC3 with the same size as the original, e.g. mask = Mat::zeros(image.size(), CV_8UC1);
   then set the region of interest to non-zero with cv::rectangle() or cv::fillPoly().
*/
int Frontend::DetectFeatures() {
    cv::Mat mask(current_frame_->left_img_.size(), CV_8UC1, 255);
    // Loop over the existing left-image features of the current frame and mask them out.
    for (auto feat : current_frame_->features_left_) {
        /*
            void cv::rectangle(cv::InputOutputArray img, cv::Point pt1, cv::Point pt2,
                               const cv::Scalar &color, int thickness = 1, int lineType = 8, int shift = 0)
            Draws a rectangle outline or a filled rectangle whose two opposite corners are pt1 and pt2.
            position_.pt is the keypoint's pixel coordinate; the rectangle extends 10 pixels in each
            direction around it, and color 0 with CV_FILLED fills it black (excluded from detection).
        */
        cv::rectangle(mask, feat->position_.pt - cv::Point2f(10, 10),
                      feat->position_.pt + cv::Point2f(10, 10), 0, CV_FILLED);
    }
    // Vector to receive the detected keypoints.
    std::vector<cv::KeyPoint> keypoints;

    // Detect feature points in the left image.
    /*
        virtual void cv::Feature2D::detect(cv::InputArray image, std::vector<cv::KeyPoint> &keypoints,
                                           cv::InputArray mask = noArray())
        Detects keypoints in an image (first variant) or an image set (second variant).
        Parameters:
            image     - the image.
            keypoints - the detected keypoints. In the second variant, keypoints[i] is the set of
                        keypoints detected in images[i].
            mask      - optional mask specifying where to look for keypoints; it must be an 8-bit
                        integer matrix with non-zero values in the region of interest.
        Here gftt_ is a GFTT corner detector.
    */
    gftt_->detect(current_frame_->left_img_, keypoints, mask);

    // Associate current_frame_ with each new keypoint, counting how many were detected.
    int cnt_detected = 0;
    for (auto &kp : keypoints) {
        current_frame_->features_left_.push_back(
            Feature::Ptr(new Feature(current_frame_, kp)));
        cnt_detected++;
    }
    LOG(INFO) << "Detect " << cnt_detected << " new features";
    return cnt_detected;
}
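For context, gftt_ is a cv::GFTTDetector ("good features to track" corner detector). In the book's code it is created in the Frontend constructor from configuration values, roughly like this (a sketch; the Config keys come from the book's config file):

Frontend::Frontend() {
    // maxCorners from the config, qualityLevel 0.01, minDistance 20 pixels
    gftt_ = cv::GFTTDetector::create(Config::Get<int>("num_features"), 0.01, 20);
    num_features_init_ = Config::Get<int>("num_features_init");
    num_features_ = Config::Get<int>("num_features");
}

Note how the minDistance of 20 pixels matches the 20x20 mask rectangles above: both enforce a minimum spacing between feature points.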

FindFeaturesInRight() function:

  • Step 1: assign initial values. If a left-image feature point already has a corresponding map point, the pixel coordinates of its initial guess in the right image are computed by projecting the map point's 3D position with the current frame's pose. If there is no corresponding map point, the initial value for the right image is simply the same as the left-image position. Good initial values speed up the optical flow computation and, to some extent, keep the flow from locking onto the wrong point.
  • Step 2: call the LK optical flow method in OpenCV to track the positions of the feature points in the right image.
  • Step 3: push_back each matched feature point into the current frame's right-image feature list; if there is no match, push a null pointer instead.
  • Finally, count the number of matched feature points.
// Track the left-image feature points into the right image with optical flow.
int Frontend::FindFeaturesInRight() {
    // Initial values.
    std::vector<cv::Point2f> kps_left, kps_right;
    // Copy the left-image feature positions of the current frame into kps_left.
    // If a feature already has an associated map point, project that 3D point with the
    // current frame's pose to get its pixel coordinates in the right image; otherwise
    // use the left-image position as the initial value.
    for (auto &kp : current_frame_->features_left_) {
        kps_left.push_back(kp->position_.pt);
        auto mp = kp->map_point_.lock();  // promote the weak_ptr; non-null if a map point exists
        // If this feature point has a corresponding map point:
        if (mp) {
            // use projected points as initial guess
            auto px = camera_right_->world2pixel(mp->Pos(), current_frame_->Pose());
            // store it as the right-image keypoint
            kps_right.push_back(cv::Point2f(px[0], px[1]));
        }
        // Otherwise, use the left-image coordinates as the initial value.
        else {
            kps_right.push_back(kp->position_.pt);
        }
    }

    // Estimate the right-image points with LK optical flow.
    std::vector<uchar> status;
    cv::Mat error;
    // OpenCV's built-in optical flow tracker:
    /*
        CV_EXPORTS_W void calcOpticalFlowPyrLK( InputArray prevImg, InputArray nextImg,
                                        InputArray prevPts, InputOutputArray nextPts,
                                        OutputArray status, OutputArray err,
                                        Size winSize = Size(21,21), int maxLevel = 3,
                                        TermCriteria criteria = TermCriteria(TermCriteria::COUNT+TermCriteria::EPS, 30, 0.01),
                                        int flags = 0, double minEigThreshold = 1e-4 );
        Calculates an optical flow for a sparse feature set using the iterative Lucas-Kanade method with pyramids.
    */
    cv::calcOpticalFlowPyrLK(current_frame_->left_img_,
                             current_frame_->right_img_, kps_left, kps_right,
                             status, error,
                             cv::Size(11, 11), 3,
                             cv::TermCriteria(cv::TermCriteria::COUNT + cv::TermCriteria::EPS, 30, 0.01),
                             cv::OPTFLOW_USE_INITIAL_FLOW); // with this flag, the values already in nextPts
                                                            // serve as the initial estimate; without it,
                                                            // prevPts is copied to nextPts as the estimate

    // Count the matched feature points and store them.
    int num_good_pts = 0;
    for (size_t i = 0; i < status.size(); ++i) {
        // status[i] is non-zero if the flow for this point was found.
        if (status[i]) {
            cv::KeyPoint kp(kps_right[i], 7);  // keypoint diameter 7 for the right image
            Feature::Ptr feat(new Feature(current_frame_, kp));
            feat->is_on_left_image_ = false;
            current_frame_->features_right_.push_back(feat);
            num_good_pts++;
        } else {
            // Tracking failed: push a null pointer to keep indices aligned with the left features.
            current_frame_->features_right_.push_back(nullptr);
        }
    }
    // Log the result.
    LOG(INFO) << "Find " << num_good_pts << " in the right image.";
    return num_good_pts;  // number of successfully matched point pairs
}
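The world2pixel() call used above for the initial guess, and the pixel2camera() call used in BuildInitMap() below, both come from the Camera class, which implements the pinhole projection chain. A sketch of the relevant members (abridged from the book's Camera class; pose_ is this camera's extrinsic transform from the reference frame, and fx_, fy_, cx_, cy_ are the intrinsics):

class Camera {
   public:
    double fx_ = 0, fy_ = 0, cx_ = 0, cy_ = 0;  // pinhole intrinsics
    SE3 pose_;  // extrinsics: transform from the reference frame to this camera

    // world -> camera: apply the frame pose T_c_w, then this camera's extrinsic pose_
    Vec3 world2camera(const Vec3 &p_w, const SE3 &T_c_w) { return pose_ * T_c_w * p_w; }

    // camera -> pixel: pinhole projection with the intrinsics
    Vec2 camera2pixel(const Vec3 &p_c) {
        return Vec2(fx_ * p_c(0, 0) / p_c(2, 0) + cx_,
                    fy_ * p_c(1, 0) / p_c(2, 0) + cy_);
    }

    Vec2 world2pixel(const Vec3 &p_w, const SE3 &T_c_w) {
        return camera2pixel(world2camera(p_w, T_c_w));
    }

    // pixel -> camera: back-project to the normalized plane (depth defaults to 1)
    Vec3 pixel2camera(const Vec2 &p_p, double depth = 1) {
        return Vec3((p_p(0, 0) - cx_) * depth / fx_,
                    (p_p(1, 0) - cy_) * depth / fy_,
                    depth);
    }
};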

BuildInitMap() function:

First, recall from StereoInit() that the map is only initialized if the number of feature points matched by optical flow tracking in the previous step is at least num_features_init_ (default 100). BuildInitMap() then proceeds as follows:

  • Step 1: get the poses of the two cameras. These are set when the system is initialized: VisualOdometry::Init() calls the Frontend::SetCameras() function, and the actual pose of each camera is assigned in the Dataset::Init() function.
  • Step 2: compute the normalized-plane coordinates of each matched feature point in both cameras.
  • Step 3: use the triangulation() function from the algorithm.h file to compute the 3D position of the feature point in the world coordinate system (the position is found by building a linear system of equations and solving it with SVD).
bool Frontend::BuildInitMap() {
    // Get the poses of the left and right cameras.
    std::vector<SE3> poses{camera_left_->pose(), camera_right_->pose()};
    size_t cnt_init_landmarks = 0;  // counter of initial landmarks
    // Loop over the left-image feature points.
    for (size_t i = 0; i < current_frame_->features_left_.size(); ++i) {
        // If there is no corresponding right-image feature point, skip this one.
        if (current_frame_->features_right_[i] == nullptr) continue;
        // create map point from triangulation: get the normalized-plane coordinates for both views
        std::vector<Vec3> points{
            camera_left_->pixel2camera(
                Vec2(current_frame_->features_left_[i]->position_.pt.x,
                     current_frame_->features_left_[i]->position_.pt.y)),
            camera_right_->pixel2camera(
                Vec2(current_frame_->features_right_[i]->position_.pt.x,
                     current_frame_->features_right_[i]->position_.pt.y))};
        // The triangulated 3D point, initialized to zero.
        Vec3 pworld = Vec3::Zero();

        // Try to triangulate each matched point pair.
        /*
            inline bool myslam::triangulation(const std::vector<SE3> &poses, std::vector<Vec3> points, Vec3 &pt_world)
            linear triangulation with SVD
            Parameters:
                poses    - camera poses,
                points   - points in the normalized plane
                pt_world - triangulated point in the world
            Returns:
                true if success
        */
        if (triangulation(poses, points, pworld) && pworld[2] > 0) {
            // Create a new map point to store the result.
            auto new_map_point = MapPoint::CreateNewMappoint();
            new_map_point->SetPos(pworld);

            // Register the observing feature points with the new map point.
            new_map_point->AddObservation(current_frame_->features_left_[i]);
            new_map_point->AddObservation(current_frame_->features_right_[i]);

            // Point the current frame's features at the new map point.
            current_frame_->features_left_[i]->map_point_ = new_map_point;
            current_frame_->features_right_[i]->map_point_ = new_map_point;

            // One more initial landmark.
            cnt_init_landmarks++;
            // Insert the new map point into the map.
            map_->InsertMapPoint(new_map_point);
        }
    }
    // Mark this initializing frame as a keyframe.
    current_frame_->SetKeyFrame();
    // Insert the keyframe into the map.
    map_->InsertKeyFrame(current_frame_);

    // Let the backend update the map (runs in the backend thread).
    backend_->UpdateMap();

    LOG(INFO) << "Initial map created with " << cnt_init_landmarks
              << " map points";
    // The initial map was built successfully from this single stereo frame.
    return true;
}
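The backend_->UpdateMap() call at the end just wakes up the backend optimization thread. In the book's code it is essentially a condition-variable notification, roughly (a sketch; the member names data_mutex_ and map_update_ follow the book's Backend class):

void Backend::UpdateMap() {
    std::unique_lock<std::mutex> lock(data_mutex_);
    map_update_.notify_one();  // wake the backend loop so it runs an optimization pass
}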

Finally, the triangulation() function itself:

For the derivation behind the implementation, you can take a look at the SVD decomposition part of the linked blog post.
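In brief: let $X$ be the homogeneous world coordinates of the point and $P_i$ the $3 \times 4$ pose matrix of view $i$ (poses[i].matrix3x4() in the code). With normalized coordinates $x_i = (u_i, v_i, 1)^T$, the projection constraint $x_i \times (P_i X) = 0$ yields two independent equations per view:

$$u_i \, P_i^{(3)} X - P_i^{(1)} X = 0, \qquad v_i \, P_i^{(3)} X - P_i^{(2)} X = 0,$$

where $P_i^{(k)}$ denotes the $k$-th row of $P_i$. These are exactly the rows written into A in the loop below. Stacking them over all views gives the homogeneous system $A X = 0$, whose least-squares solution is the right singular vector associated with the smallest singular value, i.e. the last column of $V$.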

// Compute the position of a feature point by building a linear system and solving it with SVD.
/**
 * linear triangulation with SVD
 * @param poses     poses,
 * @param points    points in normalized plane
 * @param pt_world  triangulated point in the world
 * @return true if success
 */

inline bool triangulation(const std::vector<SE3> &poses,
                   const std::vector<Vec3> points, Vec3 &pt_world) {
    MatXX A(2 * poses.size(), 4);
    VecX b(2 * poses.size());  // b is declared but never used; the system solved is A x = 0
    b.setZero();
    for (size_t i = 0; i < poses.size(); ++i) {
        Mat34 m = poses[i].matrix3x4();
        A.block<1, 4>(2 * i, 0) = points[i][0] * m.row(2) - m.row(0);
        A.block<1, 4>(2 * i + 1, 0) = points[i][1] * m.row(2) - m.row(1);
    }
    // Eigen::ComputeThinU and Eigen::ComputeThinV request the thin SVD, which computes only
    // the first min(rows, cols) singular vectors; this is faster and uses less memory.
    auto svd = A.bdcSvd(Eigen::ComputeThinU | Eigen::ComputeThinV);
    /*
        Take the last column of the V matrix from the SVD as the solution: svd.matrixV().col(3).
        Its homogeneous scale factor is svd.matrixV()(3, 3), and head<3>() takes the first three values.
    */
    pt_world = (svd.matrixV().col(3) / svd.matrixV()(3, 3)).head<3>();

    if (svd.singularValues()[3] / svd.singularValues()[2] < 1e-2) {
        // The smallest singular value is clearly separated from the rest, so the
        // null-space solution is well defined: accept it (the caller treats true as success).
        return true;
    }
    return false;  // the solution quality is poor, give up
}
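A minimal usage sketch with hypothetical values: place the left camera at the world origin and the right camera 0.5 m to its right, both observing the world point (1, 0.5, 4). Since the poses are world-to-camera transforms, the right camera's translation is (-0.5, 0, 0):

// normalized-plane coordinates are (X/Z, Y/Z, 1) in each camera frame
std::vector<SE3> poses{SE3(), SE3(SO3(), Vec3(-0.5, 0, 0))};
std::vector<Vec3> points{Vec3(1.0 / 4, 0.5 / 4, 1),    // seen from the left camera
                         Vec3(0.5 / 4, 0.5 / 4, 1)};   // seen from the right camera
Vec3 pt_world = Vec3::Zero();
if (triangulation(poses, points, pt_world)) {
    // pt_world should come out close to (1, 0.5, 4)
}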

Origin: blog.csdn.net/weixin_62952541/article/details/133172976