4. How to implement dense mapping in ORB-SLAM3 (2): controlling dense mapping from the three major threads, with an analysis of the dense mapping code

Continuing from the previous chapter, which covered everything up to the point of entering the tracking thread.

Table of contents

1 Dense mapping changes in the tracking thread

2 Changes in dense mapping in the local mapping thread

3 Changes in dense mapping in the loop closing thread

3.1 LoopClosing::CorrectLoop after essential graph optimization

3.2 LoopClosing::MergeLocal after pure visual map fusion

3.3 LoopClosing::RunGlobalBundleAdjustment after global BA

4 Some APIs for dense mapping

4.0 Constructor

4.1 PointCloudMapping::insertKeyFrame inserts key frames

4.2 Dense mapping display thread PointCloudMapping::viewer

4.3 Map update PointCloudMapping::updatecloud

4.4 Generating dense point clouds from keyframes and depth maps PointCloudMapping::generatePointCloud


1 Dense mapping changes in the tracking thread

        Let’s review how to enter the tracking thread:

        First, ROS receives the RGB image and the depth image, soft-synchronizes them (approximate-time matching), and feeds the pair into the GrabRGBD callback:

        Second, inside GrabRGBD the images in ROS format are converted to cv::Mat and handed to the tracking thread:
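        A minimal sketch of this entry point, modeled on the stock ros_rgbd.cc example that ships with ORB-SLAM3 (topic names, queue sizes and argument handling here are placeholders, not taken from the post; error handling for cv_bridge is omitted): an approximate-time synchronizer pairs the RGB and depth messages, and GrabRGBD converts them with cv_bridge before calling TrackRGBD.

#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <cv_bridge/cv_bridge.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include "System.h"

class ImageGrabber
{
public:
    ImageGrabber(ORB_SLAM3::System* pSLAM) : mpSLAM(pSLAM) {}

    // Called with one RGB message and one depth message that passed the soft synchronization
    void GrabRGBD(const sensor_msgs::ImageConstPtr& msgRGB, const sensor_msgs::ImageConstPtr& msgD)
    {
        // Convert the ROS images into cv::Mat (the tracker clones whatever it keeps)
        cv_bridge::CvImageConstPtr cv_ptrRGB = cv_bridge::toCvShare(msgRGB);
        cv_bridge::CvImageConstPtr cv_ptrD   = cv_bridge::toCvShare(msgD);
        // Hand the synchronized pair to the tracking thread
        mpSLAM->TrackRGBD(cv_ptrRGB->image, cv_ptrD->image, cv_ptrRGB->header.stamp.toSec());
    }

    ORB_SLAM3::System* mpSLAM;
};

int main(int argc, char** argv)
{
    ros::init(argc, argv, "RGBD");
    ros::NodeHandle nh;

    // argv[1] = vocabulary file, argv[2] = settings file (placeholders)
    ORB_SLAM3::System SLAM(argv[1], argv[2], ORB_SLAM3::System::RGBD, true);
    ImageGrabber igb(&SLAM);

    // Soft (approximate-time) synchronization of the RGB and depth streams
    message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/camera/rgb/image_raw", 100);
    message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/camera/depth_registered/image_raw", 100);
    typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> sync_pol;
    message_filters::Synchronizer<sync_pol> sync(sync_pol(10), rgb_sub, depth_sub);
    sync.registerCallback(boost::bind(&ImageGrabber::GrabRGBD, &igb, _1, _2));

    ros::spin();
    SLAM.Shutdown();
    return 0;
}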

        Now let’s look at how dense mapping is performed in the tracking thread:

        Only one change is needed in the tracking thread: dense reconstruction is performed only for keyframes, never for ordinary frames, so only the CreateNewKeyFrame function has to be modified.

        When it has been decided to create a new keyframe, its RGB image and depth image are handed to the dense mapping thread (Section 4.1 describes this function in detail):

    if(mpSystem->densemapping)
        mpPointCloudMapping->insertKeyFrame( pKF, this->mImRGB, this->mImDepth,vpKFs);
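        Where do mImRGB and mImDepth come from? The post does not show it; a plausible sketch (the member names and the exact place they are set are assumptions about this dense-mapping fork) is to cache copies of the incoming images near the top of Tracking::GrabImageRGBD, before the usual grayscale conversion:

    // Fragment: hypothetical additions near the top of Tracking::GrabImageRGBD
    // (assumed members in Tracking.h: cv::Mat mImRGB; cv::Mat mImDepth;)
    mImRGB   = imRGB.clone();   // cache the color image of the current frame for the dense map
    mImDepth = imD.clone();     // cache the raw depth image of the current frame
    // the cached depth still needs the usual CV_32F conversion / depth-factor scaling
    // before generatePointCloud can treat it as metric depth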

2 Changes in dense mapping in the local mapping thread

        There is only one change here: the keyframes that survive culling are sent not only to the loop closing thread but also to the dense mapping queue (the surrounding IMU-initialization code is shown only for context).

                // Step 9: if the accumulated time since the first stage of IMU initialization succeeded is still below the threshold, run VIBA
                if ((mTinit<50.0f) && mbInertial)
                {
                    // Step 9.1: decide whether to run VIBA1 (second IMU initialization)
                    // Conditions: 1. the map of the current keyframe has finished IMU initialization, and 2. tracking is in the OK state
                    if(mpCurrentKeyFrame->GetMap()->isImuInitialized() && mpTracker->mState==Tracking::OK) // Enter here everytime local-mapping is called
                    {
                        // The map of the current keyframe has not yet completed VIBA 1
                        if(!mpCurrentKeyFrame->GetMap()->GetIniertialBA1()){
                            // If the accumulated time exceeds 5 s, start VIBA1 (second stage of IMU initialization)
                            if (mTinit>5.0f)
                            {
                                cout << "start VIBA 1" << endl;
                                mpCurrentKeyFrame->GetMap()->SetIniertialBA1();
                                if (mbMonocular)
                                    InitializeIMU(1.f, 1e5, true);
                                else
                                    InitializeIMU(1.f, 1e5, true);

                                cout << "end VIBA 1" << endl;
                            }
                        }
                        // Step 9.2: decide whether to run VIBA2 (third IMU initialization)
                        // The map of the current keyframe has not yet completed VIBA 2
                        else if(!mpCurrentKeyFrame->GetMap()->GetIniertialBA2()){
                            if (mTinit>15.0f){
                                cout << "start VIBA 2" << endl;
                                mpCurrentKeyFrame->GetMap()->SetIniertialBA2();
                                if (mbMonocular)
                                    InitializeIMU(0.f, 0.f, true);
                                else
                                    InitializeIMU(0.f, 0.f, true);

                                cout << "end VIBA 2" << endl;
                            }
                        }

                        // scale refinement
                        // Step 9.3: while the map still has relatively few keyframes (<= 200 here), refine the scale and gravity direction several times at fixed time intervals
                        if (((mpAtlas->KeyFramesInMap())<=200) &&
                                ((mTinit>25.0f && mTinit<25.5f)||
                                (mTinit>35.0f && mTinit<35.5f)||
                                (mTinit>45.0f && mTinit<45.5f)||
                                (mTinit>55.0f && mTinit<55.5f)||
                                (mTinit>65.0f && mTinit<65.5f)||
                                (mTinit>75.0f && mTinit<75.5f))){
                            if (mbMonocular)
                                // Uses all keyframes, but only optimizes scale, gravity direction, velocities and biases (i.e. all inertia-related quantities)
                                ScaleRefinement();
                        }
                    }
                }
            }

#ifdef REGISTER_TIMES
            vdLBASync_ms.push_back(timeKFCulling_ms);
            vdKFCullingSync_ms.push_back(timeKFCulling_ms);
#endif
            // Step 10: insert the current keyframe into the loop closing queue
            mpLoopCloser->InsertKeyFrame(mpCurrentKeyFrame);
            // Added for dense mapping: also hand the new keyframes to the dense mapping queue
            for(auto pKF : mlNewKeyFrameForDenseMap)
                mpPointCloudMapping->insertKeyFrame(pKF);
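        The post does not show where mlNewKeyFrameForDenseMap is filled or cleared; one plausible arrangement (the placement is an assumption, not confirmed by the post) is to append each keyframe when LocalMapping pops it from its input queue and to clear the list right after the hand-off above:

    // Fragment: hypothetical bookkeeping for the dense-mapping hand-off.
    // In LocalMapping::ProcessNewKeyFrame(), after the keyframe has been taken off
    // mlNewKeyFrames and inserted into the map:
    //     mlNewKeyFrameForDenseMap.push_back(mpCurrentKeyFrame);
    // And immediately after the for loop shown above, so keyframes are not handed over twice:
    mlNewKeyFrameForDenseMap.clear();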

3 Changes in dense mapping in the loop closing thread

3.1 LoopClosing::CorrectLoop after essential graph optimization

        After a loop has been detected and corrected, the map changes substantially, so the dense map has to be updated as well.

        In LoopClosing::CorrectLoop, after the essential graph optimization has finished, the following code is added:

    if(pLoopMap->IsInertial() && pLoopMap->isImuInitialized())
    {
        Optimizer::OptimizeEssentialGraph4DoF(pLoopMap, mpLoopMatchedKF, mpCurrentKF, NonCorrectedSim3, CorrectedSim3, LoopConnections);
    }
    else
    {
        //cout << "Loop -> Scale correction: " << mg2oLoopScw.scale() << endl;
        // Step 7: optimize the essential graph; LoopConnections are the new connections created by the loop closure, excluding the direct connection between the current keyframe and its loop-matched keyframe
        Optimizer::OptimizeEssentialGraph(pLoopMap, mpLoopMatchedKF, mpCurrentKF, NonCorrectedSim3, CorrectedSim3, LoopConnections, bFixedScale);
    }
#ifdef REGISTER_TIMES
    std::chrono::steady_clock::time_point time_EndOpt = std::chrono::steady_clock::now();

    double timeOptEss = std::chrono::duration_cast<std::chrono::duration<double,std::milli> >(time_EndOpt - time_EndFusion).count();
    vdLoopOptEss_ms.push_back(timeOptEss);
#endif

    mpAtlas->InformNewBigChange();


    // Start a new thread that runs PointCloudMapping::updatecloud on the mpPointCloudMapping
    // instance, passing a reference to the current keyframe's map (*mpCurrentKF->GetMap()),
    // so the dense point cloud is rebuilt in the background with the corrected poses.
    mpPointCloudMapping->mabIsUpdating = false;  // force any running update to stop so a new one can start
    mpThreadDML = new thread(&PointCloudMapping::updatecloud, mpPointCloudMapping, std::ref(*mpCurrentKF->GetMap()));
    // detach() separates the new thread from the std::thread object so it runs independently
    // in the background and is never joined by the loop closing thread; when it finishes,
    // it cleans up its own resources.
    mpThreadDML->detach();
    cout << "Map updated!" << endl;

        The mabIsUpdating flag tells any point cloud update that is still running to abort: it would be working with the pre-correction poses, so its result is no longer meaningful. A new thread is then started that runs PointCloudMapping::updatecloud on the corrected map, and detach() lets it run in the background independently of the loop closing thread.
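        Since mabIsUpdating is written here by the loop closing thread while updatecloud polls it from the detached update thread, it should be an atomic flag. A minimal sketch of the members involved (these declarations are assumptions about this fork; they are not shown in the post):

    // In PointCloudMapping.h (sketch):
    //     std::atomic<bool> mabIsUpdating{false};            // set to false to abort a running update
    // In LoopClosing.h (sketch):
    //     shared_ptr<PointCloudMapping> mpPointCloudMapping; // shared with Tracking / System
    //     std::thread* mpThreadDML = nullptr;                // detached dense-map update thread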

3.2 LoopClosing::MergeLocal after pure visual map fusion

        After the pure visual map merge, the dense map also has to be updated. The update is triggered in the same way as before:

    mpPointCloudMapping->mabIsUpdating = false;  // force any running update to stop so a new one can start
    mpThreadDML = new thread(&PointCloudMapping::updatecloud, mpPointCloudMapping, std::ref(*pMergeMap));
    mpThreadDML->detach();
    cout << "Map updated!" << endl;

3.3 LoopClosing::RunGlobalBundleAdjustment after global BA

        The dense map has to be updated here as well, because the camera poses change significantly after global BA:

/**
 * @brief Called from MergeLocal and CorrectLoop
 * @param pActiveMap the current map
 * @param nLoopKF the keyframe at which the loop was detected, not the older keyframe it was matched against
 */
void LoopClosing::RunGlobalBundleAdjustment(Map* pActiveMap, unsigned long nLoopKF)
{  
    Verbose::PrintMess("Starting Global Bundle Adjustment", Verbose::VERBOSITY_NORMAL);

#ifdef REGISTER_TIMES
    std::chrono::steady_clock::time_point time_StartFGBA = std::chrono::steady_clock::now();

    nFGBA_exec += 1;

    vnGBAKFs.push_back(pActiveMap->GetAllKeyFrames().size());
    vnGBAMPs.push_back(pActiveMap->GetAllMapPoints().size());
#endif

    // Returns true only after IMU initialization has succeeded; it becomes true as soon as the first stage succeeds
    const bool bImuInit = pActiveMap->isImuInitialized();

    if(!bImuInit)
        Optimizer::GlobalBundleAdjustemnt(pActiveMap,10,&mbStopGBA,nLoopKF,false);
    else
        // This branch runs only when there is a single map with fewer than 200 keyframes and the first stage of IMU initialization has finished
        Optimizer::FullInertialBA(pActiveMap,7,false,nLoopKF,&mbStopGBA);

#ifdef REGISTER_TIMES
    std::chrono::steady_clock::time_point time_EndGBA = std::chrono::steady_clock::now();

    double timeGBA = std::chrono::duration_cast<std::chrono::duration<double,std::milli> >(time_EndGBA - time_StartFGBA).count();
    vdGBA_ms.push_back(timeGBA);

    if(mbStopGBA)
    {
        nFGBA_abort += 1;
    }
#endif
    // Record the current global-BA index, used to check whether the global BA ended because it was interrupted
    int idx =  mnFullBAIdx;
    // Optimizer::GlobalBundleAdjustemnt(mpMap,10,&mbStopGBA,nLoopKF,false);

    // Update all MapPoints and KeyFrames
    // Local Mapping was active during BA, that means that there might be new keyframes
    // not included in the Global BA and they are not consistent with the updated map.
    // We need to propagate the correction through the spanning tree
    {
        unique_lock<mutex> lock(mMutexGBA);
        if(idx!=mnFullBAIdx)
            return;

        if(!bImuInit && pActiveMap->isImuInitialized())
            return;

        if(!mbStopGBA)
        {
            Verbose::PrintMess("Global Bundle Adjustment finished", Verbose::VERBOSITY_NORMAL);
            Verbose::PrintMess("Updating map ...", Verbose::VERBOSITY_NORMAL);

            mpLocalMapper->RequestStop();
            // Wait until Local Mapping has effectively stopped

            while(!mpLocalMapper->isStopped() && !mpLocalMapper->isFinished())
            {
                usleep(1000);
            }

            // Get Map Mutex
            unique_lock<mutex> lock(pActiveMap->mMutexMapUpdate);
            // cout << "LC: Update Map Mutex adquired" << endl;

            //pActiveMap->PrintEssentialGraph();
            // Correct keyframes starting at map first keyframe
            list<KeyFrame*> lpKFtoCheck(pActiveMap->mvpKeyFrameOrigins.begin(),pActiveMap->mvpKeyFrameOrigins.end());

            // Update keyframes that did not take part in the global optimization via the spanning tree; a keyframe shares the most covisible points with its parent, so the parent serves as its reference
            while(!lpKFtoCheck.empty())
            {
                KeyFrame* pKF = lpKFtoCheck.front();
                const set<KeyFrame*> sChilds = pKF->GetChilds();
                //cout << "---Updating KF " << pKF->mnId << " with " << sChilds.size() << " childs" << endl;
                //cout << " KF mnBAGlobalForKF: " << pKF->mnBAGlobalForKF << endl;
                Sophus::SE3f Twc = pKF->GetPoseInverse();
                //cout << "Twc: " << Twc << endl;
                //cout << "GBA: Correct KeyFrames" << endl;
                // Breadth-first search
                for(set<KeyFrame*>::const_iterator sit=sChilds.begin();sit!=sChilds.end();sit++)
                {
                    KeyFrame* pChild = *sit;
                    if(!pChild || pChild->isBad())
                        continue;

                    // Handle the new keyframes that did not take part in the optimization
                    if(pChild->mnBAGlobalForKF!=nLoopKF)
                    {
                        //cout << "++++New child with flag " << pChild->mnBAGlobalForKF << "; LoopKF: " << nLoopKF << endl;
                        //cout << " child id: " << pChild->mnId << endl;
                        Sophus::SE3f Tchildc = pChild->GetPose() * Twc;
                        //cout << "Child pose: " << Tchildc << endl;
                        //cout << "pKF->mTcwGBA: " << pKF->mTcwGBA << endl;
                        pChild->mTcwGBA = Tchildc * pKF->mTcwGBA;//*Tcorc*pKF->mTcwGBA;

                        Sophus::SO3f Rcor = pChild->mTcwGBA.so3().inverse() * pChild->GetPose().so3();
                        if(pChild->isVelocitySet()){
                            pChild->mVwbGBA = Rcor * pChild->GetVelocity();
                        }
                        else
                            Verbose::PrintMess("Child velocity empty!! ", Verbose::VERBOSITY_NORMAL);


                        //cout << "Child bias: " << pChild->GetImuBias() << endl;
                        pChild->mBiasGBA = pChild->GetImuBias();


                        pChild->mnBAGlobalForKF = nLoopKF;  // mark it as already updated

                    }
                    lpKFtoCheck.push_back(pChild);
                }

                //cout << "-------Update pose" << endl;
                pKF->mTcwBefGBA = pKF->GetPose();
                //cout << "pKF->mTcwBefGBA: " << pKF->mTcwBefGBA << endl;
                pKF->SetPose(pKF->mTcwGBA);
                /*cv::Mat Tco_cn = pKF->mTcwBefGBA * pKF->mTcwGBA.inv();
                cv::Vec3d trasl = Tco_cn.rowRange(0,3).col(3);
                double dist = cv::norm(trasl);
                cout << "GBA: KF " << pKF->mnId << " had been moved " << dist << " meters" << endl;
                double desvX = 0;
                double desvY = 0;
                double desvZ = 0;
                if(pKF->mbHasHessian)
                {
                    cv::Mat hessianInv = pKF->mHessianPose.inv();

                    double covX = hessianInv.at<double>(3,3);
                    desvX = std::sqrt(covX);
                    double covY = hessianInv.at<double>(4,4);
                    desvY = std::sqrt(covY);
                    double covZ = hessianInv.at<double>(5,5);
                    desvZ = std::sqrt(covZ);
                    pKF->mbHasHessian = false;
                }
                if(dist > 1)
                {
                    cout << "--To much distance correction: It has " << pKF->GetConnectedKeyFrames().size() << " connected KFs" << endl;
                    cout << "--It has " << pKF->GetCovisiblesByWeight(80).size() << " connected KF with 80 common matches or more" << endl;
                    cout << "--It has " << pKF->GetCovisiblesByWeight(50).size() << " connected KF with 50 common matches or more" << endl;
                    cout << "--It has " << pKF->GetCovisiblesByWeight(20).size() << " connected KF with 20 common matches or more" << endl;

                    cout << "--STD in meters(x, y, z): " << desvX << ", " << desvY << ", " << desvZ << endl;


                    string strNameFile = pKF->mNameFile;
                    cv::Mat imLeft = cv::imread(strNameFile, CV_LOAD_IMAGE_UNCHANGED);

                    cv::cvtColor(imLeft, imLeft, CV_GRAY2BGR);

                    vector<MapPoint*> vpMapPointsKF = pKF->GetMapPointMatches();
                    int num_MPs = 0;
                    for(int i=0; i<vpMapPointsKF.size(); ++i)
                    {
                        if(!vpMapPointsKF[i] || vpMapPointsKF[i]->isBad())
                        {
                            continue;
                        }
                        num_MPs += 1;
                        string strNumOBs = to_string(vpMapPointsKF[i]->Observations());
                        cv::circle(imLeft, pKF->mvKeys[i].pt, 2, cv::Scalar(0, 255, 0));
                        cv::putText(imLeft, strNumOBs, pKF->mvKeys[i].pt, CV_FONT_HERSHEY_DUPLEX, 1, cv::Scalar(255, 0, 0));
                    }
                    cout << "--It has " << num_MPs << " MPs matched in the map" << endl;

                    string namefile = "./test_GBA/GBA_" + to_string(nLoopKF) + "_KF" + to_string(pKF->mnId) +"_D" + to_string(dist) +".png";
                    cv::imwrite(namefile, imLeft);
                }*/


                if(pKF->bImu)
                {
                    //cout << "-------Update inertial values" << endl;
                    pKF->mVwbBefGBA = pKF->GetVelocity();
                    //if (pKF->mVwbGBA.empty())
                    //    Verbose::PrintMess("pKF->mVwbGBA is empty", Verbose::VERBOSITY_NORMAL);

                    //assert(!pKF->mVwbGBA.empty());
                    pKF->SetVelocity(pKF->mVwbGBA);
                    pKF->SetNewBias(pKF->mBiasGBA);                    
                }

                lpKFtoCheck.pop_front();
            }

            //cout << "GBA: Correct MapPoints" << endl;
            // Correct MapPoints
            // Update the map points
            const vector<MapPoint*> vpMPs = pActiveMap->GetAllMapPoints();

            for(size_t i=0; i<vpMPs.size(); i++)
            {
                MapPoint* pMP = vpMPs[i];

                if(pMP->isBad())
                    continue;
                
                // NOTICE: not every map point takes part in the global BA directly, but most map points still need to correct their positions according to the BA result
                // If this map point did take part in the global BA, simply reset its position
                if(pMP->mnBAGlobalForKF==nLoopKF)
                {
                    // If optimized by Global BA, just update
                    pMP->SetWorldPos(pMP->mPosGBA);
                }
                else  // Otherwise this map point did not take part in the global BA directly, so correct its position using the new pose of its reference keyframe
                {
                    // Update according to the correction of its reference keyframe
                    // i.e. the reference keyframe itself was not corrected by the global BA triggered by the current keyframe either
                    //? Why can this happen? Perhaps because the reference keyframe of this map point has been set to bad
                    KeyFrame* pRefKF = pMP->GetReferenceKeyFrame();

                    if(pRefKF->mnBAGlobalForKF!=nLoopKF)
                        continue;

                    /*if(pRefKF->mTcwBefGBA.empty())
                        continue;*/

                    // Map to non-corrected camera
                    // cv::Mat Rcw = pRefKF->mTcwBefGBA.rowRange(0,3).colRange(0,3);
                    // cv::Mat tcw = pRefKF->mTcwBefGBA.rowRange(0,3).col(3);
                    // Transform the point into the camera frame of its reference keyframe (using the pre-GBA pose)
                    Eigen::Vector3f Xc = pRefKF->mTcwBefGBA * pMP->GetWorldPos();

                    // Backproject using corrected camera
                    // Then transform it back to the world frame using the corrected pose of the reference keyframe
                    pMP->SetWorldPos(pRefKF->GetPoseInverse() * Xc);
                }
            }

            pActiveMap->InformNewBigChange();
            pActiveMap->IncreaseChangeIndex();

            // TODO Check this update
            // mpTracker->UpdateFrameIMU(1.0f, mpTracker->GetLastKeyFrame()->GetImuBias(), mpTracker->GetLastKeyFrame());

            mpLocalMapper->Release();

#ifdef REGISTER_TIMES
            std::chrono::steady_clock::time_point time_EndUpdateMap = std::chrono::steady_clock::now();

            double timeUpdateMap = std::chrono::duration_cast<std::chrono::duration<double,std::milli> >(time_EndUpdateMap - time_EndGBA).count();
            vdUpdateMap_ms.push_back(timeUpdateMap);

            double timeFGBA = std::chrono::duration_cast<std::chrono::duration<double,std::milli> >(time_EndUpdateMap - time_StartFGBA).count();
            vdFGBATotal_ms.push_back(timeFGBA);
#endif
            Verbose::PrintMess("Map updated!", Verbose::VERBOSITY_NORMAL);

            mpPointCloudMapping->mabIsUpdating = false;  // force any running update to stop so a new one can start
            mpThreadDML = new thread(&PointCloudMapping::updatecloud, mpPointCloudMapping, std::ref(*pActiveMap));
            mpThreadDML->detach();
            cout << "Map updated!" << endl;
        }

        mbFinishedGBA = true;
        mbRunningGBA = false;
    }
}
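        The spanning-tree propagation in the loop above keeps each child's relative pose to its parent fixed and re-attaches it to the parent's GBA-corrected pose. In the code's world-to-camera (Tcw) convention, Tchildc and mTcwGBA implement:

T_{\mathrm{child\,parent}} = T_{\mathrm{child}\,w}\,\bigl(T_{\mathrm{parent}\,w}\bigr)^{-1},
\qquad
T^{\mathrm{GBA}}_{\mathrm{child}\,w} = T_{\mathrm{child\,parent}}\;T^{\mathrm{GBA}}_{\mathrm{parent}\,w}

        where the poses on the right of the first equation are the pre-correction poses (pChild->GetPose() and Twc = pKF->GetPoseInverse() of the parent).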

4 Some APIs for dense mapping

4.0 Constructor

    pcl::PointCloud<pcl::PointXYZRGBA> pcl_filter;
    pcl::PointCloud<pcl::PointXYZRGBA> pcl_local_filter;
    ros::Publisher pclPoint_pub;
    ros::Publisher pclPoint_local_pub;
    ros::Publisher octomap_pub;
    sensor_msgs::PointCloud2 pcl_point;
    sensor_msgs::PointCloud2 pcl_local_point;
    pcl::PointCloud<pcl::PointXYZRGBA> pcl_cloud_local_kf;
    pcl::PointCloud<pcl::PointXYZRGBA> pcl_cloud_kf;
    PointCloudMapping::PointCloudMapping(double resolution)
    {
        mResolution = resolution;
        mCx = 0;
        mCy = 0;
        mFx = 0;
        mFy = 0;
        mbShutdown = false;
        mbFinish = false;

        voxel.setLeafSize( resolution, resolution, resolution);
        statistical_filter.setMeanK(50);
        statistical_filter.setStddevMulThresh(1.0);

        mPointCloud = boost::make_shared<PointCloud>();
        viewerThread = std::make_shared<std::thread>(&PointCloudMapping::NormalshowPointCloud, this);  // make_unique is only available from C++14
    }

        A few ROS publishers and point cloud buffers are declared at file scope; the constructor sets the filter parameters and starts the display thread. The display thread is described in detail in Section 4.2.
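        For context, a hedged sketch of how the object could be created and shared in System.cc (the 0.01 resolution and the member names are assumptions; the post does not show this wiring):

    // In the System constructor, after the tracking / local mapping / loop closing
    // threads have been created (sketch, names assumed):
    mpPointCloudMapping = std::make_shared<PointCloudMapping>(0.01);   // 1 cm voxel leaf size
    mpTracker->mpPointCloudMapping     = mpPointCloudMapping;          // used in CreateNewKeyFrame (Section 1)
    mpLocalMapper->mpPointCloudMapping = mpPointCloudMapping;          // used at the end of local mapping (Section 2)
    mpLoopCloser->mpPointCloudMapping  = mpPointCloudMapping;          // used in CorrectLoop / MergeLocal / GBA (Section 3)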

4.1 PointCloudMapping::insertKeyFrame inserts key frames

    /**
     * @brief Insert a keyframe from the tracking thread
     * @param kf the keyframe
     * @param color the RGB image
     * @param depth the depth image
     * @param vpKFs all keyframes of all maps
     */
    void PointCloudMapping::insertKeyFrame(KeyFrame* kf, const cv::Mat& color, const cv::Mat& depth,vector<KeyFrame*> vpKFs)
    {
        unique_lock<mutex> locker(mKeyFrameMtx);
        cout << GREEN << "receive a keyframe, id = " << kf->mnId << endl;
        mvKeyFrames.push(kf);
        keyframes.push_back( kf );
        currentvpKFs = vpKFs;
        cv::Mat colorImg_ ,depth_;
        mvColorImgs.push( color.clone() );
        mvDepthImgs.push( depth.clone() );
        // std::condition_variable mKeyFrameUpdatedCond;
        // notify_one() wakes one thread waiting on this condition variable, telling it that
        // new data has arrived; it synchronizes the producer (tracking thread) with the
        // consumer (dense mapping display thread).
        mKeyFrameUpdatedCond.notify_one();
    }

        The function simply caches the keyframe, the RGB image and the depth image in the queues and then notifies the display thread through the condition variable.
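        For completeness, here is how the consumer side of this producer/consumer pair typically looks. The actual NormalshowPointCloud loop is not shown in the post, so this is only a sketch of the usual condition-variable pattern over the same members:

        // Sketch of the consumer loop (assumed shape, not the post's actual code):
        while (true)
        {
            KeyFrame* kf;
            cv::Mat color, depth;
            {
                std::unique_lock<std::mutex> locker(mKeyFrameMtx);
                // Sleep until insertKeyFrame() notifies, or until shutdown is requested
                mKeyFrameUpdatedCond.wait(locker, [this] { return !mvKeyFrames.empty() || mbShutdown; });
                if (mbShutdown && mvKeyFrames.empty())
                    break;
                kf    = mvKeyFrames.front();  mvKeyFrames.pop();
                color = mvColorImgs.front();  mvColorImgs.pop();
                depth = mvDepthImgs.front();  mvDepthImgs.pop();
            }
            // ... back-project (kf, color, depth) into a point cloud and merge it into the global map
        }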

4.2 Dense mapping display thread PointCloudMapping::viewer

        This thread runs in a loop for the whole lifetime of the system.

        Its main work: for every new keyframe, generate a point cloud (Section 4.4) and transform it from the camera frame into the world frame.

        The clouds are merged into the global map, which is then voxel filtered and displayed.

/**
 * @brief Display thread
 */
void PointCloudMapping::viewer()
{
    pcl::visualization::CloudViewer viewer("viewer");
    while (1)
    {
        {
            unique_lock<mutex> lck_shutdown(shutDownMutex);
            if (shutDownFlag)
            {
                break;
            }
        }
        if (bStop || mabIsUpdating)
        {
            continue;
        }
        int N;
        std::list<KeyFrame *> lNewKeyFrames;
        {
            unique_lock<mutex> lck(keyframeMutex);
            N = mlNewKeyFrames.size();
            lNewKeyFrames = mlNewKeyFrames;
            if(N == 0)
                continue;
            else
            {
                mlNewKeyFrames.clear();
            }

        }

        double generatePointCloudTime = 0, transformPointCloudTime = 0; 
        for (auto pKF : lNewKeyFrames)
        {
            if (pKF->isBad())
                continue;
            generatePointCloud(pKF);

            pcl::PointCloud<pcl::PointXYZRGBA>::Ptr p(new pcl::PointCloud<pcl::PointXYZRGBA>);
            pcl::transformPointCloud(*(pKF->mptrPointCloud), *(p), Converter::toMatrix4d(pKF->GetPoseInverse()));

            {
                std::unique_lock<std::mutex> lck(mMutexGlobalMap);
                *globalMap += *p;
            }

        }
        pcl::PointCloud<pcl::PointXYZRGBA>::Ptr tmp(new pcl::PointCloud<pcl::PointXYZRGBA>);

        // Removing isolated points with the statistical filter is fairly slow and not very useful here, so it can be left out
        // statistical_filter->setInputCloud(globalMap);  
        // statistical_filter->filter(*tmp);

        voxel->setInputCloud(globalMap);
        voxel->filter(*globalMap);

        
        viewer.showCloud(globalMap);  // This is fairly expensive; if real-time display is not needed, disable it or only refresh every few passes
    }
}
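        As the comment on showCloud says, redrawing on every pass is expensive. A simple mitigation (a sketch, not part of the original code) is to refresh the viewer only every few processed batches:

    // Sketch: refresh the PCL viewer only once every kShowEvery iterations of the loop above
    static int showCounter = 0;       // hypothetical counter local to viewer()
    const int kShowEvery   = 5;       // tune: higher means less CPU spent on display
    if (++showCounter % kShowEvery == 0)
        viewer.showCloud(globalMap);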

4.3 Map update PointCloudMapping::updatecloud

/**
 * @brief Rebuild the current point cloud
 * @param curMap the current map
 */
void PointCloudMapping::updatecloud(Map &curMap)
{
    std::unique_lock<std::mutex> lck(updateMutex);
    
    mabIsUpdating = true;
    // 1. Get all keyframes of the current map
    currentvpKFs = curMap.GetAllKeyFrames();
    // loopbusy = true;
    cout << "start point cloud update" << endl;
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr tmpGlobalMap(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr curPointCloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr tmpGlobalMapFilter(new pcl::PointCloud<pcl::PointXYZRGBA>());
    // 2. Iterate over every keyframe
    /// Each keyframe's point cloud is transformed into the world frame, merged into the
    /// global map, and the global map is voxel filtered, yielding an updated, more accurate global map.
    for (int i = 0; i < currentvpKFs.size(); i++)
    {
        /// 2.1 If the loop closing thread triggered another essential graph optimization, the map has changed substantially again, so stop this update
        if (!mabIsUpdating)
        {
            std::cout << "中断点云更新" <<std::endl;
            return;
        }
        /// 2.2 If this keyframe has not been culled and it already has a point cloud
        // (the per-keyframe point cloud is created in PointCloudMapping::generatePointCloud)
        if (!currentvpKFs[i]->isBad() && currentvpKFs[i]->mptrPointCloud)
        {
            // The generated point cloud is in the camera frame and must be transformed into the world frame
            pcl::transformPointCloud(
                *(currentvpKFs[i]->mptrPointCloud), *(curPointCloud),
                Converter::toMatrix4d(currentvpKFs[i]->GetPoseInverse()));
            *tmpGlobalMap += *curPointCloud;

            voxel->setInputCloud(tmpGlobalMap);
            voxel->filter(*tmpGlobalMapFilter);
            // Swap the cloud held by tmpGlobalMap with the filtered cloud in tmpGlobalMapFilter
            tmpGlobalMap->swap(*tmpGlobalMapFilter);
        }
    }
    cout << "点云更新完成" << endl;
    {
        std::unique_lock<std::mutex> lck(mMutexGlobalMap);
        globalMap = tmpGlobalMap;
    }
    mabIsUpdating = false;
}

        Update the current dense map. This is called from the loop closing thread after loop correction, map merging or global BA (Sections 3.1–3.3). Each keyframe's cached camera-frame point cloud is re-transformed with its corrected pose and merged into a fresh global map; the per-keyframe clouds themselves are not regenerated from the depth images.

4.4 Generating dense point clouds from keyframes and depth maps PointCloudMapping::generatePointCloud

        Here a dense point cloud is generated for a keyframe from its RGB image and depth map.

        The nested loops step through the depth image with a stride of 4 pixels, so only one pixel in every 4×4 block is back-projected, and depths outside roughly 0.05–6 m are discarded; this keeps the per-keyframe cost low. (For further loop-level optimizations such as loop unrolling, see Chapter 5 of Computer Systems: A Programmer's Perspective.)

void PointCloudMapping::generatePointCloud(KeyFrame *kf) //,Eigen::Isometry3d T
{
    pcl::PointCloud<pcl::PointXYZRGBA>::Ptr pPointCloud(new pcl::PointCloud<pcl::PointXYZRGBA>);
    // point cloud is null ptr
    for (int m = 0; m < kf->imDepth.rows; m += 4)
    {
        for (int n = 0; n < kf->imDepth.cols; n += 4)
        {
            float d = kf->imDepth.ptr<float>(m)[n];
            if (d < 0.05 || d > 6)
                continue;
            pcl::PointXYZRGBA p;
            p.z = d;
            p.x = (n - kf->cx) * p.z / kf->fx;
            p.y = (m - kf->cy) * p.z / kf->fy;

            p.b = kf->imLeftRgb.ptr<uchar>(m)[n * 3];
            p.g = kf->imLeftRgb.ptr<uchar>(m)[n * 3 + 1];
            p.r = kf->imLeftRgb.ptr<uchar>(m)[n * 3 + 2];

            pPointCloud->points.push_back(p);
        }
    }
    pPointCloud->height = 1;
    pPointCloud->width = pPointCloud->points.size();
    pPointCloud->is_dense = true;
    kf->mptrPointCloud = pPointCloud;
}
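        The back-projection inside the double loop is the standard pinhole camera model: for a pixel at row m, column n with depth d,

z = d, \qquad x = \frac{(n - c_x)\,z}{f_x}, \qquad y = \frac{(m - c_y)\,z}{f_y}

        which is exactly what p.x, p.y and p.z compute above; the BGR channel order of the OpenCV image is mapped to the point's b, g and r fields.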

Source: blog.csdn.net/qq_41694024/article/details/134676486