Ubuntu 20.04 installation and configuration to run DynaSLAM

DynaSLAM builds on ORB-SLAM2 and combines Mask R-CNN with multi-view geometry to remove dynamic features. This article therefore configures and runs DynaSLAM on top of the basic environment set up in "Ubuntu 20.04 configuration of the ORB-SLAM2 and ORB-SLAM3 operating environment + running ORB-SLAM2 in real time with ROS + running ORB-SLAM2 in a Gazebo simulation + installation of various related libraries".

1. Install Anaconda

Go to the Anaconda official website and click Download (the site offers the installer matching the system you are browsing from; for example, the file I downloaded here is Anaconda3-2023.03-Linux-x86_64.sh).
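If you prefer to fetch the installer from the command line, something like the following should work (the file name matches the release mentioned above; Anaconda's public archive also hosts newer and patched builds under slightly different names):

# download the installer from Anaconda's public archive
wget https://repo.anaconda.com/archive/Anaconda3-2023.03-Linux-x86_64.sh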

Install Anaconda

bash Anaconda3-2023.03-Linux-x86_64.sh

(1) Review the license agreement: press Enter until Do you accept the license terms? [yes|no] appears, then type yes to continue the installation;
(2) After entering yes you will be asked to confirm the installation location; press Enter to accept the default;
(3) Initialize Anaconda: for this step just follow the prompt and enter yes.


Restart the terminal and you will be dropped into the conda base environment. If you do not want the base environment to be activated automatically every time a terminal starts, set the auto_activate_base parameter to false:

conda config --set auto_activate_base false

If you want to enter the conda base environment later, just activate it with the conda command:

conda activate base


Common conda commands:

  • Create a conda environment
conda create --name <env_name> <packages (separate multiple packages with spaces)>
# e.g.: conda create --name my_env python=3.7 numpy pandas scipy
  • Activate (switch to) a conda environment
conda activate <env_name>
# e.g.: conda activate base
  • List the created conda environments
conda info --envs
# or: conda info -e, or conda env list
  • Delete a specified conda environment
# delete by environment name
conda remove --name <env_name> --all

# delete by the environment's location (this can remove same-named environments in different locations)
conda remove -p <path_to_env> --all
# e.g.: conda remove -p /home/zard/anaconda3/envs/MaskRCNN --all

2. Install dependencies

(1) Install the boost library

sudo apt-get install libboost-all-dev
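Optionally, you can confirm which Boost version apt pulled in (on Ubuntu 20.04 this should normally be 1.71):

# check the installed Boost development package version
dpkg -s libboost-dev | grep Version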

(2) For installing Pangolin, OpenCV 2 or 3, and Eigen3, refer to: Ubuntu 20.04 configuration of the ORB-SLAM2 and ORB-SLAM3 operating environment + running ORB-SLAM2 in real time with ROS + running ORB-SLAM2 in a Gazebo simulation + installation of various related libraries. That article installs Eigen 3.4.0 and OpenCV 3.4.5, and the steps below build on top of them.
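Before moving on, it can help to confirm that the expected versions are actually visible on the system. A minimal sketch, assuming OpenCV installed its pkg-config file and Eigen 3.4.0 was installed from source under /usr/local (adjust the paths if you installed elsewhere, e.g. /usr/include/eigen3 for the apt package):

# OpenCV version as seen by pkg-config (should print 3.4.5)
pkg-config --modversion opencv
# Eigen version macros (WORLD/MAJOR/MINOR should read 3 / 4 / 0)
grep -A2 "#define EIGEN_WORLD_VERSION" /usr/local/include/eigen3/Eigen/src/Core/util/Macros.h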

3. Configure the Mask_RCNN environment

Configure it in an Anaconda virtual environment:

# create a virtual environment
conda create -n MaskRCNN python=2.7
conda activate MaskRCNN
# this step may fail; retry a few times and it may succeed (rather hit-or-miss, probably a network issue)
pip install tensorflow==1.14.0
pip install keras==2.0.9
# this step may complain that numpy and pillow are too old; if so, upgrade numpy and pillow
# sudo pip install numpy==x.x.x
# sudo pip install pillow==x.x.x
pip install scikit-image
pip install pycocotools
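A quick optional check that the environment looks right before touching DynaSLAM (run inside the MaskRCNN environment):

# print the installed versions; they should match the ones installed above
python -c "import tensorflow as tf; print(tf.__version__)"   # expect 1.14.0
python -c "import keras; print(keras.__version__)"           # expect 2.0.9
python -c "import skimage, pycocotools; print('imports ok')"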

Download DynaSLAM and test the environment

git clone https://github.com/BertaBescos/DynaSLAM.git
cd DynaSLAM
python src/python/Check.py

If the output is:

Mask R-CNN is correctly working

you can proceed to the next step.

4. Install DynaSLAM

Download the mask_rcnn_coco.h5 file and copy it to DynaSLAM/src/python/.
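One possible way to fetch the weights from the command line; the URL below is the usual matterport Mask_RCNN v2.0 release asset, so check that it still resolves before relying on it:

# download the pre-trained COCO weights into the folder DynaSLAM expects
cd DynaSLAM/src/python
wget https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5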
Parts of the DynaSLAM source code only work with OpenCV 2.4. For the OpenCV 3.4.5 we installed earlier, the source has to be modified, and a few additional changes are needed to avoid segmentation faults. Reference: About running the DynaSLAM source code (OpenCV 3.x version).

4.1 Modify CMakeLists.txt

(1) In DynaSLAM/CMakeLists.txt

set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 ")
# set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
# set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall   -O3 -march=native")
......................
#find_package(OpenCV 2.4.11 QUIET)
#if(NOT OpenCV_FOUND)
#    message("OpenCV > 2.4.11 not found.")
#    find_package(OpenCV 3.0 QUIET)
#    if(NOT OpenCV_FOUND)
#        message(FATAL_ERROR "OpenCV > 3.0 not found.")
#    endif()
#endif()

find_package(OpenCV 3.4 QUIET)
if(NOT OpenCV_FOUND)
    find_package(OpenCV 2.4 QUIET)
    if(NOT OpenCV_FOUND)
        message(FATAL_ERROR "OpenCV > 2.4.x not found.")
    endif()
endif()
......................
set(Python_ADDITIONAL_VERSIONS "2.7")
#This is to avoid detecting python 3
find_package(PythonLibs 2.7 EXACT REQUIRED)
if (NOT PythonLibs_FOUND)
    message(FATAL_ERROR "PYTHON LIBS not found.")
else()
    message("PYTHON LIBS were found!")
    message("PYTHON LIBS DIRECTORY: " ${PYTHON_LIBRARY} ${PYTHON_INCLUDE_DIRS})
endif()
......................
#find_package(Eigen3 3.1.0 REQUIRED)
find_package(Eigen3 3 REQUIRED)

(2) In DynaSLAM/Thirdparty/DBoW2/CMakeLists.txt

#set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 -march=native ")
#set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall  -O3 -march=native")
set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS}  -Wall  -O3 ")
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wall  -O3 ")
......................
# find_package(OpenCV 3.0 QUIET)
find_package(OpenCV 3.4 QUIET)

(3) In DynaSLAM/Thirdparty/g2o/CMakeLists.txt

#SET(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3 -march=native") 
#SET(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -O3 -march=native")
SET(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -O3 ") 
SET(CMAKE_C_FLAGS_RELEASE "${CMAKE_C_FLAGS_RELEASE} -O3 ")
......................
#FIND_PACKAGE(Eigen3 3.1.0 REQUIRED)
FIND_PACKAGE(Eigen3 3 REQUIRED)

4.2 Modify the source code

(1) In include/Conversion.h

// cv::Mat toMat(const PyObject* o);
   cv::Mat toMat(PyObject* o);

(2) src/Conversion.cc (replace the file contents with the following)

/**
 * This file is part of DynaSLAM.
 * Copyright (C) 2018 Berta Bescos <bbescos at unizar dot es> (University of Zaragoza)
 * For more information see <https://github.com/bertabescos/DynaSLAM>.
 *
 */

#include "Conversion.h"
#include <iostream>

namespace DynaSLAM
{

    static void init()
    {
        import_array();
    }

    static int failmsg(const char *fmt, ...)
    {
        char str[1000];

        va_list ap;
        va_start(ap, fmt);
        vsnprintf(str, sizeof(str), fmt, ap);
        va_end(ap);

        PyErr_SetString(PyExc_TypeError, str);
        return 0;
    }

    class PyAllowThreads
    {
    public:
        PyAllowThreads() : _state(PyEval_SaveThread()) {}
        ~PyAllowThreads()
        {
            PyEval_RestoreThread(_state);
        }

    private:
        PyThreadState *_state;
    };

    class PyEnsureGIL
    {
    public:
        PyEnsureGIL() : _state(PyGILState_Ensure()) {}
        ~PyEnsureGIL()
        {
            // std::cout << "releasing"<< std::endl;
            PyGILState_Release(_state);
        }

    private:
        PyGILState_STATE _state;
    };

    using namespace cv;

    static PyObject *failmsgp(const char *fmt, ...)
    {
        char str[1000];

        va_list ap;
        va_start(ap, fmt);
        vsnprintf(str, sizeof(str), fmt, ap);
        va_end(ap);

        PyErr_SetString(PyExc_TypeError, str);
        return 0;
    }

    class NumpyAllocator : public MatAllocator
    {
    public:
#if (CV_MAJOR_VERSION < 3)
        NumpyAllocator()
        {
        }
        ~NumpyAllocator() {}

        void allocate(int dims, const int *sizes, int type, int *&refcount,
                      uchar *&datastart, uchar *&data, size_t *step)
        {
            // PyEnsureGIL gil;

            int depth = CV_MAT_DEPTH(type);
            int cn = CV_MAT_CN(type);

            const int f = (int)(sizeof(size_t) / 8);
            int typenum = depth == CV_8U    ? NPY_UBYTE
                          : depth == CV_8S  ? NPY_BYTE
                          : depth == CV_16U ? NPY_USHORT
                          : depth == CV_16S ? NPY_SHORT
                          : depth == CV_32S ? NPY_INT
                          : depth == CV_32F ? NPY_FLOAT
                          : depth == CV_64F ? NPY_DOUBLE
                                            : f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
            int i;

            npy_intp _sizes[CV_MAX_DIM + 1];
            for (i = 0; i < dims; i++)
            {
                _sizes[i] = sizes[i];
            }

            if (cn > 1)
            {
                _sizes[dims++] = cn;
            }
            PyObject *o = PyArray_SimpleNew(dims, _sizes, typenum);
            if (!o)
            {
                CV_Error_(CV_StsError, ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
            }
            refcount = refcountFromPyObject(o);

            npy_intp *_strides = PyArray_STRIDES(o);
            for (i = 0; i < dims - (cn > 1); i++)
                step[i] = (size_t)_strides[i];

            datastart = data = (uchar *)PyArray_DATA(o);
        }

        void deallocate(int *refcount, uchar *, uchar *)
        {
            // PyEnsureGIL gil;
            if (!refcount)
                return;
            PyObject *o = pyObjectFromRefcount(refcount);
            Py_INCREF(o);
            Py_DECREF(o);
        }
#else

        NumpyAllocator()
        {
            stdAllocator = Mat::getStdAllocator();
        }
        ~NumpyAllocator()
        {
        }

        UMatData *allocate(PyObject *o, int dims, const int *sizes, int type,
                           size_t *step) const
        {
            UMatData *u = new UMatData(this);
            u->data = u->origdata = (uchar *)PyArray_DATA((PyArrayObject *)o);
            npy_intp *_strides = PyArray_STRIDES((PyArrayObject *)o);
            for (int i = 0; i < dims - 1; i++)
                step[i] = (size_t)_strides[i];
            step[dims - 1] = CV_ELEM_SIZE(type);
            u->size = sizes[0] * step[0];
            u->userdata = o;
            return u;
        }

        UMatData *allocate(int dims0, const int *sizes, int type, void *data,
                           size_t *step, int flags, UMatUsageFlags usageFlags) const
        {
            if (data != 0)
            {
                CV_Error(Error::StsAssert, "The data should normally be NULL!");
                // probably this is safe to do in such extreme case
                return stdAllocator->allocate(dims0, sizes, type, data, step, flags,
                                              usageFlags);
            }
            PyEnsureGIL gil;

            int depth = CV_MAT_DEPTH(type);
            int cn = CV_MAT_CN(type);
            const int f = (int)(sizeof(size_t) / 8);
            int typenum = depth == CV_8U    ? NPY_UBYTE
                          : depth == CV_8S  ? NPY_BYTE
                          : depth == CV_16U ? NPY_USHORT
                          : depth == CV_16S ? NPY_SHORT
                          : depth == CV_32S ? NPY_INT
                          : depth == CV_32F ? NPY_FLOAT
                          : depth == CV_64F ? NPY_DOUBLE
                                            : f * NPY_ULONGLONG + (f ^ 1) * NPY_UINT;
            int i, dims = dims0;
            cv::AutoBuffer<npy_intp> _sizes(dims + 1);
            for (i = 0; i < dims; i++)
                _sizes[i] = sizes[i];
            if (cn > 1)
                _sizes[dims++] = cn;
            PyObject *o = PyArray_SimpleNew(dims, _sizes, typenum);
            if (!o)
                CV_Error_(Error::StsError,
                          ("The numpy array of typenum=%d, ndims=%d can not be created", typenum, dims));
            return allocate(o, dims0, sizes, type, step);
        }

        bool allocate(UMatData *u, int accessFlags,
                      UMatUsageFlags usageFlags) const
        {
            return stdAllocator->allocate(u, accessFlags, usageFlags);
        }

        void deallocate(UMatData *u) const
        {
            if (u)
            {
                PyEnsureGIL gil;
                PyObject *o = (PyObject *)u->userdata;
                Py_XDECREF(o);
                delete u;
            }
        }

        const MatAllocator *stdAllocator;
#endif
    };

    NumpyAllocator g_numpyAllocator;

    NDArrayConverter::NDArrayConverter() { init(); }

    void NDArrayConverter::init()
    {
        import_array();
    }

    cv::Mat NDArrayConverter::toMat(PyObject *o)
    {
        cv::Mat m;

        if (!o || o == Py_None)
        {
            if (!m.data)
                m.allocator = &g_numpyAllocator;
        }

        if (!PyArray_Check(o))
        {
            failmsg("toMat: Object is not a numpy array");
        }

        int typenum = PyArray_TYPE(o);
        int type = typenum == NPY_UBYTE    ? CV_8U
                   : typenum == NPY_BYTE   ? CV_8S
                   : typenum == NPY_USHORT ? CV_16U
                   : typenum == NPY_SHORT  ? CV_16S
                   : typenum == NPY_INT || typenum == NPY_LONG ? CV_32S
                   : typenum == NPY_FLOAT  ? CV_32F
                   : typenum == NPY_DOUBLE ? CV_64F
                                           : -1;

        if (type < 0)
        {
            failmsg("toMat: Data type = %d is not supported", typenum);
        }

        int ndims = PyArray_NDIM(o);

        if (ndims >= CV_MAX_DIM)
        {
            failmsg("toMat: Dimensionality (=%d) is too high", ndims);
        }

        int size[CV_MAX_DIM + 1];
        size_t step[CV_MAX_DIM + 1], elemsize = CV_ELEM_SIZE1(type);
        const npy_intp *_sizes = PyArray_DIMS(o);
        const npy_intp *_strides = PyArray_STRIDES(o);
        bool transposed = false;

        for (int i = 0; i < ndims; i++)
        {
            size[i] = (int)_sizes[i];
            step[i] = (size_t)_strides[i];
        }

        if (ndims == 0 || step[ndims - 1] > elemsize)
        {
            size[ndims] = 1;
            step[ndims] = elemsize;
            ndims++;
        }

        if (ndims >= 2 && step[0] < step[1])
        {
            std::swap(size[0], size[1]);
            std::swap(step[0], step[1]);
            transposed = true;
        }

        if (ndims == 3 && size[2] <= CV_CN_MAX && step[1] == elemsize * size[2])
        {
            ndims--;
            type |= CV_MAKETYPE(0, size[2]);
        }

        if (ndims > 2)
        {
            failmsg("toMat: Object has more than 2 dimensions");
        }

        m = Mat(ndims, size, type, PyArray_DATA(o), step);

        if (m.data)
        {
#if (CV_MAJOR_VERSION < 3)
            m.refcount = refcountFromPyObject(o);
            m.addref(); // protect the original numpy array from deallocation
                        // (since Mat destructor will decrement the reference counter)
#else
            m.u = g_numpyAllocator.allocate(o, ndims, size, type, step);
            m.addref();
            Py_INCREF(o);
            // m.u->refcount = *refcountFromPyObject(o);
#endif
        };
        m.allocator = &g_numpyAllocator;

        if (transposed)
        {
            Mat tmp;
            tmp.allocator = &g_numpyAllocator;
            transpose(m, tmp);
            m = tmp;
        }
        return m;
    }

    PyObject *NDArrayConverter::toNDArray(const cv::Mat &m)
    {
        if (!m.data)
            Py_RETURN_NONE;
        Mat temp;
        Mat *p = (Mat *)&m;
#if (CV_MAJOR_VERSION < 3)
        if (!p->refcount || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp);
            p = &temp;
        }
        p->addref();
        return pyObjectFromRefcount(p->refcount);
#else
        if (!p->u || p->allocator != &g_numpyAllocator)
        {
            temp.allocator = &g_numpyAllocator;
            m.copyTo(temp);
            p = &temp;
        }
        // p->addref();
        // return pyObjectFromRefcount(&p->u->refcount);
        PyObject *o = (PyObject *)p->u->userdata;
        Py_INCREF(o);
        return o;
#endif
    }
}

4.3 Compile DynaSLAM

conda activate MaskRCNN
cd DynaSLAM
chmod +x build.sh
./build.sh
  • If the master branch does not contain the Examples/Monocular/mono_carla.cc file, comment out the corresponding target in CMakeLists.txt:
# add_executable(mono_carla
# Examples/Monocular/mono_carla.cc)
# target_link_libraries(mono_carla ${PROJECT_NAME})
  • Error fatal error: ndarrayobject.h: No such file or directory: numpy is installed in the virtual environment and for Python 3, but not for the Python 2 that ships with Ubuntu. pip cannot be used here because it would install into Python 3, so install it with apt:
sudo apt-get install python-numpy
  • Error error: static assertion failed: std::map must have the same value_type as its allocator. This is an old issue that I already covered in the ORB-SLAM installation article; in the include/LoopClosing.h file of the ORB-SLAM2 source code directory, change:
// typedef map<KeyFrame*,g2o::Sim3,std::less<KeyFrame*>,
//        Eigen::aligned_allocator<std::pair<const KeyFrame*, g2o::Sim3> > > KeyFrameAndPose;
// change to:
typedef map<KeyFrame*,g2o::Sim3,std::less<KeyFrame*>,
        Eigen::aligned_allocator<std::pair<KeyFrame *const, g2o::Sim3> > > KeyFrameAndPose;
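After applying the fixes above, it is usually simplest to do a clean rebuild; a minimal sketch, assuming the standard DynaSLAM layout:

# remove the stale build directory and rebuild from scratch
cd DynaSLAM
rm -rf build
./build.sh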

Run:

./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml /XXX/tum_dataset/ /XXX/tum_dataset/associations.txt (path_to_masks) (path_to_output)

If the last two parameters are omitted, this is equivalent to running plain ORB-SLAM2. If you want to use the Mask R-CNN functionality but do not want the masks saved, pass no_save as path_to_masks; otherwise give a folder path where the masks will be stored.
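As a concrete example (with hypothetical paths) for the TUM fr3 walking_xyz dynamic sequence; adapt the dataset location and the masks/output folders to your setup:

# associations.txt can be generated with the TUM associate.py tool:
#   python associate.py rgb.txt depth.txt > associations.txt
cd DynaSLAM
mkdir -p masks output
./Examples/RGB-D/rgbd_tum Vocabulary/ORBvoc.txt Examples/RGB-D/TUM3.yaml \
    ~/datasets/rgbd_dataset_freiburg3_walking_xyz \
    ~/datasets/rgbd_dataset_freiburg3_walking_xyz/associations.txt \
    masks/ output/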


On CPU alone (my 12th-generation i5) it is far too slow to run in real time and essentially gets stuck, but the localization results on dynamic data are indeed much better.


Origin: https://blog.csdn.net/zardforever123/article/details/130064101