CMU Autonomous Exploration and Navigation System (TARE & FAR Planner): All-in-One Learning Notes

References

Paper and code addresses

Part Ⅰ Exploration and Planning Algorithms

1 Introduction

1.1 Robot Exploration Background

  • Unknown environments that are dangerous or contaminated and therefore unsuitable for humans to enter
  • Complex, cluttered 3D environments

The "chicken-and-egg" causal dilemma between robot navigation and SLAM

  • The robot can move from point to point on its own through the navigation system, but this depends heavily on a global map. SLAM can build that global map, but only if an operator manually traverses the entire environment (on foot, by remote control, by driving, etc.). Once the operator is removed, the robot's applicability is limited by the "chicken-and-egg" causal dilemma between navigation and SLAM
  • Autonomous exploration systems were born to solve exactly this problem. With the help of exploration algorithms, the robot can traverse the unknown environment and build a complete map autonomously, without human intervention. In effect, autonomous exploration automates the manual-traversal step of SLAM, which is why it is also classified as active SLAM (Active-SLAM)
  • Some planners that respond quickly to the local environment (such as FAR Planner) can also achieve autonomous navigation in unknown environments, but they still need a manually specified goal point (Goalpoint)

1.2 Application examples of robot exploration technology

  • Military scenarios: relatively common
  • Civil/industrial scenarios: the Fukushima nuclear accident as an example
    • Environmental background: water flooding the generators, reactor meltdown, hydrogen explosion, radioactive contamination
    • Technology applications: radioactive decontamination, agricultural restoration, neighborhood reconstruction

1.3 Exploration problem and task description

  • Problem Description

    • Robots perceive the environment from discrete/sparse viewpoints

      • A viewpoint can be regarded as a point in space. The system sends itself to a viewpoint, collects one frame of data about the surrounding environment there, and then goes to the next viewpoint to collect the next frame
      • The robot exploration problem can thus be modeled as: given an unknown environment, place a series of viewpoints in it and have the system walk to them in turn, collecting one frame of sensor data at each; the union of these sensor frames constitutes the coverage of the unknown environment
    • Solve for a path through the viewpoints that achieves coverage of the environment

  • System tasks

    • Update the environment representation

      • Track covered and uncovered areas in the surrounding environment
      • Analyze traversable space

        Exploration with ground vehicles is more complicated than with drones, because it also involves terrain traversability analysis (TTA) of the ground; on the other hand, drones are more constrained by payload and battery life.

      • Validate viewpoints and link them to the environment representation
    • Evaluate the environment coverage / information gain of each viewpoint

      • Which parts or areas of the environment can be seen by placing the system at that viewpoint (see the sketch after this list)
    • Find an optimal exploration path
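
To make the notion of viewpoint information gain concrete, here is a minimal sketch (my own illustration on a 2D occupancy grid, not code from any of the planners discussed here; the grid encoding, ray-march step, and function names are all assumptions). It counts the unknown cells that a sensor placed at a viewpoint would see:

import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, 1, 2

def visible(grid, vp, cell, step=0.5):
    """Coarse line-of-sight check: march from viewpoint to cell, stop at obstacles."""
    vp, cell = np.asarray(vp, float), np.asarray(cell, float)
    n = int(np.linalg.norm(cell - vp) / step) + 1
    for t in np.linspace(0.0, 1.0, n, endpoint=False)[1:]:
        i, j = np.round(vp + t * (cell - vp)).astype(int)
        if grid[i, j] == OCCUPIED:
            return False
    return True

def information_gain(grid, vp, sensor_range):
    """Number of unknown cells within sensor range and line of sight of viewpoint vp."""
    gain, r = 0, int(sensor_range)
    for i in range(max(0, vp[0] - r), min(grid.shape[0], vp[0] + r + 1)):
        for j in range(max(0, vp[1] - r), min(grid.shape[1], vp[1] + r + 1)):
            if (grid[i, j] == UNKNOWN
                    and (i - vp[0]) ** 2 + (j - vp[1]) ** 2 <= r * r
                    and visible(grid, vp, (i, j))):
                gain += 1
    return gain

# toy map: the lower half is unexplored; a viewpoint near the boundary scores high
grid = np.full((40, 40), UNKNOWN)
grid[:20, :] = FREE
print(information_gain(grid, vp=(18, 10), sensor_range=8))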

1.4 Difficulties in robot exploration

  • Exploration is a continuous iteration between updating the environment representation (the covered area) and finding traversable paths (to cover more area), so it requires sustained computing resources
    (figure)

  • Exploration becomes harder when the environment is large, complex, cluttered, and involves 3D terrain

1.5 Disadvantages of Existing Exploration Algorithms

  • Greedy strategy

    • The system checks which nearby location would reveal the highest-scoring part of the environment, places a viewpoint there, drives to it, and collects a frame of data; then it looks for the next highest-scoring spot, places another viewpoint, drives there, collects another frame, and so on
    • It usually plans only one step (or two or three) ahead, optimizing only short-term goals, with no overall grasp of the environment (long-term goals). This makes the system shuttle back and forth and revisit the same places repeatedly, so efficiency is low
  • The data structure is not efficient enough

    • The system spends a long time computing the exploration path. Exploration is an online algorithm: the system must obtain exploration paths continuously and executes its next action only after receiving the latest path, so the time taken to compute the path determines the efficiency of the exploration algorithm
    • Existing algorithms need to maintain a global map. As the robot's explored range grows, the computation slows down because an ever-larger map must be maintained, until the robot has to stop and wait for the computation to finish before moving on. This stop-and-go further reduces exploration efficiency

2. TARE (Technologies for Autonomous Robot Exploration): A Hierarchical Framework for Exploring Complex 3D Environments

  • First compute a global exploration path, then use it to guide the computation of the local exploration path
  • With this overall guidance, the robot's route becomes more purposeful, e.g., finishing one room before exploring the next, instead of simply heading wherever the unknown area happens to be largest
  • Global Horizon

    • Represented with a low-resolution (sparse) environment model. The green boxes/volumes in the figure mark where information is stored at the global level (places not yet covered globally, which the system will need to visit at some point to cover locally); a rough global exploration path is computed from these green cells
  • Local Horizon

    • Uses a high-resolution (dense) environment representation. With a relatively large amount of computation it carefully computes viewpoints (yellow points in Figure 1, blue points in Figure 2), then computes a fine local exploration path through them and lets the system drive along it. This concentrates the main computing resources in the space near the robot rather than wasting them on distant areas with greater uncertainty

(Figure 1)
(Figure 2)

2.1 Local level planning

Update the environment representation and compute the traversable space

  • Implementation
    • Randomly sample viewpoints, then solve a Traveling Salesman Problem (TSP) to obtain a tour through the viewpoints (this fixes the order in which the robot visits them); iterate this process repeatedly, keeping the best set of viewpoints and the corresponding tour. That best tour is the local path finally sent to the system for execution (a minimal sketch of this loop follows below)

      Traveling Salesman Problem: given a list of cities and the distances between each pair of cities, find the shortest route that visits each city exactly once and returns to the starting city

(figure)
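
A minimal sketch of the sample-then-solve-TSP idea (illustrative only; TARE's actual sampling and TSP solver are more sophisticated, and the names and numbers below are mine). Nearest-neighbour builds an initial open tour from the robot's position through all sampled viewpoints, and 2-opt improves it; the same machinery applies at the global level over uncovered subspaces:

import numpy as np

def tour_length(pts, order):
    p = pts[order]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def solve_tsp(pts):
    """Open tour from pts[0] through all viewpoints: nearest-neighbour + 2-opt."""
    n = len(pts)
    unvisited, order = set(range(1, n)), [0]
    while unvisited:                                  # nearest-neighbour construction
        last = pts[order[-1]]
        nxt = min(unvisited, key=lambda k: np.linalg.norm(pts[k] - last))
        unvisited.remove(nxt)
        order.append(nxt)
    improved = True
    while improved:                                   # 2-opt: reverse sub-segments
        improved = False
        for i in range(1, n - 1):
            for j in range(i + 1, n):
                cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
                if tour_length(pts, cand) < tour_length(pts, order):
                    order, improved = cand, True
    return order

# usage: robot pose first, then randomly sampled viewpoints in the local horizon
rng = np.random.default_rng(0)
viewpoints = np.vstack([[0.0, 0.0], rng.uniform(-10.0, 10.0, size=(12, 2))])
local_order = solve_tsp(viewpoints)   # visiting order of the sampled viewpoints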

Sensor coverage needs to be adjusted to account for field of view overlap

  • Smooth path
    • Fit one or more path segments through the viewpoints; each segment is dynamically feasible
    • In the figure, the solid orange points are sampled viewpoints and the hollow orange points are break-points
    • The system does not stop at sampled viewpoints but passes through them at constant speed (continuous and smooth); at each break-point (continuous but not smooth) it stops, adjusts its direction in place, and then moves on along the next path segment (a small sketch of the break-point rule follows the figure)

(figure)
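
A small sketch of the break-point rule (my simplification: a viewpoint becomes a break-point when the heading change required there exceeds a threshold; the real implementation fits dynamically feasible curves through each resulting segment):

import numpy as np

def split_at_breakpoints(path_pts, max_turn_deg=60.0):
    """Ordered viewpoints (N,2) -> list of sub-segments separated by break-points."""
    pts = np.asarray(path_pts, float)
    segments, start = [], 0
    for k in range(1, len(pts) - 1):
        a, b = pts[k] - pts[k - 1], pts[k + 1] - pts[k]
        cosang = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)
        if np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))) > max_turn_deg:
            segments.append(pts[start:k + 1])   # stop and turn in place at pts[k]
            start = k
    segments.append(pts[start:])
    return segments

# the sharp 90-degree corner at (2, 0) becomes a break-point -> two smooth segments
print(split_at_breakpoints([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]]))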

2.2 Global level planning

Build a sparse roadmap through the traversed space

  • Implementation
    • Whenever small green boxes/volumes exist, a Traveling Salesman Problem (TSP) is solved over them as well; the resulting tour is the final global path, which is then sent to the local level to guide the local path computation (the handoff is sketched below)

      The small green boxes/volumes store information representing uncovered areas at the global level (mostly at intersections)

(figure)
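
A sketch of the global-to-local handoff (illustrative; a greedy tour stands in for the actual global TSP): a tour over the centroids of the uncovered subspaces, starting at the robot, tells the local level which subspace its path should exit toward next.

import numpy as np

def global_tour(robot_xy, centroids):
    """Greedy visiting order over uncovered-subspace centroids, starting at the robot."""
    remaining = list(range(len(centroids)))
    pos, order = np.asarray(robot_xy, float), []
    while remaining:
        k = min(remaining, key=lambda i: np.linalg.norm(centroids[i] - pos))
        remaining.remove(k)
        order.append(k)
        pos = centroids[k]
    return order

centroids = np.array([[30.0, 5.0], [12.0, -8.0], [45.0, 20.0]])  # "green cells"
tour = global_tour([0.0, 0.0], centroids)
next_subspace = centroids[tour[0]]   # the local path is steered to exit toward this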

2.3 A concrete case

  • Figure 2 below involves the relocation problem: the system transfers itself to another place, and this transfer process is called relocation
    • Why relocate?
      • In complex, convoluted environments it often happens that the local area around the system is already fully covered, and the system needs to guide itself efficiently to another place
      • In Figure 2 below, when the system reaches the dead end in the upper-left corner, the local environment is already fully covered; it then guides itself back out to an intersection and selects a branch road to continue the coverage

(Figure 1)

(Figure 2)

  • When local coverage is complete
    • The local path degenerates to the shortest path to the neighboring subspace
    • The system implicitly switches between "explore" and "relocate"
    • Once the local area is fully covered, the problem degenerates: the local path obtained is a shortest path (Figure 2 below) that guides the robot to the next green box/volume, where it continues exploring the uncovered area

(Figure 1)

(Figure 2)

2.4 Details of Algorithm Implementation

  • Generalized surface
    • Defined as the boundary between free space and non-free space
    • Includes object surfaces and frontiers (the two hypotenuses in the figure below)
      • Traditional exploration algorithms extract frontiers in the environment and send the system exploring toward them

(figure)

  • Maximize surface coverage
    • A surface point counts as covered if the sensor observes it within a certain distance and angle
      • The system maximizes the sensor's coverage of object surfaces (surface points): in the local area it iterates over viewpoints to find the shortest path that lets the sensor see the whole surface well. The blue curve in the figure below is such a trajectory; the orange circles are the sampled viewpoints; the red curve marks the surface points that need to be "covered" by the sensor

(figure)

  • In practice, the distance constraint matters more than the angle constraint (a sketch of this coverage test follows the figure)

(figure)
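
A sketch of the coverage test for a single surface point (illustrative; the thresholds and names are mine): a surface point with normal n counts as covered from a viewpoint if it is close enough and seen at a shallow enough incidence angle, and in practice the distance term dominates.

import numpy as np

def is_covered(surface_pt, normal, viewpoint, d_max=5.0, ang_max_deg=70.0):
    """Covered if within d_max of the viewpoint and the incidence angle < ang_max_deg."""
    ray = np.asarray(viewpoint, float) - np.asarray(surface_pt, float)
    d = np.linalg.norm(ray)
    if d > d_max or d < 1e-9:
        return False
    cos_inc = np.dot(ray / d, normal / np.linalg.norm(normal))
    return np.degrees(np.arccos(np.clip(cos_inc, -1.0, 1.0))) < ang_max_deg

print(is_covered([0, 0], normal=[0, 1], viewpoint=[1.0, 2.0]))   # True
print(is_covered([0, 0], normal=[0, 1], viewpoint=[9.0, 2.0]))   # False: too far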

2.5 Takeaways (important conclusions)

  • Exploration requires a balance between "local coverage" and "global relocation to cover more"
  • An efficient environment representation is needed to keep track of covered/uncovered areas and to search paths
  • Data density near the robot matters; data density far from the robot can be traded off for computational efficiency
    • Maintain a high-resolution local exploration path in the area near the system
    • Obtain a rough global exploration path with little computation at the global scale
  • Optimizing a global exploration path is better than the greedy strategy, which suffers from redundant revisiting

3. FAR Planner: Fast, Attemptable Route Planner Using Dynamic Visibility Update

The exploration problem above is an environment-coverage problem: place a system in an unknown environment and it covers the environment autonomously. FAR Planner instead sends the system to a specified goal point, which is given manually by a human; the system then computes a series of paths to reach it.

(figure)

3.1 Common Path Planning Algorithms

(figure)

  • Search-based on a discretization: A*, D*
  • Sampling-based: RRT

    Both of these families perform well in known environments, but in unknown environments they must feel their way to the goal point, often walking back and forth (redundant revisiting)

  • Feature-based: e.g., the Visibility Graph, which builds a roadmap from extracted environmental features and geometric-relationship analysis and then plans on that roadmap. This method can generate the theoretically shortest path
    • Compared with RRT, whose randomness makes the generated path graph highly redundant and dense and therefore hard to maintain dynamically, a visibility graph built from environmental features has far fewer nodes, so it can be updated quickly and dynamically when the environment changes or, in an unknown environment, as the robot moves; this also minimizes the computational cost of path planning
    • However, feature-based methods place high demands on the environmental input, which has made them difficult to apply in practice and limited their development

3.2 FAR (Fast, Attemptable Route) Planner

  • Definition and Features

    • An algorithm that extracts the geometric features of the environment (obstacles) in real time and dynamically builds a Visibility Graph for path planning and navigation in real environments. It supports real-time dynamic adjustment of the roadmap and the navigation path from live sensor input without a prior map; global path planning and dynamic adjustment in environments on the order of 300 m can be achieved within 1-2 ms
  • Implementation

    • Polygon extraction and registration of obstacles (an end-to-end sketch of this pipeline appears at the end of this section)

      • First, take the sensor point cloud as input
      • Then, build a binary image from the input point cloud and apply mean filtering to produce a blurred image
      • Then, extract dense-vertex polygon contours from the blurred image (in a below: black pixels are traversable, white pixels are obstacles, and the other colors mark different polygons)
      • Finally, register the extracted polygons (red in b below) against the sensor point cloud data (white in b below)
        (figures a and b)
    • Dynamic update of the two-layer visibility graph

      • For an unknown environment, the algorithm processes each frame of sensor input: it extracts obstacle polygon features and builds a local V-Graph by checking mutual visibility between corner points, which is then merged into the global V-Graph
      • This spreads out the computational load of building a large-scale global V-Graph and enables dynamic updates in unknown environments, so that the planner can perform attemptable path planning there
        (figure)
        (figure)
    • Planning on V-Graph

      • Dynamic obstacle detection and path update: by comparing multiple frames of sensor input, the algorithm detects dynamic obstacles in the environment (e.g., pedestrians, vehicles), updates the parts of the V-Graph blocked by them, and, once a dynamic obstacle is gone, restores the connections between the corresponding corner points in the previous V-Graph
      • Attemptable path planning in an unknown environment: when the navigation/goal point (Goal Point) lies in an unknown environment with no prior map, the algorithm plans multiple feasible paths to try and updates them based on the latest environmental information
    • Extended to 3D multi-layer V-Graph

      • An extended 3D version adapted to UAV planning models the environment as multiple horizontal slices and extracts multi-layer polygons. In the image below, visibility edges (cyan) span multiple polygon layers (red); a path (blue) is then searched on this 3D multi-layer visibility graph between the vehicle (coordinate frame) and the goal (red dot)
        (figure)
  • Application scenarios

    • Fast path planning and navigation in known environments

      • Quickly build the V-Graph and plan paths in a known environment, e.g., autonomous robot path planning and navigation in a factory or distribution center whose layout is known. The system can also update the known map and the navigation path in real time in response to live sensor input and dynamic obstacles. Depending on task requirements, users can build a higher-level task planning algorithm on top of this planner, generate sequenced navigation points for the task, handle requirements in different scenarios, and deploy quickly to a real robot platform
    • Goal-directed exploration and navigation in unknown environments

      • Some search-and-rescue missions require finding a specific area in an unknown environment. The algorithm plans multiple feasible paths from the currently known obstacle information and keeps updating and optimizing them while exploring toward the target position, building a global roadmap of the explored area along the way; this provides environmental information for subsequent robot navigation and can directly supply navigation paths for other robots or personnel to reach the target area
    • Machine learning and simulation training

      • The algorithm generates navigation paths based on environmental features, which can be used to produce large amounts of navigation and dynamic-environment data for network training. Combined with Matterport3D models and the AI Habitat rendering engine, it provides a variety of sensor outputs, such as RGB, depth, and semantic images and LiDAR point clouds, supplying learning data for sim-to-real training of networks for LiDAR- or vision-based state estimation, local obstacle avoidance, and global path planning and exploration, e.g., training a network-based visual navigation and obstacle-avoidance policy (Visual Planning Policy) or an end-to-end localization and state-estimation network
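
To tie the pieces together, here is a toy, end-to-end reconstruction of the pipeline for intuition only (all function names, parameters, and resolutions are mine, not the FAR codebase's; the image steps assume OpenCV 4, and diagonals crossing a polygon's own interior are not filtered): rasterize an obstacle point cloud into a binary image, blur it, extract simplified polygon contours, build a visibility graph over the corners, and search it for a path.

import heapq
import itertools
import cv2
import numpy as np

RES = 0.1  # metres per pixel (assumed)

def polygons_from_cloud(points_xy, size_px=400):
    """Point cloud -> binary image -> mean-filter blur -> sparse polygon corners."""
    img = np.zeros((size_px, size_px), np.uint8)
    px = np.floor(points_xy / RES).astype(int) + size_px // 2
    px = px[((px >= 0) & (px < size_px)).all(axis=1)]
    img[px[:, 1], px[:, 0]] = 255
    img = cv2.blur(img, (5, 5))                       # average filter -> blurred image
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.approxPolyDP(c, 3.0, True).reshape(-1, 2) for c in contours]

def _segments(polys):
    for poly in polys:
        for i in range(len(poly)):
            yield poly[i].astype(float), poly[(i + 1) % len(poly)].astype(float)

def _ccw(u, v, w):
    return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])

def _crosses(p, q, a, b):
    """Proper crossing only; edges that merely touch at a shared corner do not count."""
    return _ccw(p, q, a) * _ccw(p, q, b) < 0 and _ccw(a, b, p) * _ccw(a, b, q) < 0

def build_vgraph(polys, extra_nodes):
    """Connect every node pair whose straight line crosses no polygon edge."""
    nodes = [np.asarray(n, float) for n in extra_nodes]
    nodes += [v.astype(float) for poly in polys for v in poly]
    segs = list(_segments(polys))
    edges = {i: [] for i in range(len(nodes))}
    for i, j in itertools.combinations(range(len(nodes)), 2):
        if not any(_crosses(nodes[i], nodes[j], a, b) for a, b in segs):
            w = float(np.linalg.norm(nodes[i] - nodes[j]))
            edges[i].append((j, w))
            edges[j].append((i, w))
    return nodes, edges

def shortest_path(edges, start, goal):
    """Dijkstra over the V-Graph; FAR searches such a graph within milliseconds."""
    dist, prev, pq = {start: 0.0}, {}, [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in edges[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    if goal not in dist:
        return None                                   # unreachable on current graph
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# usage: two dense box obstacles; node 0 is the robot, node 1 the goal (pixel coords)
xs, ys = np.meshgrid(np.arange(-8, -2, 0.08), np.arange(-2, 2, 0.08))
box1 = np.column_stack([xs.ravel(), ys.ravel()])
xs, ys = np.meshgrid(np.arange(2, 6, 0.08), np.arange(-6, 6, 0.08))
box2 = np.column_stack([xs.ravel(), ys.ravel()])
polys = polygons_from_cloud(np.vstack([box1, box2]))
nodes, edges = build_vgraph(polys, extra_nodes=[(40, 200), (360, 200)])
print(shortest_path(edges, start=0, goal=1))          # indices of corners to pass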

Part Ⅱ System, Applications, and Extensions

1 Introduction

1.1 Simulation environment

  • CARLA Simulator 2017
    (figure)

  • AirSim Simulator 2017
    (figure)

1.2 Real datasets (e.g., KITTI) vs. simulation environments

  • Advantages of the simulation environment
    • All ground truth is easy to obtain, e.g., vehicle pose, semantic information
    • Environmental changes are easy to simulate, e.g., lighting, weather, dynamic obstacles, realistic accidents
    • The full autonomy stack can be tested in a closed control loop (datasets usually only test individual modules of the system)

1.3 Autonomy Stack

  • A generic navigation algorithm stack
    • Allows users to develop high-level algorithms and test them in simulation in a closed control loop
    • Allows users to easily port the autonomy stack to an onboard computer
    • Supports joint research in robotics and computer vision
    • Extensible to applications on ground platforms

2. Robot intelligent system platform

2.1 Robot Intelligent System Platform Architecture

  • Top-Layer: high-level planners
    • The TARE exploration algorithm and the FAR Planner path planner
  • Mid-Layer: development environment & utilities
    • Development environment
      • Simulation environments: indoor, campus, parking garage, tunnel, and forest
      • Autonomous navigation algorithms: collision avoidance, terrain traversability analysis, waypoint following
    • Utilities: visualization tools and joystick-based debugging
    • Common interface with the upper-layer algorithms: waypoint; the upper layer only needs to send waypoints to the simulation environment to communicate
    • Common interface with the bottom-layer algorithms: cmd_vel; the issued command contains the robot's linear and angular velocity
  • Bottom-Layer: state estimation and motion control
    • Provides interfaces to SOTA SLAM systems (e.g., A-LOAM, LeGO-LOAM, LIO-SAM, FAST-LIO2)

(figure)

(figure)

2.2 Upper-layer algorithms (Top-Layer)

2.2.1 TARE exploration planner
  • A two-layer hierarchical framework for efficiently exploring large, complex environments
    • Local layer: uses a dense map representation to plan a fine local exploration trajectory that covers the environment around the robot
    • Global layer: uses a sparse map representation to plan a rough global exploration trajectory covering the entire environment, which then guides the local exploration trajectory
      (figure)
2.2.2 FAR Planner global path planner
  • Features
    • A visibility-graph-based path planner that works in known or unknown environments and can re-route in large-scale environments
    • One thread dynamically maintains the visibility graph while the robot moves (continuously running in the background and updating the graph); another thread can search this graph for a feasible path to the goal point within 3 ms

(figure)

2.3 Middle-layer algorithms (Mid-Layer)

  • Middle layer: block diagram of the autonomous navigation system

(figure)

2.3.1 Simulation environment models
  • Indoor environments, campuses, parking garages, tunnels, and forests

(figure)

2.3.2 Collision Avoidance
  • Local obstacle avoidance (the main content of the Local Planner)
    Corresponding paper: Falco: Fast Likelihood-based Collision Avoidance with Extension to Human-guided Navigation

    • The main idea of the algorithm is to do as much of the computation as possible offline

    • Effect

      • Most trajectories output by the upper-layer algorithm are collision-free, but in reality they are computed on relatively coarse maps (some details are ignored), so collision-freedom cannot be 100% guaranteed during actual operation. The local planner is therefore required to output a trajectory based on a fine map
      • It decouples the upper-layer algorithm from the rest: the upper layer does not output commands to the underlying robot, only a waypoint; the local planner takes over and outputs the commands the robot executes
    • Implementation

      • First, use MATLAB to generate an offline trajectory library that models the trajectories the robot may follow in the future
      • Then, for the space covered by all trajectories, compute for every point inside it (at a certain resolution) which trajectories it would collide with
      • This offline computation yields a correspondence between 3D points in space and trajectories
      • At run time, as soon as an obstacle occupies a point in space, the affected trajectories are known immediately, and the probability of selecting them as the final path is lowered
    • Model the paths as Monte Carlo samples and choose the group of paths with the highest probability of reaching the goal
    • Because the heavy computation is done offline, the online step only needs to select a collision-free trajectory in real time; the algorithm can plan a collision-free path closest to the goal point within a few milliseconds (a toy sketch of this offline/online split follows the figure)
    • Example
      • The yellow lines are the collision-free local paths around the robot at the current moment (generated offline in MATLAB); paths that would collide with the white obstacles have already been removed
      • During actual autonomous navigation, the obstacle-avoidance module receives waypoints from the upper-layer algorithm to guide its driving

(figure)
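
A toy sketch of the offline/online split (my reconstruction; Falco's real motion primitives are curved and its selection is probabilistic over Monte Carlo path groups, while this toy uses a straight-ray fan and a hard mask):

import numpy as np

# --- offline: trajectory library + point-to-trajectory correspondence ---------
N_TRAJ, N_STEPS, RES = 36, 20, 0.1
angles = np.linspace(-np.pi / 3, np.pi / 3, N_TRAJ)    # straight-ray fan (toy)
t = np.linspace(0.1, 2.0, N_STEPS)[:, None]            # arc length along each ray
trajs = np.stack([np.cos(angles) * t, np.sin(angles) * t], axis=-1)  # (steps, traj, 2)

def voxel_key(p):
    return tuple(np.round(np.asarray(p) / RES).astype(int))

corr = {}                                  # voxel -> trajectories passing through it
for k in range(N_TRAJ):
    for p in trajs[:, k]:
        corr.setdefault(voxel_key(p), set()).add(k)

# --- online: mask trajectories hit by obstacles, pick the best survivor --------
def pick_trajectory(obstacle_points, goal_xy):
    blocked = set()
    for p in obstacle_points:              # each obstacle point instantly reveals
        blocked |= corr.get(voxel_key(p), set())   # which trajectories it affects
    free = [k for k in range(N_TRAJ) if k not in blocked]
    if not free:
        return None                        # no collision-free option: stop
    ends = trajs[-1, free]                 # endpoints of the surviving trajectories
    return free[int(np.argmin(np.linalg.norm(ends - np.asarray(goal_xy), axis=1)))]

print(pick_trajectory(obstacle_points=[(1.0, 0.0)], goal_xy=(2.0, 0.0)))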

2.3.3 Terrain Traversability Analysis
  • Method

    • Accumulate and rasterize the map around the robot, compute the ground height for each grid cell, then compute the height of all points in each cell relative to the ground and output it; ground and obstacles are distinguished by these heights (red dots in the figure below mark impassable areas; areas covered by green dots are passable). A toy sketch follows the figure
  • Output: two terrain maps

    • 10 m x 10 m local terrain map (high resolution)

      • Provided to local obstacle avoidance (path planning)
    • 40 m x 40 m terrain map (low resolution)

      • Provided to the top-level planning algorithms
  • Negative obstacle detection

    • LiDAR usually cannot see negative obstacles (pits, holes, drop-offs, etc.)
    • Such places are simply treated as areas the robot cannot see and marked impassable
    • This feature is controlled by a system parameter and can be turned on or off

(figure)
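
A toy sketch of the idea (an illustrative reconstruction, not the CMU module; cell size and thresholds are made up): rasterize the accumulated points into a grid, take the lowest point per cell as the ground estimate, and classify a cell as blocked when points rise too far above that estimate; with the negative-obstacle option on, cells with no return at all are also blocked.

import numpy as np

CELL = 0.2        # grid resolution in metres (assumed)
MAX_STEP = 0.15   # points this far above the ground estimate mark an obstacle (assumed)

def traversability(points, grid_half=25, negative_as_blocked=True):
    """points: (N,3) in the robot frame -> boolean grid, True where blocked."""
    n = 2 * grid_half
    ground = np.full((n, n), np.nan)       # per-cell ground height estimate
    top = np.full((n, n), -np.inf)         # per-cell highest return
    idx = np.floor(points[:, :2] / CELL).astype(int) + grid_half
    ok = ((idx >= 0) & (idx < n)).all(axis=1)
    for (i, j), z in zip(idx[ok], points[ok, 2]):
        g = ground[i, j]
        ground[i, j] = z if np.isnan(g) else min(g, z)   # lowest return ~ ground
        top[i, j] = max(top[i, j], z)
    blocked = (top - ground) > MAX_STEP                  # tall structure above ground
    if negative_as_blocked:
        blocked |= np.isnan(ground)        # negative obstacles: no return => blocked
    return blocked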

2.3.4 Waypoint Following
  • Every waypoint is reached by letting the local planner plan a trajectory to it, so waypoints should be relatively close to the robot and inside a reachable area
  • If the robot needs to reach a relatively distant place during actual operation, an upper-level planner is needed to split the route into many small target points and guide the robot there step by step (a minimal interface sketch follows the figure)

(figure)
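
A minimal sketch of driving the middle layer by hand from Python (the topic names and message types here, /way_point as geometry_msgs/PointStamped and /speed as std_msgs/Float32, follow my reading of the development environment's waypoint_example; treat them as assumptions and check that package for the authoritative interface):

#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PointStamped
from std_msgs.msg import Float32

rospy.init_node('waypoint_sender')
wp_pub = rospy.Publisher('/way_point', PointStamped, queue_size=5)
speed_pub = rospy.Publisher('/speed', Float32, queue_size=5)
rospy.sleep(1.0)                       # give the publishers time to connect

wp = PointStamped()
wp.header.frame_id = 'map'
wp.header.stamp = rospy.Time.now()
wp.point.x, wp.point.y, wp.point.z = 5.0, 2.0, 0.0   # a nearby, reachable point
wp_pub.publish(wp)
speed_pub.publish(Float32(1.0))        # desired speed in m/s
rospy.loginfo('waypoint sent; the local planner plans the trajectory to it')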

2.4 Bottom-Layer

  • Compatible LiDAR SLAM algorithms: LOAM, A-LOAM, LeGO-LOAM, LIO-SAM, LIO-mapping, FAST-LIO2, Faster-LIO

  • Integration configuration, taking LIO-SAM as an example (a link to the complete configuration files on a network disk is given below)

    • stateEstimationTopic = /lio_sam/mapping/odometry
    • registeredScanTopic = /lio_sam/mapping/cloud_registered
    • flipStateEstimation = false
    • flipRegisteredScan = false
    <!-- 文件路径: \autonomous_exploration_development_environment\src\loam_interface\launch\loam_interface.launch -->
    <launch>
      <node pkg="loam_interface" type="loamInterface" name="loamInterface" output="screen" required="true">
        <param name="stateEstimationTopic" type="string" value="/lio_sam/mapping/odometry" />
        <param name="registeredScanTopic" type="string" value="/lio_sam/mapping/cloud_registered" />
        <!-- 不同的slam算法发布的odometry以及registered cloud坐标系是不一样的,比如loam发布的点云坐标系是map_rot
            与我们使用loam_interface里发布出来的registered_cloud的坐标系map是差了一个旋转变换的
            这时候就需要flipRegiateredCloud设置为true,但是也有别的slam算法发布出来的点云和loaminterface发布的
            点云坐标系一致,那就不需要转换,参数就是false -->
        <param name="flipStateEstimation" type="bool" value="false" />
        <param name="flipRegisteredScan" type="bool" value="false" />
        <param name="sendTF" type="bool" value="true" />
        <param name="reverseTF" type="bool" value="false" />
      </node>
    </launch>
    

    state_estimation_setup_notes

    • Link: https://pan.baidu.com/s/1Z3c6xwHN-Xwzom-_PQ5bTg?pwd=25xc
    • Extract code: 25xc
  • Precautions

    • The LiDAR can be mounted horizontally or at an angle
    • However, a rotating LiDAR mount is not recommended: when an obstacle suddenly appears, the slower update frequency can lengthen the reaction time and lead to collisions

(figure)

2.5 Utilities

2.5.1 Visualization Tools
  • Selective display in rviz: the complete point cloud map (light white points), the explored-area point cloud (blue points), the robot trajectory (colored curve), explored volume, distance traveled, and algorithm run time; these data are automatically saved to a file

(figure)

2.5.2 Joystick-based Debugging
  • Purpose: decouple the upper-level planner from the rest by replacing it with human commands while keeping the middle- and bottom-layer functions, so the robot can still avoid obstacles
  • Effect
    • Convenient for debugging the middle and bottom layers without interference from the upper layer
  • Allows the user to easily switch between manual, smart-joystick, and autonomous modes
  • Compatible with PS3/PS4, Xbox, and virtual joysticks

(figure)

2.6 Autonomous Mobile Platform Integration

  • Differential-drive mobile platforms are fully supported
  • Omnidirectional mobile platforms are partially supported: their lateral-motion capability is ignored and they are treated as differential-drive platforms
  • Car-like platforms with Ackermann steering are not supported (mainly because they cannot be handled in narrow environments)

(figure)

3. Applications of the autonomous navigation system

3.1 DARPA Subterranean Challenge

  • Involves tunnels, underground urban spaces, and caves
  • Behavior execution and multi-robot collaboration were added on top of the autonomous navigation framework (yellow boxes in the figure below)
  • Because the system's high-level global path planning/exploration algorithms can handle the loop-closure problem, the low-level integration does not need to provide a SLAM/state-estimation algorithm with loop closure (the local obstacle avoidance and TTA modules only consider information about the area near the robot)

(figure)

  • The autonomous navigation system used in the competition combines the TARE exploration algorithm with the FAR path planning algorithm
    • The blue point is the start; "end" marks the end of the remaining area the robot still has to explore
    • Example: the robot first reaches point A via the TARE exploration algorithm; the operator then sends a goal point at B, so the robot switches to the FAR path planner to feel its way to B, and after arriving at B it switches back to TARE to finish the remaining exploration task

(figure)

(figure)

3.2 Matterport3D and AI Habitat

  • The autonomous navigation system supports development and application in simulation environments
  • Matterport3D is a collection of simulation models, including 90 models built from real rooms; the appearance is very realistic and the models also contain semantic information, so the AI Habitat simulation engine can render RGB, depth, and semantic images from them in real time (these three image types are not used by the autonomous navigation system itself)

(figure)

4. System extension application examples

4.1 Self-Supervised Learning for Vision/LiDAR Navigation

  • Run this autonomous navigation stack in the Matterport3D and AI Habitat simulation environment (which provides RGB, depth, and semantic images) and train a neural network on the navigation system's output paired with those images. In theory, after training, the network can imitate the navigation algorithm's output from the images alone. The benefit is removing the dependence on LiDAR: the navigation algorithm needs LiDAR for precise positioning, but after training the network, in principle a single camera is enough for the robot to achieve a similar effect
  • Can support simulation-to-real (sim-to-real) adaptation
    (figure)

4.2 Mixed-initiative navigation with human interaction

  • Leverages Matterport3D house models with semantic ground truth
  • Navigates with the autonomous navigation system to generate training data

(figure)

4.3 Social navigation in crowded environments

  • The Gazebo configuration needs to be modified to insert pedestrian models
  • Ground-truth pedestrian data can simplify some problems in simulation

4.4 Agricultural robot applications

Application of FAR Planner on an agricultural robot

  • An orchard scene was built in the simulation environment; the system was first verified in simulation and then used in the actual scene

(figure)

5. Simulation Demo and interface introduction

  • Development Environment simulation interface
    • RegScan
      • The point cloud the LiDAR currently sees from the robot's pose (registered scan)
    • overallMap
      • The global map provided for the human viewer, i.e., the white outline background in Figure 1 below (the robot/sensors do not know this global map)
    • ExploredArea
      • The area that has been explored or is being explored (the blue area in Figure 1 below)
    • Trajectory
      • The robot's trajectory (the colored curve in Figure 1 below)
    • TerrainMap
      • The 10 m x 10 m local terrain map (high resolution), provided to local obstacle avoidance (path planning), as in Figure 2 below
    • TerrainMapExt
      • The 40 m x 40 m extended terrain map (low resolution), provided to the top-level planning algorithms, as in Figure 3 below (traversability is decided by a height threshold relative to the ground: red marks obstacles, green marks passable ground)

(Figure 1)

(Figure 2)

(Figure 3)

  • FAR Planner simulation interface
    • Given a manually placed goal point (Goalpoint), the planner keeps attempting (Attemptable) and trial-and-erroring in the unknown environment; when it has led itself down a wrong path it quickly re-plans, obtaining the best path based on the known information and executing it
    • Save / Read
      • Save the V-Graph explored so far, or read back a previously saved V-Graph
    • Planning Attemptable
      • If unchecked, the system by default only plans paths within the known environment (it does not try unknown areas); it automatically switches to Attemptable-mode planning when the Goalpoint is placed in an unknown area
    • Update Visibility Graph
      • The algorithm ships with an already-explored global V-Graph for each environment (the .vgh files in the data folder); just read one in to plan and navigate immediately. You can then uncheck Update Visibility Graph (if the environment is dynamic, keep it checked so the graph updates)

(figure)

Part Ⅲ Algorithm Installation and Operation

Preliminaries

  • Supports Ubuntu 18.04 with ROS Melodic & Ubuntu 20.04 with ROS Noetic
  • Currently only the AMD64 architecture is supported; ARM is not

1. Autonomous Exploration Development Environment

# 1. Install dependencies
$ sudo apt update
$ sudo apt install libusb-dev

# 2. Clone the source code
$ git clone https://github.com/HongbiaoZ/autonomous_exploration_development_environment.git

# 3. Check out the branch and build
$ cd autonomous_exploration_development_environment
$ git checkout melodic # on Ubuntu 20.04, replace melodic with noetic
$ catkin_make

# 4. Install the simulation environments via the script
$ ./src/vehicle_simulator/mesh/download_environments.sh

# 5. Launch the development environment
$ source devel/setup.sh
$ roslaunch vehicle_simulator system_xxx.launch
# xxx can be replaced with one of the 5 environments: campus, forest, garage, indoor, tunnel
# Example: send a series of waypoints together with a navigation boundary and speed; the vehicle
# follows the waypoints while driving inside the boundary (run in a second terminal)
$ cd autonomous_exploration_development_environment
$ source devel/setup.sh
$ roslaunch waypoint_example waypoint_example_garage.launch

2. TARE (Technologies for Autonomous Robot Exploration)

# 1. Clone the source code
$ git clone https://github.com/caochao39/tare_planner.git

# 2. Build
$ cd tare_planner
$ catkin_make

# 3. Launch the development environment
$ cd ~/autonomous_exploration_development_environment
$ source devel/setup.sh
$ roslaunch vehicle_simulator system_xxx.launch
# xxx can be replaced with one of the 5 environments: campus, forest, garage, indoor, tunnel
# 4. Launch TARE (in a second terminal)
$ cd ~/tare_planner
$ source devel/setup.sh
$ roslaunch tare_planner explore_xxx.launch
# xxx can be replaced with one of the 5 environments: campus, forest, garage, indoor, tunnel

3. FAR Planner

# 1. Clone the source code
$ git clone https://github.com/MichaelFYang/far_planner

# 2. Build
$ cd far_planner
$ catkin_make

# 3. Launch the development environment
$ cd ~/autonomous_exploration_development_environment
$ source devel/setup.sh
$ roslaunch vehicle_simulator system_xxx.launch
# xxx can be replaced with one of the 5 environments: campus, forest, garage, indoor, tunnel
# 4. Launch FAR Planner (in a second terminal)
$ cd ~/far_planner
$ source devel/setup.sh
$ roslaunch far_planner far_planner.launch

Part Ⅳ Integrating the System on a Real Robot (to be continued…)

Part Ⅴ Algorithm Annotations (to be continued…)

Source: https://blog.csdn.net/qq_42994487/article/details/130307410