Robot Navigation (1): An Overview

Introduction to the Navigation Module

How do robots navigate? In other words, how does a robot move from point A to point B? The official ROS documentation provides a diagram of the navigation stack that covers the key technologies of ROS navigation:
[Figure: setup diagram of the ROS navigation stack]
Assuming the robot is configured in a specific way, the navigation stack will make it move. The figure above outlines that configuration: the white components are required and already implemented, the gray components are optional and already implemented, and the blue components must be created for each robot platform.

To sum up, the key technologies involved are as follows:

  • Global map

  • Self-localization

  • Path planning

  • Motion control

  • Environmental perception

Robot navigation is implemented much like autonomous driving and relies on the same five key technologies; the difference is that autonomous driving targets outdoor environments, while the robot navigation introduced here is mainly aimed at indoor environments.

Global map

In real life, when we need to navigate, we usually consult a map first, determine our current position and the destination on that map, and then plan a rough route from the map... The same is true for robot navigation: the map is a core component. Of course, to use a map you must first build one. Among the many map-building techniques that have emerged, one approach called SLAM stands out:

  1. SLAM (Simultaneous Localization And Mapping), also known as CML (Concurrent Mapping and Localization), means simultaneous localization and map building, or concurrent mapping and localization. The SLAM problem can be described as follows: a robot starts moving from an unknown position in an unknown environment, localizes itself during the movement based on its pose estimate and the map built so far, and incrementally builds the map based on that localization, eventually producing a complete map of the external environment.

  2. In ROS, there are many commonly used SLAM implementations, such as gmapping, hector_slam, cartographer, rgbdslam, ORB_SLAM ...

  3. Of course, to perform SLAM the robot must be able to perceive the external environment, in particular to obtain depth information about its surroundings. This perception relies on sensors such as lidar, cameras, and RGB-D cameras...

  4. SLAM can be used to generate the map, and the generated map needs to be saved for later use. The ROS package for saving and serving maps is map_server (a small example of reading the published map follows below).

Also note: although SLAM is one of the key technologies for robot navigation, the two are not equivalent. Strictly speaking, SLAM only covers map construction and real-time localization.
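As a small illustration of the previous point, the sketch below assumes a map is already being published on the /map topic (for example by map_server or a SLAM node) and simply subscribes to the nav_msgs/OccupancyGrid message to print its basic metadata:

```python
#!/usr/bin/env python
# Minimal sketch: inspect the occupancy grid published on /map
# (assumes map_server or a SLAM node is already publishing the map).
import rospy
from nav_msgs.msg import OccupancyGrid

def map_callback(grid):
    info = grid.info
    # Resolution is meters per cell; width and height are in cells.
    rospy.loginfo("Received map: %dx%d cells, %.3f m/cell",
                  info.width, info.height, info.resolution)

if __name__ == "__main__":
    rospy.init_node("map_listener")
    rospy.Subscriber("/map", OccupancyGrid, map_callback)
    rospy.spin()
```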

Self-localization

Both at the start of navigation and throughout the navigation process, the robot needs to determine its current position. Outdoors, GPS is a good choice. Indoors, in tunnels, underground, or in other areas where GPS signals are attenuated or completely unavailable, another method is needed. For example, the SLAM described above can provide localization; in addition, ROS also provides a dedicated localization package: amcl.

amcl (Adaptive Monte Carlo Localization) is a probabilistic localization system for robots moving in 2D. It implements the adaptive (or KLD-sampling) Monte Carlo localization approach, which uses a particle filter to track the pose of a robot against a known map.
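As a minimal sketch (not part of the original package documentation), the following node assumes amcl is already running and subscribes to its /amcl_pose output, a geometry_msgs/PoseWithCovarianceStamped message, to read the estimated pose in the map frame:

```python
#!/usr/bin/env python
# Minimal sketch: read the pose estimate that amcl publishes on /amcl_pose.
import rospy
from geometry_msgs.msg import PoseWithCovarianceStamped
from tf.transformations import euler_from_quaternion

def pose_callback(msg):
    p = msg.pose.pose.position
    q = msg.pose.pose.orientation
    # Convert the quaternion to roll/pitch/yaw; only yaw matters in 2D.
    _, _, yaw = euler_from_quaternion([q.x, q.y, q.z, q.w])
    rospy.loginfo("amcl pose: x=%.2f y=%.2f yaw=%.2f rad", p.x, p.y, yaw)

if __name__ == "__main__":
    rospy.init_node("amcl_pose_listener")
    rospy.Subscriber("/amcl_pose", PoseWithCovarianceStamped, pose_callback)
    rospy.spin()
```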

Path planning

Navigation is the process of moving the robot from point A to point B. During this process, the robot needs to compute a global route based on the target position, and while moving it also needs to adjust that route in response to dynamic obstacles that may appear at any time, until the target point is reached. This whole process is called path planning. ROS provides the move_base package to implement path planning (a usage sketch is given at the end of this section). This package mainly consists of two planners:

Global path planning (global_planner)

Based on the given target point and the global map, the global planner computes an overall path, typically with the Dijkstra or A* algorithm, and the resulting optimal route is used as the global route.
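To make the idea more concrete, here is a toy Dijkstra sketch on a hypothetical occupancy grid; it is not the actual move_base global planner, which works on a costmap, but it follows the same principle of expanding the cheapest cells first:

```python
# Toy illustration of Dijkstra on a small occupancy grid (0 = free, 1 = obstacle).
# This is not the move_base implementation, just the same principle in miniature.
import heapq

def dijkstra(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    prev = {}
    queue = [(0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path by walking back from the goal.
    path, cell = [], goal
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return list(reversed(path))

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(dijkstra(grid, (0, 0), (2, 0)))  # a route that skirts the obstacles
```

A* differs only in that it adds a heuristic (for example, the straight-line distance to the goal) to the priority, which usually lets it expand fewer cells.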

Local real-time planning (local_planner)

In actual navigation, the robot may not be able to follow the given globally optimal route exactly; for example, obstacles may appear at any time while the robot is moving... The role of local planning is to use algorithms such as the Dynamic Window Approach to avoid obstacles and to select a current best path that matches the globally optimal path as closely as possible.

Global and local path planning are relative concepts: global path planning focuses on the overall, macroscopic route, while local path planning focuses on the current, microscopic motion.
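As a hedged usage sketch of the move_base package mentioned above (assuming move_base is already running with a map and a localization source), a node typically requests path planning by sending a goal through the actionlib interface:

```python
#!/usr/bin/env python
# Minimal sketch: send a navigation goal to move_base via actionlib
# (assumes move_base is running with a map and localization available).
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

if __name__ == "__main__":
    rospy.init_node("send_goal")
    client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
    client.wait_for_server()

    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "map"   # goal expressed in the map frame
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0     # example target 1 m ahead
    goal.target_pose.pose.orientation.w = 1.0  # face along the x axis

    client.send_goal(goal)
    client.wait_for_result()
    rospy.loginfo("Navigation finished with state: %d", client.get_state())
```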

Motion control

The navigation stack assumes that it can publish geometry_msgs/Twist messages on the "cmd_vel" topic, which express motion commands in the robot's base coordinate frame. This means there must be a node subscribed to "cmd_vel" that converts the velocity commands on this topic into motor commands and sends them to the hardware.
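For illustration, a base controller for a differential-drive robot might look roughly like the sketch below; the wheel separation value and the send_to_motors() helper are hypothetical placeholders, not part of any real driver:

```python
#!/usr/bin/env python
# Minimal sketch of a base controller: subscribe to /cmd_vel and turn the
# Twist command into left/right wheel speeds for a differential drive.
# WHEEL_SEPARATION and send_to_motors() are hypothetical placeholders.
import rospy
from geometry_msgs.msg import Twist

WHEEL_SEPARATION = 0.3  # meters, hypothetical value for this sketch

def send_to_motors(left, right):
    # Placeholder: a real driver would write these speeds to the motor hardware.
    rospy.loginfo("left wheel: %.2f m/s, right wheel: %.2f m/s", left, right)

def cmd_vel_callback(twist):
    v = twist.linear.x    # forward velocity in the base frame
    w = twist.angular.z   # rotation rate around the vertical axis
    left = v - w * WHEEL_SEPARATION / 2.0
    right = v + w * WHEEL_SEPARATION / 2.0
    send_to_motors(left, right)

if __name__ == "__main__":
    rospy.init_node("base_controller")
    rospy.Subscriber("cmd_vel", Twist, cmd_vel_callback)
    rospy.spin()
```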

Environmental perception

Environmental perception means sensing information about the surroundings using, for example, cameras, lidar, and encoders... Cameras and lidar can capture depth information about the external environment, while encoders measure motor speed, from which the robot's velocity can be obtained and odometry information generated.
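To illustrate the last point, the following sketch integrates velocities into a pose and publishes nav_msgs/Odometry; the vx and vth values are placeholders standing in for velocities actually derived from encoder readings:

```python
#!/usr/bin/env python
# Minimal sketch: turn velocities derived from wheel encoders into odometry.
# vx and vth are placeholders for values computed from real encoder ticks.
import math
import rospy
import tf
from nav_msgs.msg import Odometry
from geometry_msgs.msg import Quaternion

if __name__ == "__main__":
    rospy.init_node("odom_publisher")
    odom_pub = rospy.Publisher("odom", Odometry, queue_size=10)
    x = y = th = 0.0
    vx, vth = 0.1, 0.05          # placeholder encoder-derived velocities
    rate = rospy.Rate(10)
    last_time = rospy.Time.now()
    while not rospy.is_shutdown():
        now = rospy.Time.now()
        dt = (now - last_time).to_sec()
        last_time = now
        # Integrate the velocities into a pose expressed in the odom frame.
        x += vx * math.cos(th) * dt
        y += vx * math.sin(th) * dt
        th += vth * dt

        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = "odom"
        odom.child_frame_id = "base_footprint"
        odom.pose.pose.position.x = x
        odom.pose.pose.position.y = y
        q = tf.transformations.quaternion_from_euler(0, 0, th)
        odom.pose.pose.orientation = Quaternion(*q)
        odom.twist.twist.linear.x = vx
        odom.twist.twist.angular.z = vth
        odom_pub.publish(odom)
        rate.sleep()
```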

In the navigation stack, environmental perception is also an important module that supports the others: SLAM, amcl, and move_base all depend on it.

Coordinate System for Navigation

Introduction

Localization is one of the key parts of navigation. Localization means choosing some reference coordinate frame (for example, a frame whose origin is the robot's starting point) and marking the robot's pose within it. The principle sounds simple, but this coordinate frame does not exist objectively, and we cannot determine the robot's pose from a god's-eye view; localization has to be carried out by the robot itself. The robot must deduce the origin of the reference frame and compute the relative relationship between the coordinate frames. There are two common ways to do this:

  • Odometry-based localization: periodically collect the robot's velocity information to compute and publish the relative transform between the robot's coordinate frame and the parent reference frame.
  • Sensor-based localization: collect information about the external environment through sensors and, by matching it against the map, compute and publish the relative transform between the robot's coordinate frame and the parent reference frame.

Both approaches are frequently used in navigation.

Features

Both localization methods have their own advantages and disadvantages.

Odometry-based localization:

  • Advantages: the localization information is continuous, with no discrete jumps.
  • Disadvantages: odometry accumulates error, which is not conducive to long-distance or long-duration localization.

Sensor-based localization:

  • Advantages: more accurate than odometry;
  • Disadvantages: the estimated pose may jump, and localization accuracy drops significantly in environments with few distinguishing landmarks.

The strengths and weaknesses of the two localization methods are complementary, so they are generally used in combination.

Coordinate system transformation

In both localization approaches described above, the robot's own coordinate frame is generally the root frame of the robot model (base_link or base_footprint). With odometry-based localization the parent frame is usually called odom, while with sensor-based localization the parent frame is usually called map. When the two are used together, map and odom would both be parents of the robot model's root frame, which violates the single-parent rule of the coordinate transform tree, so the transform chain is generally arranged as: map -> odom -> base_link (or base_footprint).
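To illustrate this transform chain, the following sketch assumes a tf tree with map -> odom -> base_footprint is already being published (for example by amcl and an odometry node) and queries the robot's pose in the map frame:

```python
#!/usr/bin/env python
# Minimal sketch: query the robot pose in the map frame through the
# map -> odom -> base_footprint transform chain (assumes the tf tree exists).
import rospy
import tf

if __name__ == "__main__":
    rospy.init_node("pose_lookup")
    listener = tf.TransformListener()
    rate = rospy.Rate(1)
    while not rospy.is_shutdown():
        try:
            # tf composes map->odom (from localization) with odom->base_footprint
            # (from odometry) to give the full pose of the robot in the map.
            (trans, rot) = listener.lookupTransform("map", "base_footprint",
                                                    rospy.Time(0))
            rospy.loginfo("robot in map frame: x=%.2f y=%.2f", trans[0], trans[1])
        except (tf.LookupException, tf.ConnectivityException,
                tf.ExtrapolationException):
            rospy.logwarn("transform map -> base_footprint not available yet")
        rate.sleep()
```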

Description of navigation conditions

Implementing navigation imposes certain hardware and software requirements, which need to be prepared in advance.

Hardware

Although the navigation stack is designed to be as general as possible, there are still three main hardware limitations on its use:

  1. It is designed for differential-drive wheeled robots. It assumes that the chassis is controlled by velocity commands of the form: x velocity component, y velocity component, angular velocity (theta) component.

  2. It requires a single-line (planar) lidar mounted on the chassis. The lidar is used for map building and localization.

  3. The navigation stack was developed on a square robot, so square or circular robots will perform best. It also works on robots of other shapes and sizes, but larger robots may have difficulty passing through narrow spaces.

Software

Before implementing navigation, the software environment needs to be prepared:

First of all, ROS must be installed.

The navigation in this chapter is based on a simulation environment, so first make sure that the robot system simulation from the previous chapter runs normally.

In the simulation environment, the robot should be able to receive /cmd_vel messages, publish odometry messages, and publish sensor messages normally; in other words, the motion control and environmental perception parts of the navigation module are already in place.
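A quick way to check the motion-control side of this setup (a sketch, assuming the simulation from the previous chapter is running and listening on /cmd_vel) is to publish a test velocity and watch the simulated robot move:

```python
#!/usr/bin/env python
# Minimal sketch: publish a test velocity on /cmd_vel to check that the
# simulated robot from the previous chapter responds to motion commands.
import rospy
from geometry_msgs.msg import Twist

if __name__ == "__main__":
    rospy.init_node("cmd_vel_test")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=10)
    twist = Twist()
    twist.linear.x = 0.2    # drive forward at 0.2 m/s
    twist.angular.z = 0.1   # while turning slowly
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        pub.publish(twist)
        rate.sleep()
```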

In the navigation implementation that follows, we will mainly focus on: building maps with SLAM, the map service, self-localization, and path planning.
