An introduction to complete coverage path planning (CCPP) algorithms

Author: K.Fire | Source: Computer Vision Workshop


1 Introduction

The complete coverage path planning (CCPP) problem is to determine a path that passes through every point of a target area or space while avoiding obstacles.


Choset divides complete coverage path planning algorithms into two categories, "online" and "offline", according to whether the environment map is known a priori. Offline CCPP algorithms rely only on static environment information and assume the environment is known in advance; however, assuming sufficient prior knowledge of the environment is unrealistic in many cases. Online CCPP algorithms make no such assumption about the area to be covered and instead use sensor data to scan the target space in real time, which is why they are also called sensor-based coverage algorithms.


According to their working principles, CCPP algorithms can be divided into the random collision method, the unit decomposition method, the biological stimulation method, the template method, intelligent algorithms, and so on. Whatever the approach, a CCPP algorithm should satisfy the following requirements:

  1. The robot must pass through all points of the target area other than obstacles, completely covering the target area;

  2. The robot should avoid repeating paths during the coverage process as much as possible;

  3. For ease of control, simple motion trajectories (for example, straight lines or circles) should be used as much as possible;

  4. Where conditions allow, an "optimal" path should be obtained (shortest total route length or lowest energy consumption).

2 Random collision method

The random collision method is essentially an algorithm that trades time for space. The idea is that the robot picks an arbitrary initial direction and drives in a straight line, covering the strip along that line; when it detects an obstacle, it rotates clockwise by a certain angle and repeats the process. The coverage produced by this strategy is extremely random: in theory, given enough time, the robot can cover a sufficiently large area, but the method is very inefficient. Its advantage is that it requires neither complex positioning sensors (infrared sensors are usually enough) nor large computing resources. Its disadvantages are a large number of locally repeated paths and poor adaptability to the environment: when the environment contains multiple sub-scenes, especially two sub-scenes connected by a long, narrow corridor, the random collision method may waste a great deal of time getting from one area to the other.

In actual use, sweeping robots that rely on this kind of random path coverage often get into trouble because of dynamic obstacles and similar disturbances.
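
To make the strategy concrete, here is a minimal Python sketch of random-collision coverage on an occupancy grid. The grid model, the fixed 90-degree clockwise turn, and the step budget are illustrative assumptions, not any particular robot's implementation:

```python
import random

def random_collision_coverage(grid, start, max_steps=10000):
    """Random-collision coverage on an occupancy grid (0 = free, 1 = obstacle).

    Drives straight until the next cell is blocked or out of bounds, then
    turns 90 degrees clockwise; returns the set of visited free cells.
    """
    headings = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W in clockwise order
    rows, cols = len(grid), len(grid[0])
    h = random.randrange(4)                        # arbitrary initial direction
    r, c = start
    visited = {start}
    for _ in range(max_steps):
        nr, nc = r + headings[h][0], c + headings[h][1]
        if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
            r, c = nr, nc                          # keep driving in a straight line
            visited.add((r, c))
        else:
            h = (h + 1) % 4                        # obstacle ahead: rotate clockwise
    return visited
```

Because progress depends only on chance, the fraction of cells in `visited` grows quickly at first and then stalls, which matches the inefficiency described above.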


3 Unit decomposition method

The unit decomposition method divides the free area of the entire space into simple, non-overlapping sub-areas, each called a cell, whose union exactly fills the free space. The robot covers each cell with a simple coverage pattern (such as reciprocating or spiral motion); once every cell has been covered, full coverage of the entire area is achieved.

Take the trapezoidal decomposition method as an example; it is the simplest exact cell decomposition method. The robot first travels along the boundary of the space to build a map of the entire area, and a vertical sweep line then scans the area from left to right. Whenever the sweep line meets a vertex of a polygonal obstacle, a sub-region is split off, so the whole free space is decomposed into many sub-regions, each of which is a trapezoid. The robot then covers each sub-region with reciprocating motion. The trapezoidal decomposition is shown in the figure below.

[Figure: trapezoidal decomposition of the free space]
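
To illustrate the per-cell coverage step, the sketch below generates reciprocating (boustrophedon) waypoints for a single rectangular cell. The axis-aligned cell shape and the `tool_width` parameter are simplifying assumptions, since real trapezoidal cells have slanted sides:

```python
def boustrophedon_path(x_min, x_max, y_min, y_max, tool_width):
    """Back-and-forth (boustrophedon) waypoints covering one obstacle-free cell.

    Assumes an axis-aligned rectangular cell and a robot footprint of
    `tool_width`; cells from a real trapezoidal decomposition would clip
    each lap against their slanted boundaries instead.
    """
    waypoints, y, leftward = [], y_min, False
    while y <= y_max:
        xs = (x_max, x_min) if leftward else (x_min, x_max)
        waypoints.append((xs[0], y))   # enter the lap
        waypoints.append((xs[1], y))   # drive the full lap
        leftward = not leftward        # reverse direction for the next lap
        y += tool_width                # shift over by one tool width
    return waypoints

# Example: cover a 4 m x 2 m cell with a 0.5 m wide robot
print(boustrophedon_path(0.0, 4.0, 0.0, 2.0, 0.5))
```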

Other representative methods include the boustrophedon cellular decomposition [1,2], Morse decomposition [3,4], and line-sweep segmentation. They are not covered in detail here; interested readers can consult the references.

4 Biological stimulation method

Yang and Luo applied a biologically inspired neural network model to the complete coverage path planning of cleaning robots. They first modeled the environment with a grid map, where each grid cell corresponds to a neuron, and proposed a shunting equation to compute how strongly the current neuron is excited or inhibited by the surrounding neurons:

dx_i/dt = −A·x_i + (B − x_i)·([I_i]⁺ + Σ_j ω_ij·[x_j]⁺) − (D + x_i)·[I_i]⁻,  where [a]⁺ = max(a, 0) and [a]⁻ = max(−a, 0).

Here x_i denotes the state of the i-th neuron; A is a non-negative constant giving the decay rate of neural activity; B and D determine the upper and lower bounds of the neuron state (the activity stays within [−D, B]); I_i is the external input; and ω_ij is the connection weight between the i-th and j-th neurons, generally computed from the distance between the two neurons. The activity connections of the i-th neuron are shown in the figure below.

[Figure: activity connections of the i-th neuron]

When the robot is in a certain grid cell, it computes the activity values of the surrounding neurons and selects the cell with the largest activity as its next position. The advantage of the biological stimulation method is its good applicability and strong performance in obstacle avoidance and real-time operation; the disadvantage is that it may lead to a high path repetition rate.
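
A minimal numerical sketch of this scheme is given below, assuming Euler integration of the shunting equation over a grid of neurons and a greedy move to the most active neighbor. The constants, the 8-neighborhood, and the wrap-around border handling are simplifications for illustration, not the paper's tuning:

```python
import numpy as np

def shunting_step(x, I, A=10.0, B=1.0, D=1.0, mu=1.0, dt=0.05):
    """One Euler step of the shunting equation over a grid of neurons.

    x: neuron activities (H x W); I: external input (positive on uncovered
    cells, strongly negative on obstacles, zero on covered cells).
    """
    excite = np.zeros_like(x)
    for dr, dc in [(-1,0),(1,0),(0,-1),(0,1),(-1,-1),(-1,1),(1,-1),(1,1)]:
        w = mu / np.hypot(dr, dc)                    # weight decays with distance
        shifted = np.roll(np.roll(x, dr, 0), dc, 1)  # wraps at borders (simplification)
        excite += w * np.maximum(shifted, 0.0)       # only positive activity propagates
    dx = (-A * x
          + (B - x) * (np.maximum(I, 0.0) + excite)  # excitation, bounded above by B
          - (D + x) * np.maximum(-I, 0.0))           # inhibition, bounded below by -D
    return x + dt * dx

def next_cell(x, pos):
    """Greedily move to the 8-neighbor with the largest activity value."""
    (r, c), best = pos, pos
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            nr, nc = r + dr, c + dc
            if ((dr, dc) != (0, 0) and 0 <= nr < x.shape[0]
                    and 0 <= nc < x.shape[1] and x[nr, nc] > x[best]):
                best = (nr, nc)
    return best
```

Setting I to a large negative value on obstacle cells keeps their activity near −D, so the greedy step naturally avoids them while being attracted to uncovered cells.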

5 Template method

The template method was proposed by Neumann de Carvalho R. and others. It relies on prior knowledge of a two-dimensional environment map but can also handle unexpected obstacles that are not represented on the map. The robot's movement behavior is prescribed as seven fixed templates, shown in the figure below. These templates cover all situations the robot may encounter in the map; the robot moves through the map according to the templates and ultimately achieves full coverage.

[Figure: the seven movement templates]

The advantages of the template method are its simple principle, low computational cost, and ability to handle dynamic obstacles; its disadvantages are that it requires prior knowledge of the map, has poor applicability, and exhibits little intelligence.

6 Intelligent algorithms

Wang [5] combined the genetic algorithm with the boustrophedon cellular decomposition. After dividing lines split the whole free space into sub-regions, each sub-region and the connection-point information between sub-regions are encoded, and a genetic algorithm searches for the optimal coverage order; coverage inside each sub-region is then performed with back-and-forth motion. This transforms the CCPP problem into a Traveling Salesman Problem (TSP).
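
As a hedged sketch of this idea, the toy genetic algorithm below optimizes the visiting order of sub-regions over a given distance matrix using order crossover and swap mutation. The distance matrix, truncation selection, and all hyperparameters are illustrative assumptions, not Wang's actual design:

```python
import random

def tour_length(order, dist):
    """Total length of an open tour over the distance matrix."""
    return sum(dist[a][b] for a, b in zip(order, order[1:]))

def ga_coverage_order(dist, pop_size=60, generations=300, p_mut=0.2):
    """Toy GA for ordering sub-regions (an open TSP over `dist`)."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: tour_length(o, dist))
        parents = pop[: pop_size // 2]                 # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            i, j = sorted(random.sample(range(n), 2))
            mid = a[i:j]                               # order crossover (OX-style)
            child = mid + [g for g in b if g not in mid]
            if random.random() < p_mut:                # swap mutation
                u, v = random.sample(range(n), 2)
                child[u], child[v] = child[v], child[u]
            children.append(child)
        pop = parents + children
    return min(pop, key=lambda o: tour_length(o, dist))
```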

Zhang [6] introduced the ant colony algorithm into the unit decomposition method, defining a distance matrix from the connectivity information between sub-regions and using the ant colony algorithm to optimize the coverage order over that matrix. Their experiments showed that the combined algorithm not only guarantees coverage of the entire workspace, but also produces shorter paths, a lower path overlap rate, and higher planning efficiency. In complex environments, however, repeated coverage near obstacles is difficult to avoid.
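
For comparison, here is a similarly hedged sketch of ant colony optimization over such a distance matrix. The pheromone rule, exponents, evaporation rate, and deposit constant are illustrative values, not taken from Zhang's paper:

```python
import random

def aco_coverage_order(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0,
                       rho=0.1, Q=1.0):
    """Toy ant colony optimization of the sub-region visiting order.

    `dist` is a positive distance matrix between sub-regions (derived from
    their connectivity); returns the best open tour found.
    """
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]                # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour = [random.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                cur, cand = tour[-1], list(unvisited)
                weights = [tau[cur][j] ** alpha * (1.0 / dist[cur][j]) ** beta
                           for j in cand]              # pheromone x closeness
                nxt = random.choices(cand, weights)[0]
                tour.append(nxt)
                unvisited.remove(nxt)
            length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
            if length < best_len:
                best, best_len = tour, length
            for a, b in zip(tour, tour[1:]):           # deposit pheromone
                tau[a][b] += Q / length
        tau = [[(1.0 - rho) * t for t in row] for row in tau]  # evaporation
    return best
```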

7 References

[1] Choset, H. Coverage of Known Spaces: The Boustrophedon Cellular Decomposition[J]. Autonomous Robots 9, 247–253 (2000). https://doi.org/10.1023/A:1008958800904

[2] Rekleitis, I., New, A.P., Rankin, E.S., et al. Efficient Boustrophedon Multi-Robot Coverage: an algorithmic approach[J]. Ann Math Artif Intell 52, 109–142. https://doi.org/10.1007/s10472-009-9120-2

[3] Acar EU, Choset H, Rizzi AA, Atkar PN, Hull D. Morse Decompositions for Coverage Tasks[J]. The International Journal of Robotics Research. 2002;21(4):331-344. doi:10.1177/027836402320556359

[4] Acar EU, Choset H. Sensor-based Coverage of Unknown Environments: Incremental Construction of Morse Decompositions[J]. The International Journal of Robotics Research. 2002;21(4):345-366. doi:10.1177/027836402320556368

[5] Z. Chibin, W. Xingsong and D. Yong, Complete Coverage Path Planning Based on Ant Colony Algorithm[C], 2008 15th International Conference on Mechatronics and Machine Vision in Practice, Auckland, New Zealand, 2008, pp. 357-361, doi: 10.1109/MMVIP.2008.4749559.

[6] Gabriely, Y., Rimon, E. Online Scan Coverage of Grid Environments by a Mobile Robot[J]. In: Boissonnat, JD., Burdick, J., Goldberg, K., Hutchinson, S. (eds) Algorithmic Foundations of Robotics V. 2004. Springer Tracts in Advanced Robotics, vol 7. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45058-0_25
