From SLAM to Situational Awareness: Challenges and Survey

Title: From SLAM to Situational Awareness: Challenges and Survey

Authors: Hriday Bavle, Jose Luis Sanchez-Lopez, Claudio Cimarelli, Ali Tourani, Holger Voos

Editor: Zheng Xinxin @一点Artificial Intelligence


Original article: From SLAM to Situational Awareness: Challenges and Survey

01 Abstract

The ability of mobile robots to efficiently and safely perform complex tasks is limited by their cognition of the environment. Advanced reasoning, decision-making, and execution skills enable intelligent robots to act autonomously in unknown environments.

Situational Awareness (SA) is a fundamental human capability that has been widely studied in fields such as psychology, the military, aerospace, and education. In robotics, however, this capability has received far less attention, and research currently focuses on isolated, mutually independent concepts such as sensing, spatial perception, sensor fusion, state estimation, and simultaneous localization and mapping (SLAM). This study therefore aims to connect a broad, multidisciplinary body of prior knowledge in support of a complete situational awareness system for mobile robots that enables their autonomy.

In this paper, we define the main components and competence domains involved in building robotic situational awareness. We describe each aspect of situational awareness and review the latest robotics algorithms, examining their current limitations.

It is worth noting that key aspects of situational awareness remain immature, since current algorithms are applicable only to specific environments. However, artificial intelligence (AI), and deep learning (DL) in particular, offers new ways of closing these gaps in practical scenarios.

Additionally, we see opportunities to connect the highly fragmented space of algorithms for robot understanding through the "situation graph" (S-Graph), an extension of the mainstream scene graph. We finally present our vision for the future development of robotic situational awareness and discuss interesting near-term research directions.

02 Introduction

The robotics industry is experiencing exponential growth, driven by new technological advances and applications. Mobile robots are of great commercial interest because of their ability to replace or assist humans in repetitive or dangerous tasks. Today, mobile robots are used in many industrial and civilian domains, such as inspection of industrial machinery and underground mines, surveillance and road-traffic monitoring, civil engineering, agriculture, healthcare, search-and-rescue intervention in extreme environments (such as natural disasters), exploration, and logistics.

On the one hand, mobile robots can be controlled by manual teleoperation or in a semi-autonomous mode, both of which constantly require human intervention. Furthermore, applications such as Augmented Reality (AR) can be leveraged to improve human-robot interaction, see [5].

In fully autonomous mode, on the other hand, the robot performs an entire task based on its understanding of the environment, given only a few high-level commands. Notably, autonomy can reduce costs and risks while increasing productivity, which is why current research aims to address the main challenges it raises.

Unlike autonomous robots in industrial scenarios, which act only in controlled environments, mobile robots must often operate in dynamic, unstructured, and cluttered environments with little prior knowledge of the scene structure.

So far, the field of robotics has mainly focused on sensors, environment perception, sensor fusion, data analysis, state estimation, simultaneous localization and mapping (SLAM), and artificial intelligence (AI) applied to various image processing problems.

Figure 1 shows publication data for these fields of study obtained from the Scopus abstract and citation database.

Fig. 1: Robotics and SLAM research indexed in the Scopus database since 2015. Each body of work targets an independent research area, all of which can effectively be subsumed under robotic situational awareness.

However, autonomous behavior requires an understanding of context that encompasses multiple robotics disciplines, spanning perception, control, and planning through to human-robot interaction. Although SA is a holistic concept widely studied in fields such as psychology, the military, and aerospace, it has received little consideration in robotics.

It is worth noting that Endsley formally defined SA in the 1990s as "the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future," a definition that still applies today. We translate this definition into the perspective of mobile robotics to obtain a unified research field covering all aspects required for autonomous systems.

A robot's situational awareness system must therefore continuously acquire new observations of the surrounding environment, understand its basic elements through complex reasoning, and project the current state into possible future outcomes, so that the robot can make decisions, execute actions, and achieve its goals. We accordingly describe a general SA architecture in Figure 2, grouping specific competence domains into three layers of progressively increasing intelligence, and we pose the following research questions:

Figure 2: Situational awareness system architecture for an autonomous mobile robot, broken down into its main components, namely perception, understanding, and prediction, and showing how they are interrelated.

Question 1: What are the components of a robot's situational awareness system?

To answer this question, we divide situational awareness into three main components and propose a series of descriptions to bound their scope and define their respective goals:

(1) Situational perception: the collection of external information about the surrounding environment, such as visible light intensity or distances, together with the sensing of internal robot quantities relevant to the situation (such as velocity or temperature).

Sensors provide raw measurements that must be transformed to obtain the required knowledge, or they can directly inform the robot about its state with only a small amount of processing. Active distance sensors, for example, provide the distance to objects through a well-defined measurement model. In contrast, a camera's pixel intensity values, besides being distorted by unknown parameters, require complex algorithms to extract meaningful depth, which remains an open research problem.
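As a minimal illustration of this contrast (our own sketch, not from the paper), the Python snippet below converts a range-and-bearing reading from an active distance sensor directly into a 3D point through its measurement model, whereas a camera pixel only yields geometry once the intrinsic matrix K is calibrated and a depth estimate has been produced by some other algorithm (stereo, learned monocular depth, etc.); the intrinsics used here are hypothetical values for a 640x480 camera.

```python
import numpy as np

def range_bearing_to_point(r, azimuth, elevation):
    """Well-defined sensor model: a range-and-bearing reading maps directly to a 3D point."""
    return np.array([
        r * np.cos(elevation) * np.cos(azimuth),
        r * np.cos(elevation) * np.sin(azimuth),
        r * np.sin(elevation),
    ])

def backproject_pixel(u, v, depth, K):
    """A camera pixel becomes a 3D point only after depth is estimated elsewhere
    (stereo, learned monocular depth, ...) and the intrinsics K are calibrated."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    return np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth])

# Hypothetical pinhole intrinsics for a 640x480 camera.
K = np.array([[525.0,   0.0, 319.5],
              [  0.0, 525.0, 239.5],
              [  0.0,   0.0,   1.0]])

print(range_bearing_to_point(2.0, np.deg2rad(30.0), np.deg2rad(5.0)))
print(backproject_pixel(400.0, 300.0, depth=2.0, K=K))
```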

Multiple sensor modalities are therefore crucial, both to perceive complementary aspects of the situation, such as the robot's acceleration, metric scale, and visible light intensity and its variation, and to compensate for degraded performance under challenging conditions such as dim light, transparent materials, or fast motion. Perception thus consists of a suite of sensors, each with specific characteristics, together with algorithms that augment the information made available to the subsequent layers.
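To sketch how two modalities can compensate for each other's weaknesses (an illustration we add here, not a method from the paper), the snippet below fuses a gyroscope, which tracks fast pitch changes but drifts, with an accelerometer, which gives a drift-free yet noisy gravity-based pitch, using a simple complementary filter; all signal values and the blending factor alpha are assumed for the example.

```python
import numpy as np

def complementary_pitch(pitch_prev, gyro_rate, accel, dt, alpha=0.98):
    """Blend the integrated gyro rate (fast, drifting) with the accelerometer
    pitch derived from gravity (drift-free, noisy)."""
    ax, ay, az = accel
    pitch_acc = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    pitch_gyro = pitch_prev + gyro_rate * dt
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_acc

rng = np.random.default_rng(0)
pitch_est, pitch_true = 0.0, 0.0
for _ in range(100):                               # 1 s of data at 100 Hz
    pitch_true += 0.1 * 0.01                       # robot pitches up at 0.1 rad/s
    gyro = 0.1 + rng.normal(0.0, 0.01)             # noisy rate measurement
    accel = 9.81 * np.array([-np.sin(pitch_true), 0.0, np.cos(pitch_true)])
    pitch_est = complementary_pitch(pitch_est, gyro, accel, dt=0.01)

print(f"true pitch {pitch_true:.3f} rad, fused estimate {pitch_est:.3f} rad")
```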

Furthermore, since cameras are the primary source of most potential environmental features, image processing algorithms are required to derive situational insight from these sensors. We assign these basic algorithms to the direct-understanding part of the next layer, as they typically require only a single image frame.

(2) Situational understanding: builds on current perception by considering possible semantic relationships. It either uses perceptual observations at a given instant to construct a short-term understanding, called direct situational understanding, or incorporates previously acquired knowledge into a long-term model, called accumulated situational understanding. Multiple abstract relationships can be created to connect concepts within a situational structure model: geometric relationships (e.g., the shape of an object), semantic relationships (e.g., the type and function of an object), topological relationships (e.g., ordering in space), ontological relationships (e.g., hierarchies of commonsense concepts), dynamic relationships (e.g., motion between objects), or stochastic relationships (e.g., the inclusion of uncertainty information).
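To make these typed relationships concrete, the toy example below (our own sketch; node names, attributes, and predicates are illustrative assumptions, not the authors' data model) stores a few of them in a small situational graph built with networkx and runs a simple query over it.

```python
import networkx as nx

g = nx.MultiDiGraph()

# Entities at different abstraction levels.
g.add_node("robot", kind="agent", pose=(1.0, 2.0, 0.0))
g.add_node("chair_1", kind="object", category="chair")   # semantic attributes
g.add_node("room_A", kind="room")
g.add_node("floor_0", kind="floor")

# Typed edges encode the different kinds of relations listed above.
g.add_edge("robot", "room_A", relation="topological", predicate="is_in")
g.add_edge("chair_1", "room_A", relation="topological", predicate="is_in")
g.add_edge("room_A", "floor_0", relation="ontological", predicate="part_of")
g.add_edge("robot", "chair_1", relation="geometric",
           predicate="relative_pose", value=(0.8, -0.3, 0.0))

# Simple query: which entities share a room with the robot?
room = next(v for _, v, d in g.out_edges("robot", data=True)
            if d["predicate"] == "is_in")
cohabitants = [u for u, _, d in g.in_edges(room, data=True)
               if d["predicate"] == "is_in" and u != "robot"]
print(cohabitants)  # ['chair_1']
```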

Furthermore, situational understanding is also influenced by mechanisms such as attention, which is governed by decision-making and control processes (e.g., searching for a specific object in a room versus obtaining a global overview of it).

(3) Situational prediction: predicting the future is critical to the decision-making process, and a higher level of understanding contributes to this ability. A deeper understanding of the environmental context, including the robot's position, velocity, and attitude, as well as any static or dynamic obstacles in the surrounding area, leads to more accurate model predictions. Prediction involves projecting the future state of both the autonomous robot itself and external agents in order to anticipate behaviors and interactions, enabling the robot to adapt its actions and efficiently achieve its goals.
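As a minimal, self-contained sketch of this prediction step (our own illustration, not the paper's method), the snippet below propagates the robot's own state and a hypothetical pedestrian forward with a constant-velocity motion model, letting the uncertainty grow with the horizon as in the predict step of a Kalman filter; all numeric values are assumed.

```python
import numpy as np

def predict_cv(x, P, dt, accel_noise=0.5):
    """Constant-velocity predict step for a planar state x = [px, py, vx, vy]."""
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    G = np.array([[0.5 * dt**2, 0.0],
                  [0.0, 0.5 * dt**2],
                  [dt, 0.0],
                  [0.0, dt]])
    Q = G @ G.T * accel_noise**2                   # process noise from random accelerations
    return F @ x, F @ P @ F.T + Q

robot = np.array([0.0, 0.0, 1.0, 0.0])             # moving along +x at 1 m/s
pedestrian = np.array([5.0, -1.0, -0.5, 0.2])      # dynamic obstacle heading towards the robot
P_robot, P_ped = np.eye(4) * 0.01, np.eye(4) * 0.01

for _ in range(10):                                # predict 1 s ahead at 10 Hz
    robot, P_robot = predict_cv(robot, P_robot, dt=0.1)
    pedestrian, P_ped = predict_cv(pedestrian, P_ped, dt=0.1)

print("robot in 1 s:", robot[:2], "position std:", np.sqrt(np.diag(P_robot)[:2]))
print("pedestrian in 1 s:", pedestrian[:2])
```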

The remainder of this paper delves into the research questions that naturally arise from the SA themes outlined above.

Question 2: What achievements have been made so far and what challenges remain?

Question 3: What might be the future direction of situational awareness?

Thus, by reviewing current state-of-the-art robotics approaches to perception, understanding, and prediction, we examine the field of situational awareness as a whole and assess the progress and limitations of its components. We then discuss the directions in which we anticipate research will address the remaining challenges and bridge the gap between robotics and mature intelligent autonomous systems.

03 Summary of the main contributions of this paper

(1) Comprehensive review of state-of-the-art methods: We provide an in-depth analysis of recent research involving enhanced situational awareness for mobile robotic platforms, covering computer vision, deep learning, and SLAM techniques.

(2) Identify and analyze challenges: We categorize and discuss the reviewed methods according to the proposed definition of situational awareness for mobile robots, and highlight the limitations they still face on the way to full autonomy.

(3) Suggest future research directions: We provide insights and suggestions on future research directions and open problems that must be addressed to develop efficient and effective situational awareness systems for mobile robotic platforms.
