Conceptual architecture of autonomous driving and assisted driving systems (1)

Summary:

This article mainly introduces the functional block diagrams of the architecture, covering the underlying computing units, sample workloads, and relevant industry standards.

Preface

This document refers to the Autonomous Vehicle Computing Consortium's conceptual system architecture for autonomous driving and assisted driving computing systems.

The architecture is designed to be consistent with SAE L1-L5 levels of driving automation. This article mainly introduces the functional block diagrams, covering the underlying computing units, sample workloads, and relevant industry standards.

This article is the first part, which introduces each functional module.

[Figure: Autonomous driving system architecture components]

1. Introduction to functional module subsystems



 

1.1 ADS core computing modules (blue blocks)

 

Perception Module  -  Uses sensor data, vehicle odometry data, and back-end information (e.g., map data) to detect and track infrastructure and objects within the sensors' field of view, producing object, feature, or canonical (norm) data.

Scene Understanding  - Assesses the current driving scenario and predicts or anticipates the intent and behavior of dynamic entities relative to the autonomous vehicle.

Ego Motion  -  Estimates the motion of the ego vehicle using different sensor inputs, such as data from an inertial measurement unit (IMU) and wheel speed sensors.

Positioning  -  Determines the position, orientation, and heading of the vehicle.

Motion Control  -  Interacts with actuators including brakes, steering and transmission to achieve desired trajectories.

Mission Control -  Maintains or changes the vehicle mission based on passenger status, vehicle operator requests, or direct feedback from the behavior planner; provides feedback to the operator.

Occupant Monitoring  - Determines the status of vehicle occupants and identifies situations that may require modification of the mission, the Dynamic Driving Task (DDT), and/or strategic planning (navigation).

Path Planning -  Determines the route a vehicle should take to reach a target destination.

Behavior Planning -  Makes maneuvering decisions (e.g., changing lanes, overtaking, emergency stops) within defined route objectives.

Trajectory Planning -  Plans the maneuvering path and provides the motion control module with a target trajectory.

Operational Domain Monitoring -  Monitors the capabilities, status, and conditions of entities involved in dynamic driving tasks to ensure that the vehicle is operating within the ODD (Operational Design Domain).   

1.2 Service modules (gray blocks)

Human Machine Interface (HMI) -  Serves as the primary interface for occupants to provide input to the system (e.g., pedals, steering wheel, graphical user interface, or other), to influence the behavior of the system, or to receive feedback on the status of the current mission, upcoming tasks, and/or applicable constraints. The HMI can also serve as the interface between the vehicle and the outside world, ensuring that pedestrians or other vehicles are aware of the vehicle's intentions, health status, or operating status.

Connected Services -  Provide data interfaces that ensure the autonomous driving system receives the latest map, traffic, or other data to support upcoming tasks. Communication is bidirectional, for example providing autonomous driving system information, passenger health status, or other related data.

Vehicle-to-X (V2X) -  Provides vehicle-to-infrastructure or vehicle-to-vehicle communication as a relevant system input to support safe and efficient operation of the vehicle. This can supply information about infrastructure (such as traffic light status or location) or about vehicles that the autonomous vehicle might not otherwise see.
 

1.3 Cross-functional attributes (yellow blocks)

The building blocks above represent the main modules of end-to-end functionality from L1 to L5. The yellow modules involve cross-functional attributes. Although important, they are not central to the computing functions and will not be fully covered in this article.

2. Core functions and interactions of autonomous driving systems


 

This section describes the core functional modules of the autonomous driving system and their interactions. Interactions between modules are expressed through data transfers, which are described here as "signals". These signals are included to help illustrate the types of algorithms/computations that may exist within each functional module.

Because the functional architecture is designed to accommodate different levels of autonomous driving scalability and a wide range of autonomous driving system solutions, the signals required for each functional module will vary between different implementations.

Furthermore, the description of these signals leaves room for interpretation in many cases, ensuring that the diversity of current solutions/approaches is reflected.

[Figure: ADS core functionality and interactions]

2.1 Mission Control

[Figure: Mission control]

Mission control combines input from vehicle occupants, vehicle operators (drivers or remote operators), and operational domain supervision to maintain or change the autonomous vehicle mission, passing the resulting goals and boundaries to path planning. In this task, it uses two key abstractions:

  • Autonomous vehicle mission: represents the combination of usage purpose and driving status. Includes target destinations, trip interruptions, iteration and waiting modes, and transitions between driving states during the mission.
  • Driving state: Represents the state of attention and operational authority of autonomous vehicles, drivers, and potentially supervising fleet operators. Driving states include transitions to minimal risk conditions, manual driving, and various levels of assisted and automated driving.
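To make these two abstractions concrete, the sketch below models them as simple Python data types. The field names (destination, waypoints, wait_mode, etc.) and the state set are illustrative assumptions, not part of the consortium specification.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List


class DrivingState(Enum):
    """Illustrative driving states; real systems define many more transitions."""
    MANUAL = auto()
    ASSISTED = auto()               # e.g. SAE L1/L2
    AUTOMATED = auto()              # e.g. SAE L3+
    MINIMAL_RISK_CONDITION = auto()


@dataclass
class Mission:
    """Hypothetical representation of an autonomous vehicle mission."""
    destination: str
    waypoints: List[str] = field(default_factory=list)
    wait_mode: bool = False                       # e.g. waiting for a passenger
    driving_state: DrivingState = DrivingState.MANUAL

    def transition(self, new_state: DrivingState) -> None:
        # Mission control would validate this transition against the
        # operational domain before committing it.
        print(f"Driving state: {self.driving_state.name} -> {new_state.name}")
        self.driving_state = new_state


if __name__ == "__main__":
    mission = Mission(destination="Airport", waypoints=["Charging station"])
    mission.transition(DrivingState.AUTOMATED)
    mission.transition(DrivingState.MINIMAL_RISK_CONDITION)
```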

Note: In the context of assisted driving (SAE L0-L2, including L2+ extensions), the following interpretation of the general function block diagram is proposed: active user operations, such as pressing the accelerator or turning the steering wheel to directly control or slightly guide the vehicle trajectory, are, from this simplified architectural perspective, short-term changes to the mission objectives and user route preferences. These changes are received from the human-machine interface via HMI task requests.

Therefore, these abstract concepts extend their traditional meaning. The interpretations include "turn right immediately" or "accelerate" (both user route preferences), as well as more complex assistive behaviors such as "monitored automatic overtaking if traffic conditions permit" or "Adaptive Cruise Control (ACC) that adapts depending on whether the left turn indicator is on".

As a downstream result, "route objectives" (passed to behavior planning) may reflect active user maneuvers.

Mission Control Input:

  • Operational domain supervision task request: A change in the autonomous vehicle mission triggered by condition analysis in the operational domain supervision system, which may initiate short-term changes in driving state and perform state transitions under mission control's responsibility. A mission change may also result in a change to the mission goal signal.
  • Connected services task request: The connected services trigger a change in the autonomous vehicle mission, which will cause the mission goal signal to change. As a second-order effect of the modified mission, the driving state may change.
  • Human-machine interface task request: Any type of user interaction triggers a change in the autonomous vehicle mission, which may immediately change the driving state and thereby change the mission goal signal.
  • Behavior planning task request: Behavior planning triggers changes in the autonomous vehicle mission, for example recognizing/responding to changes in traffic conditions to recalculate user route preferences.

Mission control output:

  • User route preferences: Generated by extracting data from the autonomous vehicle mission so as to complete the mission while minimizing the risk of leaving the current operating domain.
  • Mission goals: Generated from the autonomous vehicle mission, expressed as usage goals.
  • Task feedback: Integrates the autonomous vehicle mission and the four request signals to determine whether driver alerts, specific action requests, or progress reports are needed.


 

2.2 Occupant monitoring

[Figure: Occupant monitoring]

The passenger monitoring function module is responsible for observing the status of occupants in the car and providing relevant status data to other parts of the assisted/autonomous driving system. Occupants are sensed through one or more dedicated in-car sensors. This may include monitoring of the driver and/or passengers to sense their status, potential medical emergencies, or inappropriate behavior. The function can also be provided by vehicle systems other than the autonomous driving unit. For example, a vehicle cockpit control system may implement a superset of the occupant monitoring states required by the assisted/autonomous driving system, such as gesture recognition for in-car entertainment. In this case, the input to this module would be the occupant monitoring status.

 Examples of potential passenger monitoring:

  • Driver Monitoring System (DMS): For assisted driving/autonomous driving systems that may require driver intervention/failure switching, the driver's readiness status can be monitored. The readiness state can include consciousness, drowsiness, emotion, health (for example: heartbeat detection), posture, etc.
  • Passenger Monitoring Systems: Fully automated driving systems may need to change destination and route parameters based on in-car emergencies (e.g., heart attack, epileptic seizure, dangerous or violent behavior). Parameters that can be monitored include: occupied seats, number of passengers, passenger health, posture, etc.
  • Self-driving taxi systems: May include detecting personal items left in the vehicle.
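As a rough illustration of how the driver monitoring example above might condense raw in-car observations into a readiness state for the rest of the stack, consider the sketch below. The thresholds, field names, and three-level readiness scale are invented for illustration only.

```python
from dataclasses import dataclass
from enum import Enum, auto


class DriverReadiness(Enum):
    READY = auto()
    DEGRADED = auto()      # drowsy or distracted; may still take over with warning
    UNAVAILABLE = auto()   # cannot be expected to take over


@dataclass
class DriverObservation:
    """Hypothetical per-frame outputs of an in-car camera pipeline."""
    eyes_closed_ratio: float   # fraction of recent frames with eyes closed
    gaze_on_road: bool
    hands_on_wheel: bool


def classify_readiness(obs: DriverObservation) -> DriverReadiness:
    # Thresholds are purely illustrative; production systems calibrate them
    # per sensor and validate them against safety requirements.
    if obs.eyes_closed_ratio > 0.6:
        return DriverReadiness.UNAVAILABLE
    if not obs.gaze_on_road or not obs.hands_on_wheel:
        return DriverReadiness.DEGRADED
    return DriverReadiness.READY


if __name__ == "__main__":
    print(classify_readiness(DriverObservation(0.1, True, True)))    # READY
    print(classify_readiness(DriverObservation(0.2, False, True)))   # DEGRADED
    print(classify_readiness(DriverObservation(0.8, False, False)))  # UNAVAILABLE
```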

Passenger monitoring input:

  • In-car sensor data: used to extract information and status of passengers in the car.

Passenger Monitor Output:

  • Passenger status: Provides status data of passengers in the vehicle to the rest of the assisted/autonomous driving system, for example, to determine driver capabilities and passenger status.


 

2.3 Perception

The perception function module is responsible for detecting, classifying and tracking entities and events in the vicinity of the autonomous vehicle. Data from on-board sensors can be combined with information from other sources, such as HD maps, V2X or connected services, to accomplish this task.

The perception module is responsible for establishing and updating a virtual representation of the environment within the vehicle's perception range.

[Figure: Perception function block diagram]

The perception module may include the following algorithms:

  • Detect infrastructure elements such as drivable road surface, road signs, traffic lights, curbs, traffic cones, construction zones, barriers, etc., as well as the dynamic properties of these elements (e.g., a toll gate barrier that is down, a red light, etc.).
  • Detect, classify and track dynamic entities such as vehicles, pedestrians and obstacles.
  • Detect environmental conditions such as weather, fire/smoke, and slippery roads.
  • Recognition/classification of more complex aspects such as human posture, erratic driving, hazardous loads, etc. may also be part of the perception module.

The perception module receives as input environmental sensor data in a canonical format. This data may come from one or more sensors, which may be based on similar or different sensing technologies (e.g., camera, radar, lidar, ultrasound) and may have overlapping fields of view. If the same physical entity is "seen" by multiple sensors, multi-sensor fusion algorithms can be employed to produce a unified view of these entities. Algorithms that track the temporal changes of entities can be employed to maintain the existence probabilities of these entities and produce short-term predictions of their paths/states. Almost all detections need to be mapped to a common world coordinate system.
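Mapping detections from a sensor or vehicle frame into a common world frame is essentially a rigid transform using the current vehicle pose. The minimal 2D sketch below shows that step; the flat-ground assumption and the omission of sensor mounting offsets are simplifications for illustration.

```python
import math
from dataclasses import dataclass


@dataclass
class Pose2D:
    x: float        # metres, world frame
    y: float
    heading: float  # radians


@dataclass
class Detection:
    x: float   # metres, vehicle frame (forward)
    y: float   # metres, vehicle frame (left)
    label: str


def to_world(det: Detection, ego: Pose2D) -> Detection:
    """Rotate by the ego heading and translate by the ego position."""
    c, s = math.cos(ego.heading), math.sin(ego.heading)
    wx = ego.x + c * det.x - s * det.y
    wy = ego.y + s * det.x + c * det.y
    return Detection(wx, wy, det.label)


if __name__ == "__main__":
    ego = Pose2D(x=100.0, y=50.0, heading=math.pi / 2)   # facing "north"
    cam_det = Detection(x=10.0, y=2.0, label="pedestrian")
    print(to_world(cam_det, ego))  # pedestrian at roughly (98, 60) in the world frame
```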

Perception inputs:

  • Environmental sensor data: Data received from environmental sensors in a canonical form (which varies depending on the sensor technology). It is processed and/or analyzed to make it easier for downstream functional modules to consume, or to extract actionable information about the environment and/or entities within the autonomous vehicle's perception range.
  • Pose: Can be used to pre-select certain parts of the perceptual field of view for feature/object detection (whether using HD maps or not).
  • Map data: Detection can be made more reliable by comparing a priori assumed infrastructure with observations from sensors.
  • Ego Motion: Can be used to adjust sensor readings to compensate for ego vehicle motion.
  • Region of Interest (ROI): Can be used to configure a sensor or algorithm to focus attention (resolution/processing) on a specific area of the sensing range. For example, this could help configure sensors with non-uniform resolution so that roads/infrastructure of specific interest are "seen" at a higher resolution. Similarly, certain portions of a high-resolution image can be extracted and processed at full resolution to determine the status of a traffic light.
  • V2X: Provides an additional source of information that can be leveraged to make detection more reliable (disambiguation).   

Perception outputs:

  • Norms (canonical data): Data can be output in the same form as the original input data, with some additional processing or transformation. The sensor data may be modified without changing its format. An example of a norm is an image that has been modified but is still an image (in contrast to converting a pixel image into a list of objects).
  • Features: Varies based on implementation. They can be represented as 3D world coordinates or not; they can be tracked or not.
  • Object: Used to populate the environment/world model. They describe the detected static and dynamic entities within the autonomous vehicle's perception range, with relevance for subsequent processing stages.
  • Perceived Capabilities: Provides information on the dynamic capabilities of perceptual functions. This can be simply expressed in terms of the range of perception and the estimated latency of detection within this range. The capability itself can be derived from the capabilities of individual sensors under various lighting/weather/environmental conditions.


 

2.4 Positioning

[Figure: Positioning function module diagram]

In the context of assisted or autonomous driving, localization refers to the process of identifying the vehicle's pose (position and orientation) in the world and within the vehicle's mapping subsystem. This process may rely on input from various sensors (GNSS, cameras, lidar, etc.) and provides information for other aspects of autonomous vehicle operation.

"Positioning" covers a wide range of areas. A simple implementation may only include the fusion of some vehicle motion with raw GNSS output, while a complex implementation may be processing input from 20 or more sensors and comparing the received data to the locations stored in the map. There may be over 10MB of map data per mile.

The associated computational loads range from negligible to far beyond the capabilities of typical automotive embedded microcontrollers today. Computational load will also vary greatly based on the amount of preprocessing of vision, lidar, radar and other sensors prior to "localization".
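A minimal version of the "simple implementation" mentioned above is sketched here: blending dead-reckoned vehicle motion with raw GNSS fixes through a complementary filter. The gain value and the planar treatment are illustrative assumptions; real localizers typically use Kalman-style filters and map matching.

```python
from dataclasses import dataclass


@dataclass
class Position:
    x: float  # metres in a local planar frame
    y: float


class SimpleLocalizer:
    """Blend dead reckoning with GNSS using a complementary filter."""

    def __init__(self, initial: Position, gnss_gain: float = 0.1):
        self.estimate = initial
        self.gnss_gain = gnss_gain  # how strongly each GNSS fix corrects drift

    def predict(self, dx: float, dy: float) -> None:
        """Dead-reckoning step from ego motion (wheel odometry / IMU)."""
        self.estimate = Position(self.estimate.x + dx, self.estimate.y + dy)

    def correct(self, gnss: Position) -> None:
        """Pull the estimate toward the GNSS fix."""
        k = self.gnss_gain
        self.estimate = Position(
            (1 - k) * self.estimate.x + k * gnss.x,
            (1 - k) * self.estimate.y + k * gnss.y,
        )


if __name__ == "__main__":
    loc = SimpleLocalizer(Position(0.0, 0.0))
    for _ in range(10):
        loc.predict(dx=1.0, dy=0.0)      # odometry reports 1 m forward per step
    loc.correct(Position(9.0, 0.5))      # GNSS disagrees slightly
    print(loc.estimate)
```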

Positioning input:

  • GNSS data: used to directly identify possible locations on a map. It can also be used to select areas of interest within the map for further location optimization operations.
  • Ego motion: Can be used to improve the estimation of the vehicle's own position and pose.
  • Norms | Features | Objects: Compares features identified by sensors with those contained in the map data to improve the estimation of the vehicle's own position and pose.
  • V2X data: Compared with map information to improve the estimation of the vehicle's own position and pose.
  • Map data: Reference this data to identify the most likely location of the autonomous vehicle.

Positioning output:

  • Pose: Provides the most probable position and orientation of the ego vehicle on the map.

 

2.5 Scene Understanding

[Figure: Scene understanding function module diagram]

The scene understanding function module embodies the algorithms responsible for "understanding" the current driving scene. If autonomous vehicles are to maneuver intelligently in a shared driving space, it is necessary to predict/foresee the actions of other entities within that space. Scene or scenario understanding is not just about identifying the "state" of the current situation, but also about predicting how it will evolve.

Algorithms in this functional module may be able to simulate multiple cause-and-effect scenarios to help select the best course of action for autonomous vehicles. However, scene understanding by itself does not make any decisions about the actions an autonomous vehicle should take, nor does it select which course of action to simulate.
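As a toy example of the prediction side of scene understanding, the sketch below rolls dynamic objects forward under a constant-velocity assumption. Real systems use interaction-aware and map-aware models; the classes and prediction horizon here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class DynamicObject:
    obj_id: int
    x: float   # metres, world frame
    y: float
    vx: float  # m/s
    vy: float


@dataclass
class PredictedState:
    obj_id: int
    t: float   # seconds into the future
    x: float
    y: float


def predict_constant_velocity(
    objects: List[DynamicObject], horizon_s: float = 3.0, dt: float = 0.5
) -> List[PredictedState]:
    """Propagate each object forward assuming it keeps its current velocity."""
    out: List[PredictedState] = []
    steps = int(horizon_s / dt)
    for obj in objects:
        for i in range(1, steps + 1):
            t = i * dt
            out.append(PredictedState(obj.obj_id, t,
                                      obj.x + obj.vx * t,
                                      obj.y + obj.vy * t))
    return out


if __name__ == "__main__":
    preds = predict_constant_velocity([DynamicObject(1, 0.0, 0.0, 10.0, 0.0)])
    for p in preds[:3]:
        print(p)
```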

  Scene understanding input:

  • Map data: Provides the layout of infrastructure in the current driving scenario. Useful for predicting the intentions/behavior of other road users/entities.
  • Static Objects: Identify immovable objects in driving scenes that have an impact on the drivable space that autonomous vehicles and other road entities may occupy. Static objects may have dynamically changing states, such as a traffic light turning red or a toll booth becoming closed.
  • Dynamic objects: Used to predict how the current driving scene may evolve, mainly focusing on the trajectory and state changes of dynamic objects. An example of prediction for a dynamic object is a vehicle that just turns on its left turn signal and then turns left.
  • V2X data: May provide information that can be used to foresee the evolution of current driving scenarios, such as traffic lights that are about to change.
  • Ego motion: Provides the autonomous vehicle's ego motion information to the prediction engine.
  • Pose: Allows the prediction engine to place the autonomous vehicle on the map.
  • Autonomous vehicle hypothetical maneuvers: Provide the prediction engine with hypothetical maneuvers of the autonomous vehicle. Such a maneuver may trigger reactions from other road users that need to be anticipated; for example, a lane change may force an oncoming road user in that lane to brake dangerously. Multiple autonomous vehicle maneuver hypotheses may be submitted to the scene understanding function module before behavior planning makes a decision.

Scene understanding output:

  • Dynamic object prediction: used to safely plan changes to the current trajectory of autonomous vehicles.
  • Static object prediction: used to safely plan changes to the current trajectory of autonomous vehicles.
  • Scene-Based Motion Constraints: Constrain the motion of the autonomous vehicle according to the driving scene conditions.
  • Scenario-Based Maneuver Constraints: Constrain an autonomous vehicle's planned maneuvers based on driving scenario conditions.
  • Scenario-Based Routing Constraints: Constrains the autonomous vehicle's route planning based on driving scenarios.
  • Scenario Data: Provides a collective view of driving conditions that can be leveraged to ensure autonomous vehicles operate within their design parameters.     

 

2.6 Ego Motion

[Figure: Ego motion function module diagram]

The ego motion module estimates changes in vehicle pose (position + orientation) over time. By fusing estimates computed from multiple different types of sensors, more accurate and reliable motion estimates can be obtained than from single-sensor measurements.

Depending on the system, the number of inputs used varies. A simpler ego-motion component might just process data from the IMU and chassis sensors. However, more complex systems may use all or a subset of the other inputs to compute additional motion estimates, which are then fused together.     
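The sketch below illustrates the "simpler ego-motion component" described above: integrating yaw rate from the IMU and speed from the wheel sensors into a planar motion estimate. The unicycle model and noise-free inputs are simplifying assumptions.

```python
import math
from dataclasses import dataclass


@dataclass
class EgoState:
    x: float = 0.0        # metres
    y: float = 0.0
    heading: float = 0.0  # radians


def integrate_ego_motion(state: EgoState, speed_mps: float,
                         yaw_rate_rps: float, dt: float) -> EgoState:
    """One dead-reckoning step from wheel speed and IMU yaw rate."""
    heading = state.heading + yaw_rate_rps * dt
    x = state.x + speed_mps * math.cos(heading) * dt
    y = state.y + speed_mps * math.sin(heading) * dt
    return EgoState(x, y, heading)


if __name__ == "__main__":
    state = EgoState()
    # Drive at 10 m/s while turning at 0.1 rad/s for 5 seconds.
    for _ in range(50):
        state = integrate_ego_motion(state, speed_mps=10.0,
                                     yaw_rate_rps=0.1, dt=0.1)
    print(state)
```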

Ego motion inputs:

  • Norm | Feature | Object: Uses a continuous sequence of perception data to estimate ego motion. The estimation can be based on various algorithms with different computational complexity; examples are optical flow and CNN-based estimators that exploit pixel-level data or already detected features. Classified objects with known speed (such as guardrails, which are static) can also be used.
  • Chassis sensor data: Includes control/actuator feedback to improve the ego motion estimate.
  • IMU data: Contains accelerometer and gyroscope sensor data to support ego motion estimation.
  • Compass data: Provides absolute heading measurements to improve the ego motion estimate.
  • GNSS data: Compute vehicle motion estimates from multiple sequential geospatial position readings.    

Ego motion output:

  • Ego Motion: Providing ego motion estimation enables multiple other components in the autonomous driving system (e.g. perception or localization) to perform their tasks. 


 

2.7 Path planning

 

[Figure: Path planning function module diagram]

The path planning function module provides algorithms to determine the real-time path to reach the target destination. It accepts a driver's desired destination and calculates the shortest path to that destination from the vehicle's current location, taking into account driver preferences and traffic conditions. Also known as mission planning, it is responsible for breaking down the desired task of "getting from A to B" into structured road segments, as specified and defined by the provided map (a typical example is to provide lane-level subtasks).

A set of lane-level subtasks are output, which describe the required lanes and turns for vehicles at each intersection. In addition, when the vehicle completes the current lane-level subtask, it automatically calculates the next set of goals and provides the next lane-level subtask. At intersections with stop signs, traffic lights or yield requirements, the module consults input from the perception system to decide whether the car can proceed to the next turn. 
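Route computation over the structured road network is classically a shortest-path search on a graph of road segments. The sketch below uses Dijkstra's algorithm over a hypothetical segment graph; lane-level subtasks, traffic weighting, and user preferences are omitted for brevity.

```python
import heapq
from typing import Dict, List, Tuple

# Hypothetical road-segment graph: node -> list of (neighbour, cost in seconds).
ROAD_GRAPH: Dict[str, List[Tuple[str, float]]] = {
    "A": [("B", 60.0), ("C", 90.0)],
    "B": [("D", 120.0)],
    "C": [("D", 30.0)],
    "D": [],
}


def plan_route(graph: Dict[str, List[Tuple[str, float]]],
               start: str, goal: str) -> List[str]:
    """Dijkstra shortest path; returns the sequence of road-segment nodes."""
    frontier: List[Tuple[float, str, List[str]]] = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt, edge_cost in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(frontier, (cost + edge_cost, nxt, path + [nxt]))
    return []  # no route found


if __name__ == "__main__":
    print(plan_route(ROAD_GRAPH, "A", "D"))  # ['A', 'C', 'D']
```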

 Path planning input:

  • V2X data: for receiving traffic information or road alerts.
  • Map data: calculate routes.
  • Pose: Provides the starting point of the route.
  • Mission target: Specify the target destination.
  • Traffic conditions: Provides dynamic source information on traffic conditions near autonomous vehicles or on route plans.
  • User route preferences: Provides preferences or rules that constrain route selection. These may come from a ride-hailing service or from the autonomous vehicle itself, for example staying within the operational design domain or avoiding toll roads.
  • Scenario-based route constraints: Provides dynamically determined route constraints such as closed road signs, closed lanes on a highway, or highway entrances closed with roadblocks.
  • Route Constraints Based on Operational Domain Monitoring: Routes are constrained due to the need to stay within the operational design domain. An example is avoiding certain routes at certain times due to lack of street lighting.

Path planning output:

  • Path plan: Provides a description of the route the autonomous vehicle will take, including applicable lanes, to reach the target destination.


 

2.8 Behavior planning (driving strategy)

The Behavior Planning (BP) functional module provides algorithms to make maneuver decisions within route objectives.

[Figure: Behavior planning function module diagram]

Maneuvers are selected using a multi-model path planning algorithm. Given the target tracks and predicted behavior of all dynamic objects within the driving space and corridor, the behavior planner evaluates multiple possible maneuvers simultaneously and then correlates them with updated road observations.

Behavioral planning requires balancing driving efficiency with vehicle safety and comfort. Driving efficiency means determining the best lane or road to get to your destination quickly, while comfort considerations mean getting to that lane or corridor safely. Lane ranking and feasibility checks are two core elements of vehicle behavior planning.

Regarding lane ranking, the algorithm follows three main principles:

1. The fewer lane changes, the better. 

2. The further away from the moving object in front, the higher the score.

3. The faster the object ahead is moving, the faster the vehicle can travel in that lane, and the higher the score.

After ranking each possible lane, the algorithm determines their feasibility and assigns costs. The figure below shows an example of how the algorithm defines feasibility and selects a lower-cost list of maneuvers.

[Figure: Lane ranking decision tree]
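A minimal sketch of the three ranking principles above, scoring candidate lanes by lane-change count, headway to the lead object, and lead-object speed. The weights and the simple feasibility flag are invented for illustration; real planners tune or learn these parameters.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class LaneCandidate:
    lane_id: int
    lane_changes_needed: int    # principle 1: fewer is better
    lead_distance_m: float      # principle 2: larger is better
    lead_speed_mps: float       # principle 3: faster is better
    blocked: bool = False       # feasibility: e.g. closed or occupied lane


def rank_lanes(candidates: List[LaneCandidate]) -> Optional[LaneCandidate]:
    """Return the feasible lane with the lowest cost, or None if all are blocked."""
    feasible = [c for c in candidates if not c.blocked]

    def cost(c: LaneCandidate) -> float:
        # Illustrative weights only.
        return (5.0 * c.lane_changes_needed
                - 0.1 * c.lead_distance_m
                - 0.5 * c.lead_speed_mps)

    return min(feasible, key=cost) if feasible else None


if __name__ == "__main__":
    lanes = [
        LaneCandidate(0, lane_changes_needed=0, lead_distance_m=20, lead_speed_mps=8),
        LaneCandidate(1, lane_changes_needed=1, lead_distance_m=80, lead_speed_mps=25),
        LaneCandidate(2, lane_changes_needed=2, lead_distance_m=120, lead_speed_mps=30,
                      blocked=True),
    ]
    best = rank_lanes(lanes)
    print(best.lane_id if best else "no feasible lane")  # prints 1
```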

The list of maneuvers contains high-level semantic decisions and physical parameters to be performed by the vehicle. Examples of maneuvers could be (not exhaustive):

  • Cruise: Stay in the current lane at the set speed.
  • Follow: Stay in the current lane, drive at the provided speed limit, and follow the vehicle in front (with the provided speed and ID) at the minimum distance.
  • Turn: Turn from the current lane into the target lane, left or right, at the provided steering speed.
  • Reroute: Move from the current lane to the target lane, overtaking the target vehicle with the provided acceleration [or letting the target vehicle pass with the provided deceleration].
  • Stop: Slow down to zero speed within the provided distance and stay in the current lane.
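The maneuver examples above pair a semantic decision with physical parameters; one compact way to carry that across the interface is a tagged record, as sketched below. The field names are assumptions, not a defined interface.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class ManeuverType(Enum):
    CRUISE = auto()
    FOLLOW = auto()
    TURN = auto()
    LANE_CHANGE = auto()   # "reroute" in the list above
    STOP = auto()


@dataclass
class Maneuver:
    """Semantic decision plus the physical parameters needed to execute it."""
    kind: ManeuverType
    target_speed_mps: Optional[float] = None
    target_lane_id: Optional[int] = None
    target_vehicle_id: Optional[int] = None
    stop_distance_m: Optional[float] = None


if __name__ == "__main__":
    follow = Maneuver(ManeuverType.FOLLOW, target_speed_mps=22.0, target_vehicle_id=17)
    stop = Maneuver(ManeuverType.STOP, stop_distance_m=35.0)
    print(follow, stop, sep="\n")
```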

Behavior planning input:

  • Static object prediction: for assessing safety and comfort of maneuvers.
  • Dynamic object prediction: for assessing safety and comfort of maneuvers.
  • Route objectives: Provide basic guidelines, such as lanes or travel corridors, for maneuvering decisions.
  • Scenario-based maneuver constraints: Used to exclude maneuvers that are prohibited by current driving scenario conditions.
  • Maneuver Constraints Based on Operational Domain Monitoring: Ensures that the resulting maneuver of the autonomous vehicle does not violate the current operational design domain boundary conditions.
  • Ego motion: Used for maneuver assessment.

Behavior planning output:

  • Region of Interest (ROI): Provides a description of one or more sensing areas that should be prioritized.
  • Maneuvering: Provides high-level semantic decisions and physical parameters to be executed by autonomous vehicles.
  • Behavior Planning Task Request: A modification task is requested by the Behavior Planner. This may be required if the Behavior Planner determines that the task cannot proceed as planned. The nature of the request could be a request for a human driver to take over, for example.
  • Autonomous Vehicle Maneuver Hypotheses: Prepare one or more hypothetical maneuvers for which the system can predict one or more outcomes.         

 

2.9 Trajectory planning (path planning)


 

[Figure: Trajectory planning function module diagram]

The trajectory planning function module provides algorithms to plan the maneuver path that controls steering, braking, and acceleration. It works closely with behavior planning; sometimes both are produced as outputs of the same algorithm, or they adjust each other recursively through feedback.

Autonomous vehicles rely on real-time vehicle status and environmental information (such as surrounding vehicles and road conditions) to compute local trajectories that ensure safe passage while minimizing deviation from the overall travel trajectory (the global trajectory from path planning). Local trajectory planning can be defined as planning the transition of a vehicle from one feasible state to the next feasible state in real time. This is done while satisfying kinematic constraints based on vehicle dynamics, passenger comfort, lane boundaries, and traffic regulations, and while avoiding obstacles.

Limitations such as sensor range, the time needed to predict the movement of traffic participants, and sensor imperfections limit the maximum vehicle speed for which maneuvers can be calculated. Therefore, risk-assessed prediction of traffic participant movements is an important part of maneuver planning, which is achieved through a model-based abstraction of traffic movements. Trajectory planning methods for obstacle avoidance employ one of the techniques shown in the table below.

[Figure: Control strategy advantages and disadvantages]

It should be noted that all of the above methods assume that the trajectory planning system can obtain accurate knowledge of the environment and the state of the leading vehicle on demand. Non-robust trajectory planning methods may lead to unachievable and/or unsafe reference trajectories, which poses significant safety risks, especially during high-speed driving. The various trajectory planning techniques discussed above propose different ways to deal with uncertainty in current environment perception and limited future prediction capabilities.
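One common building block for local trajectory planning is fitting a smooth polynomial between the current state and a target state over a fixed time horizon. The quintic fit below (position, velocity, and acceleration boundary conditions on one axis) is a standard textbook construction shown only as a sketch; real planners additionally enforce dynamic limits and check candidate trajectories against predicted obstacles.

```python
import numpy as np


def quintic_coefficients(x0, v0, a0, xT, vT, aT, T):
    """Coefficients of x(t) = c0 + c1*t + ... + c5*t**5 matching the
    position/velocity/acceleration boundary conditions at t=0 and t=T."""
    c0, c1, c2 = x0, v0, a0 / 2.0
    # Remaining three coefficients follow from the end conditions.
    A = np.array([
        [T**3,     T**4,      T**5],
        [3 * T**2, 4 * T**3,  5 * T**4],
        [6 * T,    12 * T**2, 20 * T**3],
    ])
    b = np.array([
        xT - (c0 + c1 * T + c2 * T**2),
        vT - (c1 + 2 * c2 * T),
        aT - 2 * c2,
    ])
    c3, c4, c5 = np.linalg.solve(A, b)
    return [c0, c1, c2, c3, c4, c5]


def evaluate(coeffs, t):
    return sum(c * t**i for i, c in enumerate(coeffs))


if __name__ == "__main__":
    # Lateral move of 3.5 m (one lane width) in 4 s, starting and ending at rest.
    coeffs = quintic_coefficients(0.0, 0.0, 0.0, 3.5, 0.0, 0.0, T=4.0)
    for t in (0.0, 1.0, 2.0, 3.0, 4.0):
        print(f"t={t:.1f}s  y={evaluate(coeffs, t):.2f} m")
```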

Trajectory planning input:

  • Static object prediction: for assessing safety and comfort risks in trajectory calculations
  • Dynamic object prediction: for assessing safety and comfort risks in trajectory calculations
  • Maneuver: Provides maneuver targets to be further processed into trajectories
  • Ego motion: Used for trajectory calculation
  • Scene-based motion constraints: Provides vehicle motion constraints for driving scene conditions, such as slippery/degraded road surfaces, to be considered in trajectory calculations.
  • Vehicle Motion Constraints: Provides dynamic motion limit feedback from motion control to be applied to trajectory calculations

Trajectory planning output:

  • Target Trajectory: A target trajectory expressing changes in steering, braking, and acceleration along a trajectory path (curved path)

 

2.10 Motion control (actuation)

[Figure: Motion control function module diagram]

The motion control (actuation) function module is responsible for requesting propulsion changes related to autonomous vehicle motion, including but not limited to acceleration requests, braking requests, and steering requests. Its responsibilities include:

  • Provides interfaces to various external execution modules, such as electric power steering (EPS), anti-lock braking (ABS), PRNDL transmission gear selection, traction control, etc.
  • Provides the necessary middleware layer to manage complete and sufficient interfaces with external execution modules, which have different levels of complexity and capabilities. For example, an external execution module could include learning capabilities based on environmental data and conditions, or it could be a simple, traditional request-based system.
  • Receives, manages, and exposes the vehicle motion constraints reported by the external execution modules, and presents them to other functional modules of the autonomous driving system. This may include activities such as aggregation, synchronization, statistical analysis, and packaging.
  • Transform the goal trajectory into appropriate execution requests for external modules, including considering ego-vehicle motion inputs and external actuator predictions.
  • Preprocess as needed for external execution modules.
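As a rough sketch of the responsibility listed above of transforming the target trajectory into actuator requests, the code below applies a pure-pursuit-style steering computation and a proportional speed controller. The gains, the interface fields, and the flat bicycle model are illustrative assumptions, not the consortium's interface definition.

```python
import math
from dataclasses import dataclass


@dataclass
class TrajectoryPoint:
    x: float          # metres, vehicle frame (forward)
    y: float          # metres, vehicle frame (left)
    speed_mps: float  # desired speed at this point


@dataclass
class ActuatorRequest:
    steering_angle_rad: float
    accel_mps2: float   # positive = propulsion, negative = braking


def compute_actuator_request(target: TrajectoryPoint, current_speed_mps: float,
                             wheelbase_m: float = 2.8,
                             speed_gain: float = 0.5) -> ActuatorRequest:
    # Pure-pursuit-style steering toward a look-ahead point on the trajectory.
    lookahead = math.hypot(target.x, target.y)
    alpha = math.atan2(target.y, target.x)
    steering = math.atan2(2.0 * wheelbase_m * math.sin(alpha), lookahead)
    # Simple proportional speed control toward the desired speed.
    accel = speed_gain * (target.speed_mps - current_speed_mps)
    return ActuatorRequest(steering_angle_rad=steering, accel_mps2=accel)


if __name__ == "__main__":
    point = TrajectoryPoint(x=10.0, y=1.0, speed_mps=15.0)
    print(compute_actuator_request(point, current_speed_mps=12.0))
```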

Motion Control Input:

  • Target trajectory: A request from trajectory planning that is the main consideration for the motion control (actuation) output.
  • Ego motion: Receives the ego vehicle's motion information and considers it together with the target trajectory to generate actuator requests.
  • Actuator feedback: Exposes vehicle motion constraints to the system. Since motion control (actuation) can interface with various external execution modules, this feedback will vary in units, format, type, etc.

Motion control output:

  • Actuator requests: Provides the main request output to multiple external execution modules.
  • Vehicle motion constraints: Provide limit feedback from external execution modules, which are summarized and presented to other autonomous driving system modules, such as trajectory planning (path planning).         

 

2.11 Operational Domain Supervision (ODS)

[Figure: Operational domain supervision function module diagram]

The operational domain supervision module monitors capabilities, status and conditions related to dynamic driving tasks with the goal of ensuring that autonomous vehicles operate within the operational design domain and other applicable dynamic and static constraints. It implements runtime monitoring using two operational domain representations:

  • The Authorized Operational Domain represents the union of the Design Intent Operational Design Domain and the current applicable state, which may evolve during the system life cycle based on legislation, verification, and/or actions to resolve vulnerabilities.
  • The current operating domain represents a conditionally constrained subset of the above baseline domain, modified by the following set of conditions.

Operational domain supervision mainly affects mission control functions, but also guides path planning and behavior planning.

A set of main conditions guides operational domain monitoring:

  • Maneuvering capabilities: Define the maneuvering envelope of the autonomous vehicle through interpretation of scene data, ego motion, and V2X input signals.
  • Traffic Situation: Provides information about other road users at current and future locations.
  • Road system state: Includes dynamic and static aspects of road surfaces, road semantics, and geofencing.
  • Driver Competence: Assesses driver engagement in taking over control, automated emergency actions, or other task changes.
  • Occupant Status: Indicates the autonomous vehicle passenger status to terminate or change the mission.
  • Perception capabilities: Indicates sensor status, algorithm confidence, and environmental conditions related to perception.
  • System integrity: includes the technical status of sensors, actuators, support systems and computing units, covering functional safety, reliability, availability and safety perspectives.
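A minimal sketch of how these primary conditions might be aggregated into a current-operating-domain decision is shown below. The boolean condition flags and the "worst case wins" aggregation are simplifying assumptions; real supervisors reason over much richer, partly continuous state.

```python
from dataclasses import dataclass
from enum import IntEnum


class DomainStatus(IntEnum):
    """Ordered so that a higher value means a more restricted domain."""
    NOMINAL = 0
    RESTRICTED = 1       # e.g. reduce speed, tighten maneuver constraints
    LEAVING_ODD = 2      # request mission change / driver takeover


@dataclass
class PrimaryConditions:
    maneuvering_ok: bool
    traffic_ok: bool
    road_system_ok: bool
    driver_ready: bool
    occupants_ok: bool
    perception_ok: bool
    system_integrity_ok: bool


def evaluate_operating_domain(c: PrimaryConditions) -> DomainStatus:
    """Worst-case aggregation of the primary conditions."""
    if not (c.perception_ok and c.system_integrity_ok and c.road_system_ok):
        return DomainStatus.LEAVING_ODD
    if not (c.maneuvering_ok and c.traffic_ok and c.driver_ready and c.occupants_ok):
        return DomainStatus.RESTRICTED
    return DomainStatus.NOMINAL


if __name__ == "__main__":
    conditions = PrimaryConditions(True, True, True, False, True, True, True)
    print(evaluate_operating_domain(conditions).name)  # RESTRICTED
```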

Operation domain supervision signal input:

  • V2X data: Used to determine mobility capabilities, road system status, and traffic conditions.
  • Ego motion: Used to determine maneuvering capability.
  • Map data: Used to determine the state of the road system and generate routing constraints based on operational domain supervision.
  • Pose: Used to determine the state of the road system.
  • Scenario data: used to determine maneuverability, road system status and traffic conditions.
  • Passenger status: used to determine driver ability and passenger status.
  • Perception capabilities: Used directly as a primary condition for operational domain monitoring.
  • System integrity: used directly as a primary condition for operational domain monitoring.

Operation domain supervision signal output:

  • Route constraints based on operational domain supervision: Route constraints are generated by combining the primary condition set with map data to ensure that the boundaries of the current driving state are maintained.
  • Maneuver constraints based on operational domain supervision: Maneuver constraints are generated by combining the primary condition sets to ensure maneuverability in the current and subsequent states and positions. This allows the vehicle to stay within the current autonomous driving mission without exceeding operational design domain boundary conditions.
  • Operational domain supervision task request: If the vehicle will leave the operational domain immediately or in the future, this request is generated to address the issue by changing the autonomous driving mission as a mid-term or short-term strategy to avoid leaving the operational domain. This request may result in a driving state transition, such as from autonomous driving to manual driving.

 

2.12 SAE level scalability

The complexity within each functional block (e.g., different computing elements, memory, etc.) as well as the signal interfaces (e.g., number of signals and required bandwidth) scale with the automation level.

[Figure: Function module diagram]

For example, a very simple SAE L1 level function may only require a subset of the function blocks to perform relevant calculations, while other function blocks may only be signal pass-throughs or contribute little to the overall system performance requirements.

The example below illustrates a simple SAE L1 level function with limited contributions from operational domain supervision, positioning, passenger monitoring, mission control, path planning, mapping and V2X (indicated in yellow). The remaining function blocks will make the main contribution to the function implementation.

[Figure: SAE Level 1 functional block diagram example]

Source|  ZhiCheRobot 

Origin blog.csdn.net/yessunday/article/details/132592107