Self-Driving Car Study Notes -- Lesson 1: Overview of Autonomous Driving

Background: my knowledge of autonomous driving starts as a blank sheet of paper.

2020-11-10

01 Overview of the driverless industry

1. Self-driving (autonomous car): a vehicle that can perceive its environment and carry out driving behavior with minimal human intervention.

2. An unmanned car is a mobile intelligent robot that can carry people. It achieves highly efficient and highly reliable driving through a rich perception system and an intelligent behavior system.

3. Autonomous driving: does not depend on human behavior at all.
Automated driving (autopilot): human participation and supervision may still be involved in some cases.
(In general usage the two terms are treated as the same.)

4. The trends of future travel: shared mobility, new-energy vehicles, and unmanned driving.

5. What problems can unmanned driving solve:
- Improve traffic efficiency
- Improve vehicle utilization (no parking space is needed)
- Reduce traffic accidents

6. In 2019, demand for L3- and L4-level autonomous-driving R&D personnel far exceeded supply, and talent was concentrated in autonomous-driving algorithm and system positions.
Baidu's Apollo autonomous driving project is open source.
Around 2016 the technology began to mature and capital poured in.

02 Unmanned driving technology path

1. Levels L0~L5, as defined by SAE (the Society of Automotive Engineers).

2. The levels, simply put:
L0: the human does all the driving.
L1: cruise control (adaptive cruise control, ACC); the cruise system controls the vehicle longitudinally, i.e. acceleration and deceleration.
L2: lane keeping assist; the system can control the car longitudinally or laterally, but the human driver still does the main driving and the system only assists.
L2.5: adds lane-changing capability under simple road conditions, such as current Tesla vehicles.
There is a gap between L2 and L3 over power and responsibility (who is responsible when an accident happens?).
L3: on top of L2, provides lane-changing capability; within certain periods of time the car is the responsible party. Example: the latest Audi A8.
L4: essentially fully driverless, i.e. the car is in charge most of the time. Examples: Waymo, Baidu.
L5: driving no longer involves humans at all; an L5 vehicle has no steering wheel, pedals, or other takeover equipment.

Currently, market demand for L4 talent is the highest.

3. Using lidar for automatic navigation is costly, and few lidar units are automotive-grade (meet vehicle regulations), so camera vision is generally used.
The Audi A8's Level 3 autopilot only engages under certain conditions: multi-lane roads, speeds of 60 km/h or less, and good weather.

At present, the highest level of autonomous driving in practice is L4, held by Waymo of the Google (Alphabet) camp.

In addition, Nuro's logistics delivery vehicle has also reached Level 4.

4. Ideas for realizing L4-level driverless driving:
1) V2X: vehicle to everything (vehicle-road coordination)
- V2V (vehicle to vehicle, e.g. the car ahead)
- V2I (vehicle to infrastructure, e.g. traffic lights)
- V2P (vehicle to pedestrian, e.g. people on the road)

2) Edge computing
- RSU (Roadside Unit): for example, pedestrian positions on the zebra crossing, determined by the traffic-light camera, are sent to the unmanned vehicle
- OBU (On-Board Unit): the module on the unmanned vehicle that receives the information sent by the RSU
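To make the RSU → OBU flow concrete, here is a minimal sketch using plain UDP and a made-up JSON message format; real V2X stacks use LTE-V/DSRC radios and standardized message sets, not raw sockets:

```python
# Minimal RSU -> OBU sketch over UDP (hypothetical message format, illustration only;
# real V2X uses LTE-V/DSRC radios and standardized message sets, not plain sockets).
import json
import socket

OBU_ADDR = ("127.0.0.1", 9000)  # assumed address of the on-board unit

def rsu_send_pedestrians(pedestrians):
    """RSU side: send pedestrian positions seen by the traffic-light camera."""
    msg = json.dumps({"type": "pedestrians", "objects": pedestrians}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg, OBU_ADDR)

def obu_receive(timeout_s=1.0):
    """OBU side: receive one roadside message and return it as a dict."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(OBU_ADDR)
        sock.settimeout(timeout_s)
        data, _ = sock.recvfrom(4096)
        return json.loads(data)

# Example: the RSU reports two pedestrians on the zebra crossing (x, y in meters).
# rsu_send_pedestrians([{"x": 3.2, "y": 1.5}, {"x": 4.0, "y": 1.6}])
```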

3) 5G communication capability
- LTE-V protocol: a protocol designed specifically for vehicle communication, compatible across 4G and 5G. 4G has relatively small bandwidth and relatively high latency, so large data volumes cause transmission congestion.

4) Roadside intelligence: strong perception capability (Baidu's ACE plan aims to build this)

5) Main-vehicle intelligence (deep learning fills in the last piece of the software problem)

6) Sensing ability: highly complex and redundant sensors

7) Decision-making ability: intelligent decision-making under big data

8) High-precision map: rich map data information

9) Positioning: precise location acquisition capability

5. To deal with the rights and responsibilities of unmanned vehicles, there is the RSS (Responsibility-Sensitive Safety) model.
The purpose of the RSS model is to attach specific, measurable parameters to the concept of responsibility between autonomous vehicles and human drivers. Based on analysis and statistics of behavior and environment in recorded traffic data, it defines a measurable "safe state" for autonomous vehicles; with such rules the software can make the safest decision at any moment.
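For reference, the RSS paper defines a minimum safe longitudinal following distance. The sketch below reflects my reading of that formula (rear-car speed v_r, front-car speed v_f, response time rho, and assumed acceleration/braking bounds); treat it as an illustration, not the authoritative definition:

```python
def rss_safe_longitudinal_distance(v_r, v_f, rho, a_max_accel, a_min_brake, a_max_brake):
    """Minimum safe following distance per my reading of the RSS model (illustrative).

    v_r: rear (following) car speed [m/s]
    v_f: front (lead) car speed [m/s]
    rho: response time of the rear car [s]
    a_max_accel: max acceleration the rear car might apply during rho [m/s^2]
    a_min_brake: minimum braking the rear car is guaranteed to apply afterwards [m/s^2]
    a_max_brake: maximum braking the front car might apply [m/s^2]
    """
    d = (v_r * rho
         + 0.5 * a_max_accel * rho ** 2
         + (v_r + rho * a_max_accel) ** 2 / (2 * a_min_brake)
         - v_f ** 2 / (2 * a_max_brake))
    return max(d, 0.0)  # a safe distance can never be negative

# Example: both cars at 20 m/s with a 0.5 s response time.
# print(rss_safe_longitudinal_distance(20, 20, 0.5, 3.0, 4.0, 8.0))
```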

6. Camera perception on unmanned vehicles uses deep learning.

7. The difficulty for drones lies in their own control: once airborne, the surroundings (such as trees) are static, so the hard part is keeping themselves stable in the airflow. The difficulty for unmanned vehicles lies in their dynamic interaction with the outside world.

03 Overview of unmanned driving technology

1. L4 autopilot system architecture (figure omitted)
2. Autopilot hardware overview (figure omitted)
1) Perception sensors:
- Cameras: widely used for object recognition and object tracking, such as lane-line detection and traffic-light recognition; unmanned vehicles are generally fitted with multiple surround-view cameras
- Lidar: used for obstacle position detection, mapping, and auxiliary positioning; its accuracy is very high, and many solutions use lidar as the main sensor

- Millimeter-wave radar: works in rain, fog, and haze; assists in sensing the position and speed of objects; long observation range but many false detections
- Ultrasonic: a high-sensitivity short-range sensor, often used as a safety redundancy to detect imminent collisions

2) Positioning sensors:
- IMU: measures the vehicle's own pose in real time at 200 Hz or higher. It contains three single-axis accelerometers and three single-axis gyroscopes; the accelerometers measure the acceleration along three independent axes in the body frame, and the gyroscopes measure the angular velocity of the body relative to the navigation frame.

- GNSS: what we call GPS in daily use. Unmanned vehicles generally use RTK (carrier-phase differential positioning), which has a low update rate of about 10 Hz.

3) On-board computing unit:
- Efficiently connects the computing devices inside the unit, and handles the input and storage of data from external sensors
- Built-in redundancy to prevent single points of failure
- Must satisfy automotive-grade requirements for electromagnetic interference and vibration, as well as the ISO 26262 standard

ISO 26262: hardware that meets the ASIL D level must keep its random-hardware failure rate below 10 FIT, i.e. fewer than 10 failures per billion hours of operation; this is close to the limit of what the automotive industry can achieve for safety (hence the slow iteration speed).

4) Vehicle drive-by-wire system:
- Autopilot drive-by-wire: the car is controlled by simple electronic commands rather than by physical operation. (This part is the equivalent of human hands and feet on the steering wheel, accelerator, and brake.)

- In traditional cars these controls are assisted by hydraulic systems and vacuum booster pumps; the wire control of autonomous vehicles must be done with electronically controlled components, such as electro-hydraulic brakes.
- Continental (a Tier 1 supplier) offers a braking solution with a dual-braking safety strategy: the MK C1 module integrates the hydraulic and braking units in a compact, lightweight, energy-saving package, and because the brake signal is transmitted electrically, the braking distance is shorter.
3. Overview of autopilot software (figure omitted)
1) RTOS: real-time operating system (as opposed to general-purpose operating systems such as Windows, macOS, or desktop Linux)
HMI: the visual display in the cockpit

2) Pipeline: sensor perception → self-localization → path planning → control (figure omitted)
3) Operating system (OS)
- RTOS: real-time operating system
- QNX: a Unix-like system with strong real-time performance; an automotive-grade real-time operating system
- RT Linux: real-time behavior is achieved by patching the Linux kernel (per the course, RT patches exist for the 2.x and 4.x kernel series)

4) Framework:
- ROS: B->M (the mainstream software framework used for autonomous driving)
- Others include YARP, Microsoft Robotics, MOOS, Cybertron
(Although ROS is the mainstream, its distributed architecture relies on a central master rather than true peer-to-peer communication; some of the alternatives make up for this. Even so, ROS is still the most practical choice.)
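To get a feel for the ROS framework mentioned above, a minimal ROS 1 (rospy) publisher node might look like this (assumes a ROS 1 installation and a running roscore; the topic name is made up):

```python
#!/usr/bin/env python
# Minimal ROS 1 publisher sketch: publishes a steering command at 10 Hz.
# Assumes ROS 1 (rospy) is installed and `roscore` is running; topic name is illustrative.
import rospy
from std_msgs.msg import Float64

def main():
    rospy.init_node("steering_demo")
    pub = rospy.Publisher("/steering_angle", Float64, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    angle = 0.0
    while not rospy.is_shutdown():
        pub.publish(Float64(angle))  # send the current steering angle in radians
        rate.sleep()

if __name__ == "__main__":
    main()
```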

5) HD Map (High-Definition map)

- Different from navigation maps; its biggest features are high dimensionality (rich attributes) and high precision
- Accurate three-dimensional representation of the road network, such as intersection layout and the location of road signs
- Map semantic information, such as road speed limits and the starting position of a left-turn lane

- Navigation maps only achieve meter-level accuracy; high-precision maps need to achieve centimeter-level accuracy
- High-precision map coordinate systems: WGS84, Mercator
- The high-precision map provides data support for the other Level 4 modules

- Provides a lot of accurate static-object information
- Positioning can use it to compute the relative position
- Helps the sensors narrow the detection range to the region of interest (ROI, the optimal working range of the device), improving sensor efficiency

- Used to compute road navigation information
- Helps the vehicle identify the exact centerline of the road

Comment: current Level 4 systems are developed on the basis of high-precision maps.

6) Positioning (Localization)
- The most important step for an unmanned vehicle is knowing where it is
- INS: Inertial Navigation System. The IMU measures the vehicle's own state (acceleration and angular velocity), and the state at the next moment is computed recursively through the state-transition matrix

- Without correction information, this state recursion accumulates error over time, and the estimated position eventually diverges

- RTK: carrier-phase differential positioning, built on GNSS (GPS)
- RTK adds a static base station whose position is precisely known, so its exact geometric relationship to the satellites is also known. When the satellite signal reaches the base station in real time, the measured value deviates from the known precise value. The unmanned vehicle receives the same satellite signals and also obtains a measurement; if the vehicle is not too far from the base station, the base station's deviation can be used to differentially smooth out the interference in the vehicle's measurement and obtain a more accurate position.
RTK provides relatively accurate position information at a lower update frequency, while the INS provides less accurate attitude/motion information at a higher frequency. A Kalman Filter fuses the two kinds of data, combining their respective advantages to provide high-accuracy, real-time information.
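A toy 1-D illustration of this fusion idea (made-up noise values; a real system is multi-dimensional and must handle timing and alignment):

```python
import numpy as np

class GnssImuFusion:
    """Toy 1-D Kalman filter: predict with IMU acceleration at ~200 Hz,
    correct with RTK position fixes at ~10 Hz. Illustrative numbers only."""
    def __init__(self, dt=1.0 / 200.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # [position, velocity] transition
        self.B = np.array([[0.5 * dt ** 2], [dt]])   # how acceleration enters the state
        self.H = np.array([[1.0, 0.0]])              # RTK measures position only
        self.Q = np.diag([1e-4, 1e-3])               # process noise (IMU drift)
        self.R = np.array([[0.02 ** 2]])             # RTK noise, ~2 cm standard deviation
        self.x = np.zeros((2, 1))                    # state estimate
        self.P = np.eye(2)                           # state covariance

    def imu_predict(self, accel):
        """High-rate prediction step driven by measured acceleration."""
        self.x = self.F @ self.x + self.B * accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def rtk_update(self, pos):
        """Low-rate correction step using an RTK position fix."""
        y = np.array([[pos]]) - self.H @ self.x              # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)             # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```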

- Geometric positioning: lidar, camera, high-precision map
- Using lidar or image information, the car can be localized by object matching: the detected data are matched against the pre-built high-precision map, and the comparison yields the vehicle's global position and heading on that map
- Methods: Iterative Closest Point (ICP), Histogram Filter

The idea of iterative closest point is to compare the lidar/camera point cloud with the high-precision map's point cloud, select the closest match, and thereby determine the vehicle's current, relatively accurate position.
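A bare-bones 2-D sketch of one ICP iteration (brute-force nearest neighbours plus the SVD-based best-fit rigid transform; production systems use k-d trees, outlier rejection, and full 3-D point clouds):

```python
import numpy as np

def icp_step(source, target):
    """One ICP iteration: match each source point to its nearest target point,
    then compute the best-fit rotation R and translation t (2-D, brute force)."""
    # 1) nearest-neighbour correspondences
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)]
    # 2) best rigid transform via SVD (Kabsch / Procrustes)
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    Hm = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(Hm)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # fix the reflection case
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return source @ R.T + t, R, t            # transformed source, rotation, translation
```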

7) Perception
Four basic tasks:
- Find the position of objects in the environment
- Know what each object is, e.g. a person or a traffic light (via the camera, etc.)
- Keep observing a moving object over time and maintain a consistent identity, to determine its motion trend
- Match each pixel in the image to a semantic category, such as road or sky
The four perception tasks can thus be summarized as: detection, classification, tracking, and segmentation.

a. Inputs: images, point clouds, radar reflection values
- Learning methods: supervised learning, semi-supervised learning, reinforcement learning
- Detection models: the R-CNN series, YOLO, SSD (all classic deep-learning algorithms)
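As a concrete example of the detection/classification family above, a pretrained Faster R-CNN from torchvision can be run on a single camera frame (assumes PyTorch and torchvision ≥ 0.13 are installed; the image path is hypothetical, and YOLO or SSD would follow a similar pattern):

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Faster R-CNN pretrained on COCO and run it on one camera frame.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("camera_frame.jpg").convert("RGB")   # hypothetical input image
with torch.no_grad():
    predictions = model([to_tensor(image)])[0]

# Keep only confident detections: bounding boxes, class ids, and scores.
keep = predictions["scores"] > 0.5
boxes = predictions["boxes"][keep]
labels = predictions["labels"][keep]
print(boxes.shape[0], "objects detected")
```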
b. Sensor data fusion: early (front) fusion vs. late (post) fusion
Early fusion merges the raw data from multiple sensors before processing. Late fusion processes each sensor separately and then, per task, selects one sensor's result as the reference value, e.g. lidar for obstacle detection and the camera for obstacle-type classification.
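A tiny late-fusion illustration (made-up data structures, nearest-distance association only): keep the lidar position and attach the class label of the nearest camera detection:

```python
import math

def late_fusion(lidar_dets, camera_dets, max_dist=2.0):
    """Late-fusion sketch: lidar gives position, camera gives class.
    lidar_dets: list of (x, y); camera_dets: list of (x, y, label)."""
    fused = []
    for lx, ly in lidar_dets:
        best, best_d = None, max_dist
        for cx, cy, label in camera_dets:
            d = math.hypot(lx - cx, ly - cy)
            if d < best_d:
                best, best_d = label, d
        fused.append({"x": lx, "y": ly, "class": best or "unknown"})
    return fused

# Example: one matched pedestrian and one unmatched lidar return.
# print(late_fusion([(10.0, 2.0), (30.0, -1.0)], [(10.3, 2.1, "pedestrian")]))
```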

8) Prediction
- Requirements: real-time performance and accuracy
- Prediction based on state: Kalman Filter, Particle Filter. (State-based prediction can be understood like this: the car's current speed, heading, and other state are known, and these filters are used to roll that state forward to estimate the next position; a simple prediction. See the sketch after this list.)

- Prediction based on lane sequence
- Can be reduced to a classification problem using a machine-learning model
- Pedestrian prediction: unmanned vehicles must pay great attention to safety, and human safety is the most important; yet changes in pedestrian intention are the hardest to predict and the least constrained.
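A toy version of the state-based prediction idea: sample noisy constant-velocity hypotheses around the current speed and heading, roll them forward, and read off a distribution over future positions (a real predictor also uses lane structure and interactions between agents):

```python
import numpy as np

def predict_positions(x, y, speed, heading, horizon=2.0, n_particles=500):
    """Particle-style prediction sketch: propagate noisy constant-velocity
    hypotheses for `horizon` seconds and return predicted (x, y) samples."""
    rng = np.random.default_rng(0)
    speeds = speed + rng.normal(0.0, 0.5, n_particles)       # speed uncertainty [m/s]
    headings = heading + rng.normal(0.0, 0.1, n_particles)   # heading uncertainty [rad]
    px = x + speeds * np.cos(headings) * horizon
    py = y + speeds * np.sin(headings) * horizon
    return np.stack([px, py], axis=1)

# Example: a car at the origin moving 10 m/s due east, predicted 2 s ahead.
# samples = predict_positions(0.0, 0.0, 10.0, 0.0)
# print(samples.mean(axis=0))  # roughly (20, 0)
```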

9) Decision-making and planning (Planning)
- Navigation-level route planning and fine-grained trajectory generation
- Mathematical problem conversion: transform the physical-world map into a mathematical representation

- Optimal path search: because the other software modules have removed uncertainty as much as possible, and the decision/planning module has extremely high stability requirements, the solution can be determined by mathematically solving for the optimal path; but exhaustively searching for the optimal solution is very time-consuming (see the A* sketch after this section)
- The ride comfort and safety of the vehicle also need to be considered

Comment: timeliness and optimality must be balanced, because spending a long time searching for the optimal solution is very dangerous for a moving vehicle.
Ride comfort also matters: for example, frequent hard braking easily makes passengers sick, and no one wants to ride in such a car.

Mathematical graph representation (figure omitted)
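As a stand-in for "optimal path search over a mathematical map representation", here is a small A* sketch on a grid (the real planner works on lane graphs and continuous trajectories):

```python
import heapq

def a_star(grid, start, goal):
    """A* on a 4-connected grid (0 = free, 1 = obstacle); returns path cost or None.
    A toy stand-in for optimal path search on a mathematical map representation."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_set = [(h(start), 0, start)]
    best_g = {start: 0}
    while open_set:
        f, g, node = heapq.heappop(open_set)
        if node == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_set, (ng + h((r, c)), ng, (r, c)))
    return None  # goal unreachable

# Example: 3x3 grid with one obstacle in the middle.
# print(a_star([[0, 0, 0], [0, 1, 0], [0, 0, 0]], (0, 0), (2, 2)))  # -> 4
```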

10) Control
- Input: the target trajectory and vehicle state; output: steering wheel, accelerator, and brake commands
- To control an unmanned vehicle we need to know relationships such as brake vs. deceleration and accelerator vs. acceleration; once these control parameters are identified, the computer controls the vehicle through them
- Control is the final guarantee of the whole driving task, so the requirements for accuracy, stability, and timeliness are very high under all circumstances; control requires a detailed vehicle model and strict mathematical formulation

- The traditional PID control algorithm can meet vehicle-control requirements, but ride comfort and some extreme conditions must also be considered; optimizing the control algorithm is an ongoing topic for unmanned vehicles, e.g. LQR, MPC.
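A textbook PID controller, e.g. for tracking a target speed, can be sketched as follows (the gains are placeholders, not tuned values):

```python
class PID:
    """Textbook PID controller sketch, e.g. for tracking a target speed."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, target, measured, dt):
        """Return a control command from the tracking error over one time step."""
        error = target - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: throttle command to reach 10 m/s from the current measured speed.
# pid = PID(kp=0.5, ki=0.05, kd=0.1)
# throttle = pid.step(target=10.0, measured=8.2, dt=0.02)
```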

Summary of this section:
Autonomous driving aims to answer four questions:
Where am I? What is around me? Where are they going? How do I get there?
(Figures omitted. The four questions correspond, in order, to localization, perception, prediction, and planning/control.)
#####################
Study notes from a Bilibili (B站) university course.
Note: the screenshots in the article are the copyright of their original authors~

Without accumulating small steps, one cannot travel a thousand miles; a good memory is no match for a worn pen (better to write things down).


Origin: blog.csdn.net/qq_45701501/article/details/109609692