Autonomous driving and vehicle-road coordination


Article Summary

Autonomous driving is becoming the biggest variable in transportation. Anyone can cause a traffic jam: the moment you step on the brake, the driver behind you has to brake too, and then the driver behind them does the same, so a single wave of braking and restarting can propagate backward for several kilometers.

Recently, I have been reading "Intelligent Transportation" (a major change that will affect humanity in the next 10 to 40 years), written by Robin Li, the founder of Baidu. This article is a brief organization and summary of my personal notes on it.


1. Autonomous driving


1.1 Autonomous Driving Standards

There are currently two sets of autonomous driving standards: an international standard, and China's own national standard.


1) Classification of autonomous driving defined by the Society of Automotive Engineers (SAE)

The driving-automation levels defined by the Society of Automotive Engineers (SAE) are currently the most common reference standard in the industry; many countries' autonomous driving classifications refer to the SAE J3016 standard.



2) Classification of autonomous driving in China

On August 20, 2021, the recommended national standard GB/T 40429-2021 "Automotive Driving Automation Classification", proposed by the Ministry of Industry and Information Technology and under the jurisdiction of the National Automotive Standardization Technical Committee, was approved and published by the State Administration for Market Regulation and the Standardization Administration of China (National Standard Announcement No. 11 of 2021). It came into force on March 1, 2022.

Level 0 driving automation (emergency assistance): the driving automation system cannot continuously perform lateral or longitudinal vehicle motion control within the dynamic driving task, but can continuously perform part of the object and event detection and response of the dynamic driving task.

Level 1 driving automation (partial driving assistance): within its design operating conditions, the driving automation system continuously performs either lateral or longitudinal vehicle motion control, together with the partial object and event detection and response appropriate to the motion control being performed.

Level 2 driving automation (combined driving assistance): within its design operating conditions, the driving automation system continuously performs both lateral and longitudinal vehicle motion control, together with the partial object and event detection and response appropriate to that control.

Level 3 driving automation (conditionally automated driving): within its design operating conditions, the driving automation system continuously performs all dynamic driving tasks.

Level 4 driving automation (highly automated driving): within its design operating conditions, the driving automation system continuously performs all dynamic driving tasks and executes the dynamic-driving-task fallback (takeover) on its own.

Level 5 driving automation (fully automated driving): the driving automation system can operate under any drivable condition.


1.2 Autonomous Driving Routes

The autonomous driving we see today can be divided into two major routes:

One is the "low perception + high processing capability" route represented by Tesla;
the other is the "high perception + high processing capability" route represented by Google's autonomous driving program.

1. Google's route

The perception capability of a self-driving car is realized by its sensors.
A self-driving car on Google's route carries cameras, ultrasonic radar, millimeter-wave radar, lidar, and other sensors.
The data they collect are fused by algorithms to judge the distance of obstacles and capture the visual details of objects.
Lidar is the key marker of whether an autonomous-driving approach belongs to the Google route or the Tesla route.


Lidar: the principle is "speed × time = distance".
A beam of light is emitted; when it hits an obstacle it is reflected back, and the reflected signal is received. The round-trip time difference multiplied by half the speed of light gives the distance to the obstacle.
One beam of light is not very useful, but fire 128 laser beams at a time, 100 times per second, in a fast 360-degree rotating scan, and the usefulness grows enormously: the collected data can reconstruct the details of every obstacle within a 150-meter radius, from the ground up to 20 meters into the sky, with an accuracy of 2 centimeters.
At the same time, lidar is not affected by lighting, and can still make accurate judgments at night without any illumination.
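
To make that arithmetic concrete, here is a minimal sketch of time-of-flight ranging; the samples-per-revolution figure is an assumption, since the text only gives the beam count and scan rate.

```python
# Minimal time-of-flight ranging sketch: distance = speed of light x round-trip time / 2.
C = 299_792_458.0  # speed of light, m/s

def distance_from_time_of_flight(round_trip_s: float) -> float:
    """The pulse travels to the obstacle and back, so halve the path length."""
    return C * round_trip_s / 2.0

print(distance_from_time_of_flight(1e-6))  # ~149.9 m: a 1-microsecond echo sits near the 150 m edge

# Point rate of the scanner described above; samples per revolution is assumed.
beams, revolutions_per_s, samples_per_rev = 128, 100, 1000
print(beams * revolutions_per_s * samples_per_rev)  # 12,800,000 points per second
```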


Camera: lidar can only judge the three-dimensional outline of an object; it cannot finely identify color, texture, material, and similar content. A camera is therefore needed to collect color and texture information, such as traffic lights and traffic signs. In a word: whatever lidar is not very sensitive to, the camera supplements. Algorithms and image-recognition technology then turn this into accurate data for the vehicle's autonomous driving decisions.
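
As a toy illustration of that division of labor (a hedged sketch, not any vendor's actual pipeline), lidar supplies the geometry and the camera supplies the semantics, with a simple rule merging the two:

```python
from dataclasses import dataclass

@dataclass
class LidarDetection:
    distance_m: float      # range from time-of-flight
    height_m: float        # coarse 3D outline

@dataclass
class CameraDetection:
    label: str             # e.g. "red_light", "truck": semantics lidar lacks
    confidence: float

def fuse(lidar: LidarDetection, camera: CameraDetection) -> dict:
    """Trust lidar for geometry, the camera for color/texture semantics."""
    label = camera.label if camera.confidence >= 0.5 else "unknown"
    return {"distance_m": lidar.distance_m, "label": label}

print(fuse(LidarDetection(42.0, 3.1), CameraDetection("truck", 0.9)))
# {'distance_m': 42.0, 'label': 'truck'}
```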


2. Tesla's route

A self-driving car on Tesla's route includes only one kind of sensor: cameras.
Tesla adopts the "pure vision" approach, on the grounds that humans and animals make driving judgments the same way.

But that premise holds only because humans and animals have brains, with an extremely high level of visual processing.
Without lidar, the task of "perception" can never be completed 100%, and situations nobody thought of beforehand will be missed. What a camera sees against backlight is badly distorted. For example, the vehicle is driving forward when a white-and-blue truck suddenly crosses the road, backlit; the car's camera may mistake it for blue sky and white clouds, or for a sign above the road, and keep driving straight into it at full speed.

Compensation method: image-recognition technology, as the last line of defense, can accurately identify that the object ahead is a truck rather than a wide stretch of blue sky and white clouds, even against backlight, and then apply the brakes.


3. Route selection

At present, most car companies have chosen Google's route. Internet companies such as Google, Baidu, and Uber; traditional automakers such as Ford and General Motors; and new Chinese carmakers such as NIO (Weilai) and Xpeng (Xiaopeng) all follow it.


So why is Tesla such a maverick? Musk has given these two reasons:
1) Multiple sensors cooperating with each other sounds good, but their perception results sometimes contradict one another, which is difficult to resolve.
2) Lidar cannot be used alone; it needs matching high-precision maps, so the usable scenarios are very limited and it cannot work everywhere.

Of course, there's a reason Musk didn't articulate: lidar is expensive.
Early on, a lidar unit that met the requirements of autonomous driving cost as much as an entire Tesla, while a car camera costs only about 30 US dollars.


So which route will autonomous driving ultimately settle on?
That presumably depends on which is faster: the evolution of artificial intelligence, or the fall in lidar costs.
At the moment, Google's route looks more likely to win.

The domestic media follow this closely, and one sentence appears in everyone's reports: "2022 is the first year of mass-produced lidar." It means that lidar has finally broken through the small-scale testing stage and can officially be mass-produced as standard equipment on self-driving cars. According to incomplete statistics, at least 20 lidar-equipped models were announced for 2022, with some carrying three or even four units.


1.3 Technical Difficulties of L4 and Above Autonomous Driving

To realize L4-level autonomous driving and above, the difficulty across hardware, software, computing power, algorithms, and data does not grow linearly but exponentially.


In terms of hardware, autonomous driving requires a complete hardware suite, including perception and positioning devices such as lidar, millimeter-wave radar, cameras, ultrasonic sensors, and GPS positioning units. Chips and computing platforms, the brain of the self-driving car, are also indispensable.


In terms of software, the autonomous driving software stack consists of multiple subsystems that must work together:

High-precision map: provides the road environment and road topology. It is an electronic map of high precision (absolute and relative accuracy within 0.1 meters), high freshness, and high richness. High-precision maps contain not only static road information such as road types, curvatures, lane-line positions, and traffic signs, but also real-time dynamic information such as traffic flow and traffic lights. Like a human brain, a high-precision map builds an overall memory and cognition of space, helping the car anticipate complex conditions on the road surface and better avoid potential risks.

The high-precision positioning system provides the vehicle's accurate location on the road, relying on technologies including BeiDou satellites, lidar point-cloud positioning, and visual positioning.

The perception system provides information about surrounding obstacles and traffic participants, including their speed, position, orientation, and boundaries.

The decision-making and planning system combines the surrounding environment with the outputs of the systems above to make the final driving judgment: deciding when to yield or overtake, and how to plan the vehicle's driving trajectory.
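
As a hypothetical sketch (all class names and thresholds here are invented for illustration, not taken from the book), one planning cycle might wire the four subsystems together like this:

```python
class HDMap:
    def query(self, pose):               # static road info around the pose
        return {"curvature": 0.0, "speed_limit_mps": 16.7}

class Localizer:
    def estimate_pose(self):             # fused GNSS/BeiDou + lidar + vision
        return (0.0, 0.0, 0.0)           # x, y, heading

class Perceiver:
    def detect(self):                    # obstacles: speed/position/orientation
        return [{"type": "car", "distance_m": 35.0}]

class Planner:
    SAFETY_GAP_M = 30.0                  # invented threshold

    def plan(self, pose, road, obstacles):
        # slow down if anything is closer than a simple safety gap
        if any(o["distance_m"] < self.SAFETY_GAP_M for o in obstacles):
            return "decelerate and keep lane"
        return f"cruise at {road['speed_limit_mps']} m/s"

pose = Localizer().estimate_pose()
print(Planner().plan(pose, HDMap().query(pose), Perceiver().detect()))
# cruise at 16.7 m/s (the mock obstacle is 35 m away, outside the gap)
```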


1.4 Core Breakthroughs of Autonomous Driving


Breakthrough 1: Massive data input

The autonomous driving system requires continuous large-scale testing, data collection, and algorithm training on massive data, which provide the essential basis for vehicle perception, positioning, and route planning. The system must be "fed" huge amounts of data, and as the data volume grows, so does the accuracy of the driving algorithms.
Data are collected by vehicles equipped with cameras, millimeter-wave radar, lidar, and other sensors, road-testing across different regions, road conditions, and climates. A single test car can generate up to 10 TB of data in one day.
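
As a sanity check on that 10 TB figure, here is some back-of-the-envelope arithmetic; every per-sensor rate and the daily test hours are assumptions, since the text gives only the total.

```python
cameras_mb_s = 8 * 30   # assumed: eight cameras at ~30 MB/s each
lidar_mb_s = 100        # assumed lidar point-cloud rate
other_mb_s = 7          # assumed: radar, GNSS, vehicle bus, etc.
test_hours = 8          # assumed hours of road testing per day

total_tb = (cameras_mb_s + lidar_mb_s + other_mb_s) * 3600 * test_hours / 1e6
print(f"{total_tb:.1f} TB per car per day")  # ~10.0 TB, consistent with the figure above
```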

There are generally three modes for autonomous driving companies to collect data:

  • Asset-heavy model: the company purchases vehicles, retrofits them into self-driving cars, and drives them around to collect data;
  • Shadow mode: sensors are added to customers' cars to collect data from the users' real driving scenes (raising personal-privacy concerns);
  • Virtual simulation: the real traffic environment, physical rules, and operating logic are copied into a virtual world, which can greatly improve the efficiency of autonomous driving testing and reduce testing costs (a toy example follows below).
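
In the spirit of that third mode, here is a toy simulation loop that replays a scripted closing-gap scenario instead of driving real miles; every number in it is invented.

```python
def simulate(gap_m=50.0, ego_mps=15.0, lead_mps=10.0, dt=0.1, horizon_s=20.0):
    """Replay a scripted closing-gap scenario instead of driving real miles."""
    for step in range(int(horizon_s / dt)):
        gap_m += (lead_mps - ego_mps) * dt      # simple kinematics update
        if gap_m < 10.0:                        # a "difficult scene" is flagged
            return f"near-collision at t={step * dt:.1f}s (gap {gap_m:.1f} m)"
    return "scenario passed"

print(simulate())  # near-collision at t=8.0s (gap 9.5 m)
```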


Breakthrough 2: Data-driven algorithm iteration

Mining effective data from difficult scenes to drive algorithm iteration.


Breakthrough 3: Computing power supports algorithm training

The biggest computing-power challenge in autonomous driving comes from algorithm training. The complexity of algorithm models is growing exponentially, constantly approaching the upper limit of available computing power.


Breakthrough 4: Vehicle-road coordination, another way forward

Vehicle-road coordination can make autonomous driving safer and more economical. It alleviates the high cost of single-vehicle intelligence, provides more solid safety redundancy, and makes travel smarter and more convenient, for example by bringing the traffic-light countdown into the car.


2. Vehicle-road coordination


2.1 Intelligent collaboration


Vehicle-road collaboration is like street lights, and single-vehicle intelligence is like headlights. Working in synergy, the two can greatly lower the threshold for commercializing autonomous driving and accelerate the transformation from standalone intelligence to collaborative intelligence. - "Key Technologies and Prospects of Vehicle-Road Collaboration for Autonomous Driving" (white paper), 2021


Single-vehicle intelligence

Relies on the vehicle's own sensors (vision, millimeter-wave radar, lidar), computing units, and drive-by-wire systems to perform environmental perception, computation and decision-making, and control execution.


Vehicle-road coordination

Upgrades the roadside to the same level of intelligence as the vehicle, and organically connects the traffic participants (people, vehicles, roads, and cloud) through the Internet of Vehicles, thereby guaranteeing the safety of autonomous driving and accelerating the maturation of its applications.

There is a 90/10 rule in the field of autonomous driving: the last 10% of long-tail problems may require 90% of the effort, or even more. Vehicle-road coordination is a way to solve this long tail.


2.2 Composition of vehicle-road coordination


1. Communication platform

Vehicle-to-vehicle and vehicle-to-road communication require a network environment with low latency, high reliability, and fast access, to guarantee real-time information exchange between vehicles and the roadside.

There are two standards for the underlying communication technology:

  • Dedicated short-range communication (DSRC): essentially WiFi technology for low-mobility scenarios (free-flow toll payment, access control, fleet management, vehicle identification, etc.); its tested performance is unstable, reliability is poor in high-speed and high-density scenarios, and latency jitter is large.

  • C-V2X: evolved from cellular network communication technology, with strong mobility support and reliability. Most importantly, it is compatible with the 5G evolution path and can support autonomous driving: the large-bandwidth, low-latency, high-speed wireless environment provided by 5G greatly improves information and data transmission between vehicles (a toy freshness check follows below).
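
To illustrate why the latency budget matters, here is a toy message-freshness check; the message fields and the 100 ms budget are assumptions for illustration, not figures from any C-V2X specification.

```python
import time
from dataclasses import dataclass

LATENCY_BUDGET_S = 0.1   # assumed end-to-end budget

@dataclass
class V2XMessage:
    sender_id: str
    sent_at: float       # sender-side timestamp, seconds since epoch
    payload: dict

def fresh_enough(msg: V2XMessage, now: float) -> bool:
    """Discard messages that arrive too late to act on safely."""
    return (now - msg.sent_at) <= LATENCY_BUDGET_S

msg = V2XMessage("rsu-17", time.time(), {"light": "red", "countdown_s": 4})
print(fresh_enough(msg, time.time()))  # True when delivered within the budget
```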

2. Terminal layer

The terminal layer is divided into vehicle terminals and roadside terminals.

1) Vehicle terminal

It mainly includes communication chips, communication modules, terminal equipment, the V2X protocol stack, and V2X application software.
The vehicle terminal is responsible for real-time processing of massive data and fusion of multi-sensor data on board, ensuring that the vehicle drives stably and safely in all kinds of complex situations. Using today's mainstream LTE-V2X and the new generation of 5G-V2X communication technology, the vehicle terminal enables comprehensive information exchange between vehicles, between vehicle and road, between vehicle and pedestrian, and between vehicle and cloud. The terminal is thus the hub connecting the in-vehicle network with the external network.

On-board unit (OBU): the vehicle's central communication unit and one of the key devices for realizing V2X communication between the car and the outside world. It connects with roadside equipment to read, receive, and send data.


2) Intelligent roadside

Responsible for collecting road-condition information and for edge-side computing: digital perception of road conditions, plus deployment of cloud computing power close by. The roadside unit collects, transmits, and processes traffic information. It is the core infrastructure of the vehicle-road coordination system and the information-exchange hub for sensing the road network and its participants.

Roadside communication unit: responsible for communicating with on-board units and roadside computing units, roughly equivalent to a mobile base station.

Roadside computing unit: acts as an edge-side brain, receiving information from roadside perception units, on-board units, and other roadside computing units, then performing analysis, detection, tracking, recognition, and other processing. Its core modules include collection and sensing, computation and decision-making, communication convergence, security authentication, and status monitoring.

Roadside perception unit: includes radar and cameras, plus environmental information sources such as traffic lights and signs.


3. MEC edge computing

Edge computing refers to a computing model that pushes tasks such as computation, storage, and communication to the edge of the network, close to the application scene, and provides intelligent services nearby.

Thanks to its short-distance deployment, the edge computing server can obtain road-condition information promptly and distribute it to different systems according to its type:
if it is an emergency, it is sent directly to vehicle and roadside equipment, reminding all parties to react in time;
data that may affect the overall situation are reported to the central cloud, which decides whether to distribute them further, and meanwhile the coordinating central cloud draws the overall traffic-situation map (a hypothetical routing rule is sketched below).
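
A hypothetical routing rule for that dispatch logic might look like the following; the event fields and categories are invented for illustration.

```python
def route_event(event: dict) -> str:
    """Decide where an observed road event should go, per the text above."""
    if event.get("urgent"):                 # e.g. pedestrian on the roadway
        return "push directly to nearby vehicles and roadside devices"
    if event.get("network_wide_impact"):    # e.g. a closure affecting city flow
        return "report to the central cloud for wider distribution"
    return "handle locally at the edge"

print(route_event({"type": "pedestrian_on_road", "urgent": True}))
print(route_event({"type": "lane_closure", "network_wide_impact": True}))
```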


4. Cloud control platform

The cloud control platform includes a cloud control basic platform and a cloud control application platform.

The cloud control platform provides device management and control, data fusion and cloud-to-cloud data exchange, and publication of global event information; it serves intelligent connected vehicles and autonomous vehicles of different levels, and supplies management and service organizations with dynamic base data on vehicle operation, basic equipment, traffic environment, and traffic management. It is the cloud support platform underpinning the practical application requirements of intelligent connected vehicles.


2.3 Integration of Autonomous Driving and Vehicle-Road Collaboration



1. Full traffic element sensing


Single-vehicle autonomous driving is limited by factors such as the viewing angle of on-board sensors and the vehicle's own real-time motion. For example, an autonomous vehicle measures the speed of low-speed roadside vehicles inaccurately, such as a car slowly reversing at the roadside or pulling out of a roadside space.

Perception and positioning of all traffic elements includes static blind-spot/occlusion cooperative perception, beyond-line-of-sight cooperative perception between vehicles, and roadside low-speed vehicle detection. It can assist single-vehicle autonomous driving in avoiding the defects described above.


2. Road traffic incident perception


A single self-driving vehicle has a limited perception angle, and accurate detection of low obstacles is only achieved at fairly short range, which easily causes the vehicle to brake suddenly.

Road traffic event perception includes recognition of illegal parking, stalled ("dead") vehicles, queuing, and objects spilled onto the road (traffic cones, cargo).


3. Fusion perception of roadside signal lights


A single self-driving vehicle reads signal lights through visual AI, but this method still has many shortcomings, and its ability to recognize signal lights is limited, specifically:

1) special-shaped signal lights cannot be recognized;
2) signal lights blocked by vehicles or structures ahead cannot be seen;
3) recognition is easily constrained by the external environment, especially backlight, fog, dust, and night, and the recognizable data dimensions are limited;
4) countdown information is not recognized accurately.


Fusion perception of roadside signal lights transmits intersection signal data to the vehicle terminal, so that the vehicle can make timely driving decisions:

1) Even if the view ahead is blocked, the vehicle can still make correct predictions using the real-time light state and release-countdown data returned by roadside equipment;
2) Besides the real-time light state and countdown data, the signal controller's operating plan can even be sent to the vehicle terminal. The benefit: after passing one intersection, the vehicle can combine the next intersection's signal plan with the distance between the two intersections to predict and adjust its speed, realizing a "green wave" for the individual vehicle (a back-of-the-envelope sketch follows below).
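
Here is a back-of-the-envelope sketch of that green-wave speed suggestion, assuming the roadside has already told us the distance to the next intersection and when its green window opens and closes; all numbers are illustrative.

```python
def green_wave_speed_mps(distance_m, green_start_s, green_end_s, v_max_mps=16.7):
    """Pick a speed that lands the car inside the next green window, if any."""
    earliest_arrival_s = distance_m / v_max_mps
    if earliest_arrival_s > green_end_s:
        return None                        # cannot make this green; wait for the next
    target_s = max(green_start_s, earliest_arrival_s)
    return distance_m / target_s

v = green_wave_speed_mps(500.0, green_start_s=40.0, green_end_s=70.0)
print(f"hold ~{v:.1f} m/s to arrive just as the light turns green")  # ~12.5 m/s
```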


2.4 Challenges faced by vehicle-road coordination


Challenge 1: the complex system formed by the deep integration of autonomous driving and vehicle-road coordination needs a functional-safety and expected-functional-safety (SOTIF) framework built on systems engineering.

This complex system must solve a series of problems: large-scale mobile access, multi-level interoperability, low latency, and high safety and reliability, especially across complex scenarios. The system architecture, functions, application scenarios, and service content must be clarified, and explicit functional, performance, data, and safety requirements must be set for system facilities, to ensure that vehicle-road collaborative autonomous driving is safe and reliable.


Challenge 2: road intelligence and vehicle intelligence are developing out of step. High-level intelligent roads must be built to serve vehicle-road collaborative autonomous driving, intelligent traffic management, and smart-city construction.


Challenge 3: more efficient and economical vehicle-road communication solutions must be explored, to solve a series of problems such as the low penetration rate of equipped vehicles and the difficulty of large-scale commercial rollout.


Challenge 4: vehicle-road collaborative autonomous driving requires cross-industry, cross-region interconnection, and continuous exploration of application-service innovation and business-model innovation. On interoperability, many factors still influence or restrict its rollout, such as open use of vehicle data, reuse of roadside perception facilities, access to traffic-signal control data, and opening up of road toll systems; these require in-depth research and step-by-step progress.


Challenge 5: policies, regulations, and standards are the key factors leading and supporting the development of vehicle-road collaborative autonomous driving. Research, formulation, and revision of relevant laws and regulations should be carried out in advance, matched to the different stages of its development.
