L3 level autonomous driving

On March 9, 2020, the Ministry of Industry and Information Technology (MIIT) announced on its official website that the recommended national standard "Taxonomy of Driving Automation for Vehicles" had been approved, with implementation planned for January 1, 2021.
A classification standard for autonomous driving developed to China's own specifications has finally arrived.

 

Figure: China's autonomous driving classification (publicized version)


The industry response was enthusiastic. The day after the standard was announced, Changan officially unveiled a production-ready L3 conditional autonomous driving system and said it would debut on the newly released UNI-T model. Changan's leader, Zhu Huarong, personally appeared as a livestream host, declaring that autonomous vehicles can now let users take their "feet off," "hands off," and "eyes off" the driving task.

With Changan firing the starting gun and planting its flag for 2020, other automakers with advanced automated-driving programs have quietly accelerated as well: giants such as Geely, SAIC, and GAC, along with newcomers such as Human Horizons and Tesla that champion intelligent, autonomous driving.

In the autonomous driving hierarchy, what makes L3 so alluring that carmakers are racing toward it?

L3


In the MIIT's published standard, L3 is characterized as "conditionally automated driving": within the operating conditions specified by the automated driving system, the vehicle itself handles steering, acceleration and deceleration, and the detection of and response to road conditions. Under those conditions, the driver may fully hand over the driving task to the vehicle, but must take over when required.
In other words, in the L3 automated driving state the driver can go not only "hands off" and "feet off" but also "eyes off": he no longer needs to supervise the vehicle at all times, only to remain able to take over the driving task when requested.
In this sense, L3 is a watershed. Below it, the driver is the responsible party and the machine merely assists; above it, the machine is the responsible party and the driver gradually withdraws from the driving task. Once past L3, a broad, smooth road to full autonomy lies ahead.
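The level-by-level shift in responsibility described above can be sketched as a small lookup. This is purely illustrative: the level names follow the SAE-style taxonomy summarized here, and the driver/system split is a simplification, not the standard's exact legal wording.

```python
# Illustrative sketch of SAE-style driving automation levels and which party
# is primarily responsible for the driving task at each level. This is a
# simplification of the taxonomy described above, not the exact text of
# China's national standard.

LEVELS = {
    0: ("No automation",          "driver"),
    1: ("Driver assistance",      "driver"),
    2: ("Partial automation",     "driver"),   # driver must supervise at all times
    3: ("Conditional automation", "system"),   # driver must take over on request
    4: ("High automation",        "system"),
    5: ("Full automation",        "system"),
}

def responsible_party(level: int) -> str:
    """Return who is primarily responsible for driving at a given level."""
    return LEVELS[level][1]

if __name__ == "__main__":
    # L3 is the watershed: responsibility flips from driver to system.
    for lvl in range(6):
        print(lvl, LEVELS[lvl][0], "->", responsible_party(lvl))
```

The table makes the watershed visible at a glance: the responsible party flips from "driver" to "system" exactly at level 3.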
However, it is precisely this transitional nature that makes L3 controversial; some even dismiss it as neither fish nor fowl. L3 gives the vehicle room to drive itself, yet still requires the human driver to remain ready to take over. This adds considerable uncertainty to the driving process, and dividing and defining human-machine responsibility becomes a problem. Google, finding the problem hard to resolve, skipped L3 altogether and set its goal directly at L4 (SAE).

L3 automated driving scenario and function definition: full-speed-range automated driving on all road sections covered by high-precision maps

The L3 function and scenario is defined as full-speed-range automated driving on all HD-map-covered road sections: on China's highways and urban expressways covered by high-precision maps, the vehicle can drive itself across the full 0–120 km/h speed range.

The biggest difference between L3 and L2 is that the system can replace the human as the driving subject, freeing the driver's hands and feet, and full-speed-range automation across whole road sections demands strong perception technology. L2 systems rely mainly on "radar + camera" for environment perception, but those sensors have limited capability and are easily degraded by severe weather, which cannot meet L3's basic needs. Achieving L3 therefore requires a comprehensive upgrade of the perception system's hardware and software.

High-precision (HD) maps are a signature technology for autonomous driving, offering more accurate positioning, stronger perception, and earlier prediction.

1. More accurate positioning: positioning accuracy reaches 0.1 meters, giving real-time, lane-level localization of the vehicle.

2. Stronger perception: in extreme conditions such as severe weather, when onboard sensors fail, map data can supplement perception of the road 1 km ahead.

3. Earlier prediction: the path 1 km ahead can be predicted in real time, with map data providing long-range "beyond-line-of-sight" perception past the sensors' physical boundaries.

These advantages of HD maps effectively compensate for the shortcomings of onboard sensors, making HD maps the key to achieving L3 autonomous driving.
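The fallback role described above — live sensors first, HD-map data for the 1 km ahead when sensors degrade — can be sketched roughly as follows. The function names and the confidence threshold are illustrative assumptions, not part of any real production stack.

```python
# Toy sketch of perception fallback: prefer live sensor detections, but when
# sensor confidence collapses (e.g. heavy rain, fog, or sandstorms), fall back
# to static HD-map data for the road ahead. All names and the 0.5 threshold
# are illustrative assumptions.

MAP_LOOKAHEAD_M = 1000          # HD map supplies ~1 km of road geometry ahead
SENSOR_CONF_THRESHOLD = 0.5     # assumed cutoff below which sensors count as failed

def perceive(sensor_confidence: float, sensor_view: dict, hd_map_view: dict) -> dict:
    """Pick the perception source to feed into path planning."""
    if sensor_confidence >= SENSOR_CONF_THRESHOLD:
        return {"source": "sensors", **sensor_view}
    # Sensors degraded: use the HD map's stored geometry for the next 1 km.
    return {"source": "hd_map", "lookahead_m": MAP_LOOKAHEAD_M, **hd_map_view}

if __name__ == "__main__":
    clear = perceive(0.9, {"lane": 2}, {"curvature": 0.01})
    foggy = perceive(0.2, {"lane": None}, {"curvature": 0.01})
    print(clear["source"], foggy["source"])  # sensors, then hd_map
```

The point of the sketch is redundancy: the map is not a second sensor but a static data source the planner can still trust when live perception is unreliable.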

Combining top-tier "high-precision radar + Mobileye EyeQ4 camera" with HD maps realizes what is claimed to be the world's only "triple perception" technology, providing the redundant perception system that is the entry-ticket core technology for L3 and higher levels of autonomous driving.

The system uses the latest-generation Mobileye EyeQ4 chip, version V9.3.1, which sees farther, reacts faster, and adapts better.

Sees farther: detects the position and speed of multiple vehicles and pedestrians within roughly 200 meters, 40 meters beyond the industry level — an industry first;

Reacts faster: pedestrians can be recognized from only 50% of their features, and the braking reaction time is just 0.3 seconds, roughly three times faster than a human driver;

Adapts better: copes better with complex weather such as rain, fog, sandstorms, and haze, adapting to changing scenarios and effectively improving safety.
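The 0.3-second figure above is easy to put in perspective with a little arithmetic. The 0.9 s human reaction time used here is simply what the "three times faster" claim implies (3 × 0.3 s); real human reaction times vary.

```python
# Reaction distance = speed * reaction time: the distance covered before
# braking even begins. The 0.9 s human figure is inferred from the "3x
# faster" claim above (3 * 0.3 s), not a measured value.

def reaction_distance_m(speed_kmh: float, reaction_s: float) -> float:
    """Distance travelled (m) during the reaction time at a given speed."""
    return speed_kmh / 3.6 * reaction_s

if __name__ == "__main__":
    v = 120  # top of the L3 full-speed range, km/h
    system = reaction_distance_m(v, 0.3)
    human = reaction_distance_m(v, 0.9)
    print(f"At {v} km/h: system travels {system:.0f} m, human {human:.0f} m "
          f"before braking starts ({human - system:.0f} m saved).")
```

At 120 km/h (33.3 m/s), the chip's 0.3 s reaction costs 10 m of travel versus 30 m for the implied human baseline — a 20-meter head start on braking.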

Compared with older versions of the EyeQ4 chip, the V9.3.1 version recognizes more accurately, computes faster, and operates more safely.

More accurate recognition: recognition of vehicle position information is further improved; by fusing image-recognition information, radar sensor errors are supplemented and corrected, avoiding false alarms caused by wrong position information;

Faster computation: the chip's computing power is further improved; even in complex scenarios that generate large volumes of sensor data to process, the environment can be judged quickly and accurately, avoiding slow processing or even memory overflow, so the system can decide and react immediately;

Safer operation: safety protection in emergencies is further improved, incorporating the latest crash-test functional safety requirements to ensure the system keeps working normally after a vehicle accident, avoiding circuit instability or even power loss that would compromise safety.

 


Origin www.cnblogs.com/wujianming-110117/p/12729912.html