Introduction to autonomous driving 1

Source of this article:

AI Future in Progress / Kai-Fu Lee, Chen Qiufan. Hangzhou: Zhejiang People's Publishing House, May 2022.

ISBN 978-7-213-10162-5

An autonomous vehicle, also known as a driverless vehicle, is a vehicle that can complete driving tasks under computer control, without active human operation.

Driving is a complex, system-level task: it takes humans an average of about 45 hours of practice to learn how to drive a car.

The entire driving process includes:

Perception (observing with both eyes, listening with both ears);

Navigation planning (associating the physical environment with the route in the mind or the specific location on the navigation map, and determining how to get from point A to point B);

Reasoning (predicting the intentions and likely actions of pedestrians and other vehicle drivers on the road);

Decision-making (deciding, in light of traffic rules and the actual situation, what driving action to take, such as deciding to slow down immediately after being alerted that you are speeding);

Vehicle control (accurately translating the brain's intentions into physical actions such as turning the steering wheel and applying the brakes).

Autonomous driving uses AI to replace the human in the driving loop: neural networks take the place of the human brain, and mechanical hardware executes actions instead of human hands and feet. AI perception means understanding the surrounding environment through cameras, lidar and other sensors; AI navigation planning matches points on the three-dimensional road with points on a high-precision map, one by one, to complete route planning; AI reasoning uses algorithms to predict the intentions and actions of pedestrians and other vehicles; AI decision-making (for example, what a vehicle should do when it detects an obstacle, and what it should do after the obstacle is removed) relies on expert-written rules or statistical estimation. The development of autonomous driving has matured gradually, from initially relying on the assistance of human drivers toward eventually achieving fully driverless operation.
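To make this division of labor concrete, here is a minimal Python sketch of such a modular pipeline (perception, prediction, rule-based decision). Everything in it, the class names, the 1-second constant-velocity prediction, and the distance thresholds, is an assumption chosen for illustration, not a description of any real driving stack.

```python
# Illustrative sketch of the modular pipeline described above:
# perception -> prediction -> decision. All names are hypothetical.
from dataclasses import dataclass
from typing import List


@dataclass
class Obstacle:
    kind: str          # e.g. "pedestrian", "vehicle"
    distance_m: float  # distance ahead along the planned path
    speed_mps: float   # closing speed toward our vehicle


@dataclass
class VehicleCommand:
    steering_deg: float
    throttle: float    # 0.0 .. 1.0
    brake: float       # 0.0 .. 1.0


def perceive(sensor_frames: List[dict]) -> List[Obstacle]:
    """Fuse camera/lidar frames into a list of obstacles (stubbed here)."""
    return [Obstacle(**frame) for frame in sensor_frames]


def predict(obstacles: List[Obstacle]) -> List[Obstacle]:
    """Very rough 1-second constant-velocity prediction of each obstacle."""
    return [
        Obstacle(o.kind, max(o.distance_m - o.speed_mps * 1.0, 0.0), o.speed_mps)
        for o in obstacles
    ]


def decide(predicted: List[Obstacle]) -> VehicleCommand:
    """Simple rule-based decision: brake if anything gets too close."""
    closest = min((o.distance_m for o in predicted), default=float("inf"))
    if closest < 10.0:
        return VehicleCommand(steering_deg=0.0, throttle=0.0, brake=1.0)
    if closest < 30.0:
        return VehicleCommand(steering_deg=0.0, throttle=0.2, brake=0.0)
    return VehicleCommand(steering_deg=0.0, throttle=0.5, brake=0.0)


if __name__ == "__main__":
    frames = [{"kind": "pedestrian", "distance_m": 18.0, "speed_mps": 1.5}]
    print(decide(predict(perceive(frames))))
    # VehicleCommand(steering_deg=0.0, throttle=0.2, brake=0.0)
```

A real system would of course use learned perception and prediction models and far richer decision logic; the point here is only the hand-off between stages.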

Current stage: passenger scenarios are gradually moving from L2 to L3, while in commercial scenarios L4 will be implemented first.
Ultimate form: fully autonomous driving (taking into account both travel safety and travel efficiency).
Ultimate goal: to transform the traditional way humans travel.

SAE International (the Society of Automotive Engineers) divides autonomous driving into six levels, L0 to L5, based on the degree of AI participation in driving, as follows.

L0 ("human driving without automation"): Human drivers assume all driving tasks, and AI will observe the road and remind the driver when necessary.

L1 ("humans can't let go" assisted driving): With the permission of the human driver, AI can complete specific driving operations, such as steering.

L2 ("human hands-off" partial autonomous driving): AI can take on multiple driving tasks, such as steering, braking, and accelerating, but human drivers still need to monitor the driving environment and take over the vehicle when necessary.

L3 ("eyes off" conditional automation): AI can take on most driving tasks, but the human driver must take over when the AI encounters a situation it cannot handle and requests help. (Some people are skeptical of L3, arguing that a sudden human takeover of the vehicle increases the likelihood of danger rather than reducing it.)

L4 ("mind off" highly automated driving): AI can take over the vehicle completely for the entire trip, but only when the vehicle is in an environment the AI can fully understand and handle, such as city roads or highways covered by high-precision maps.

L5 (fully autonomous driving, no steering wheel needed): No human driver is required to participate in driving, regardless of the environment the vehicle is in. More concretely, autonomous driving from L0 to L3 can be viewed as an add-on feature of a new car, an extra AI tool for the human driver; its role in the future transformation of transportation is therefore limited.
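As a quick illustration of how these levels might be encoded in software (for example, for feature gating in a driving stack), here is a tiny sketch; the enum names and the helper function are my own shorthand for the descriptions above, not the official SAE J3016 wording.

```python
# Compact encoding of the six SAE levels described above (illustrative only).
from enum import IntEnum


class SAELevel(IntEnum):
    L0_NO_AUTOMATION = 0           # human does everything; system may warn
    L1_DRIVER_ASSIST = 1           # AI handles one task, e.g. steering
    L2_PARTIAL_AUTOMATION = 2      # AI steers/brakes/accelerates, human monitors
    L3_CONDITIONAL_AUTOMATION = 3  # human must take over on request
    L4_HIGH_AUTOMATION = 4         # fully autonomous inside a defined domain
    L5_FULL_AUTOMATION = 5         # fully autonomous everywhere, no steering wheel


def driver_must_monitor(level: SAELevel) -> bool:
    """Per the descriptions above, L0-L2 still require constant human monitoring."""
    return level <= SAELevel.L2_PARTIAL_AUTOMATION


print(driver_must_monitor(SAELevel.L3_CONDITIONAL_AUTOMATION))  # False
```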

Starting from L4, vehicles begin to have a "brain" of their own, which will have a revolutionary and profound impact on human transportation. It is conceivable that in the future, L4 self-driving buses will shuttle passengers back and forth along fixed routes, while L5 self-driving taxis can be hailed through ride-hailing apps (such as Didi Chuxing) and arrive quickly.

When will true autonomous driving appear?

Today, autonomous driving from L0 to L3 has already been implemented in production vehicles. Since the end of 2018, some L4 autonomous vehicles have also undergone road testing and trial operation in limited areas of some cities. For now, however, L5 autonomous driving (and less-restricted L4 autonomous driving) remains out of reach.

One of the main difficulties in realizing L5 autonomous driving is that the AI system needs to be trained on a huge amount of data, and that data must come from ever-changing real driving scenarios. The required scenarios fall into many categories, the volume of data is enormous, and its dimensionality is very wide; collecting data on every object on the road under every possible circumstance (how it is positioned, which way it is moving, and so on) is simply unrealistic.

Of course, there are ways to deal with some of these "long tail" situations. For example, we can insert virtual children chasing each other, slowly walking elderly people, or even puppies that suddenly dart out into different scenes, using synthetic simulation to expand data coverage and diversity, and then train the AI program on these simulated scenarios, much as Chamar in "The Holy Driver" hones his driving skills through driving games. We can also specify certain rules for the AI system in advance (for example, at an intersection marked as a four-way stop, the order of travel is first come, first served; see the sketch below) instead of having it relearn traffic rules from data. However, these solutions are not a panacea: the quality of synthetic data cannot match that of real data, and hand-written rules may be wrong or contradict one another.
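As a concrete illustration of such a hand-written rule, the sketch below hard-codes first-come, first-served right of way at a four-way stop, so the learned model never has to rediscover this rule from data; the class and method names are invented for this example.

```python
# Hand-written traffic rule: first come, first served at a four-way stop.
from collections import deque


class FourWayStop:
    def __init__(self):
        self.queue = deque()  # vehicles in order of arrival

    def arrive(self, vehicle_id: str):
        if vehicle_id not in self.queue:
            self.queue.append(vehicle_id)

    def may_proceed(self, vehicle_id: str) -> bool:
        # The vehicle at the head of the queue has the right of way.
        return bool(self.queue) and self.queue[0] == vehicle_id

    def proceed(self, vehicle_id: str):
        if self.may_proceed(vehicle_id):
            self.queue.popleft()


intersection = FourWayStop()
intersection.arrive("car_A")
intersection.arrive("car_B")
print(intersection.may_proceed("car_B"))  # False: car_A arrived first
intersection.proceed("car_A")
print(intersection.may_proceed("car_B"))  # True
```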

The biggest challenge in realizing L5 autonomous driving is that a small mistake may cause irreversible consequences. If Taobao's AI fails to recommend the right product, it is no big deal; but if a self-driving system makes an error, it could cost a life.

Faced with these objective challenges, many experts believe it will take at least 20 years to achieve L5 autonomous driving. I believe a more effective way to speed up the process is to boldly transform existing urban roads and related infrastructure. We usually imagine L5 autonomous driving on top of today's urban roads. But if we had "enhanced urban roads" embedded with sensors and wireless communication devices, could the road actively "tell" vehicles about dangers ahead, or let vehicles "see" road conditions beyond their own field of vision? If a new city's roads were planned in two layers, one for vehicles and one for pedestrians (so that vehicles cannot hit people), would the driving environment for fully autonomous vehicles be completely different? By rebuilding infrastructure to minimize the chance of pedestrians appearing around vehicles, we can significantly improve the safety of L5 autonomous vehicles and get them onto the road sooner.

On such upgraded, enhanced urban roads, a vehicle's autonomous driving system can communicate seamlessly with the information flow of the real environment, so vehicles can be dispatched in real time, avoiding thrilling scenes like those depicted in "The Holy Driver", such as plowing into a marathon crowd or racing an ambulance. If future urban roads are built into intelligent transportation networks and matched with high-performance autonomous vehicles, the era of L5 autonomous driving may arrive earlier.
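To illustrate what an "enhanced road" interaction might look like, here is a hedged sketch in which a roadside unit broadcasts a hazard message and a vehicle lowers its speed accordingly. The message format is invented purely for illustration and does not follow any real V2X standard.

```python
# Toy roadside-to-vehicle hazard message (not a real V2X protocol).
import json
from dataclasses import dataclass, asdict


@dataclass
class RoadsideHazardMessage:
    road_segment_id: str
    hazard_type: str        # e.g. "stalled_vehicle", "crowd", "flooding"
    distance_ahead_m: float
    advised_speed_kph: float


def broadcast(message: RoadsideHazardMessage) -> str:
    """Serialize the hazard message as the roadside unit would broadcast it."""
    return json.dumps(asdict(message))


def on_message(payload: str, current_speed_kph: float) -> float:
    """Vehicle-side handler: slow down if the road advises a lower speed."""
    msg = RoadsideHazardMessage(**json.loads(payload))
    return min(current_speed_kph, msg.advised_speed_kph)


payload = broadcast(RoadsideHazardMessage("ring-road-7", "crowd", 400.0, 20.0))
print(on_message(payload, current_speed_kph=60.0))  # 20.0
```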

It should be noted that even when AI-driven L5 autonomous driving becomes mature and safe, there will still be situations AI cannot handle perfectly. For example, suppose a sudden explosion destroys a road, but the electronic navigation map, which has not been updated in real time, still directs the self-driving vehicle to continue forward. What should the vehicle do?

Or, where should autonomous vehicles go when an extreme natural disaster such as an earthquake occurs? In these situations, the best solution is to immediately "summon" a professional human driver to take over the vehicle. Of course, rescuers cannot be teleported across time and space to distant places, but if we "copy and paste" the current traffic scene to a remote operations center, a human driver can remotely control the vehicle from an independent "remote cockpit" there. Augmented reality (AR) technology can project the vehicle's surroundings (captured by the cameras on the self-driving vehicle) and send these distant views to the remote cockpit.

Next, the actions the human driver takes based on that distant view (such as turning the steering wheel or pressing the accelerator) are transmitted back to the autonomous driving system to control the vehicle. This is how Chamar in the story "The Holy Driver" is able to drive a real vehicle from a remote cockpit. In this process, transmitting high-fidelity video over long distances with minimal delay requires a great deal of bandwidth, but this will not be a problem in the future.
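As a sketch of why latency matters in this remote-takeover loop, the toy code below timestamps each remote command and refuses to apply commands that arrive too late; the 50 ms budget and all names are assumptions made for illustration.

```python
# Toy freshness check for remotely issued driving commands.
import time
from dataclasses import dataclass
from typing import Optional

MAX_COMMAND_AGE_S = 0.05  # assumed 50 ms freshness budget for remote control


@dataclass
class RemoteCommand:
    steering_deg: float
    throttle: float
    brake: float
    sent_at: float          # Unix timestamp stamped by the remote cockpit


def apply_remote_command(cmd: RemoteCommand, now: Optional[float] = None) -> bool:
    """Return True if the command is fresh enough to be applied to the vehicle."""
    now = time.time() if now is None else now
    if now - cmd.sent_at > MAX_COMMAND_AGE_S:
        # Too much network delay: ignore the stale command and let the
        # on-board system fall back (e.g. slow down and pull over).
        return False
    return True


cmd = RemoteCommand(steering_deg=-5.0, throttle=0.1, brake=0.0, sent_at=time.time())
print(apply_remote_command(cmd))                         # True  (fresh)
print(apply_remote_command(cmd, now=cmd.sent_at + 0.2))  # False (200 ms stale)
```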

Today, 5G networks have begun to demonstrate this potential. At the pace of roughly one network generation every ten years, we will enter the 6G era by around 2030, and the low latency required for this kind of remote driving will no longer be a bottleneck. L5 autonomous driving, enhanced urban roads, and 6G networks that carry AR video and connect to remote operations centers will converge technologically, with experimental deployments expected around 2030. We predict that, as the technology iterates, L5 autonomous driving will be safely deployed at scale around 2040. Note, however, that this prediction assumes the related questions of ethics, responsibility, and obligation have already been resolved.

  • Zhongli’s autonomous forklift (AMR) can drive completely autonomously in a warehouse or factory
    (Photo source: Zhongli Forklift)

Uisee Technology's autonomous freight vehicles transport luggage driverlessly between the airport and the passenger terminal
(Photo source: Hong Kong International Airport)
In the next few years, autonomous driving technology from L0 to L4 will gradually be applied in increasingly complex scenarios. During this process, AI systems will continue to collect data and improve, promoting the maturation of L5 autonomous driving technology.

In fact, the simplest self-driving technology is already part of our lives: automated warehouse robots, automated forklifts, and automated guided vehicles, most of which operate indoors in specific industrial scenarios, while autonomous freight trucks and fixed-route autonomous shuttles have also been deployed in mines and airports.

In addition, autonomous driving technology already outperforms human drivers in some highly predictable environments. Vehicles that have adopted the technology include self-driving trucks on highways and airport-to-hotel shuttles or self-driving buses that follow fixed routes. Each of these deployment scenarios will collect more data to improve the AI algorithms and cover more of the possible situations, reducing the probability of accidents and laying a more solid foundation for the arrival of the L5 autonomous driving era.

For the current status of the industry, see the summary below:
Summary | The market is gradually moving from an early stage to maturity, and data accumulation is accelerating technological breakthroughs

✓ The electrification trend is set; the next stage of competition in the automotive industry will center on intelligence, with intelligent driving at its core. 1) Gradual route: L3 functions are being introduced first, currently limited to highways and urban expressways (areas covered by high-precision maps); the next phase will mainly expand into urban areas while improving functional continuity. 2) Leap-forward route: commercial scenarios will land first, with moving goods coming before moving people and low speed before high speed; leading companies are opening up their technical solutions, using higher-level capabilities to empower lower-level applications.

✓ We are optimistic about gradual hardware pre-installation plus accelerated leapfrogging in specific scenarios, and believe the impact of regulatory lag is limited. With car companies and technology/Internet giants all investing heavily, breakthroughs in smart driving are arriving faster than expected, and 2022 will be a key year for the rollout of L3.

Highlight 1: massive data accumulation and rapid software iteration to cover long-tail corner cases.
1) Software development is shifting from the logic-driven 1.0 era to the data-driven 2.0 era; high-quality, multi-dimensional data has become the new "fossil fuel" of smart cars.
2) Beyond computing power, algorithms, and data, automatic annotation and data closed-loop capabilities are also core (a toy sketch of such a loop appears after this summary).

Highlight 2: deep hardware pre-installation, upgraded chip computing power, and front-mounted mass production of lidar.
1) High-performance perception sensors: lidar + high-definition cameras + 4D millimeter-wave radar + high-precision positioning; the perception solutions of the gradual and leap-forward routes are gradually converging.
2) Large computing chips and computing platforms are the "digital engine" of intelligent driving; NVIDIA holds a clear short-term advantage thanks to its high openness and stable toolchain.

Highlight 3: two-way interaction between high- and low-level systems, with high-level solutions empowering downward and low-level systems feeding data back.

Highlight 4: software charges, service subscriptions, and business-model transformation. Services run through the entire life cycle of the car: Tesla's 2020 annual report recorded related deferred revenue of US$1.93 billion, and XPeng's XPilot 3.0 subscription revenue is also expected to exceed 50 million yuan. Software revenue will become an important component of car companies' revenue in the future.
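To illustrate what a "data closed loop" might look like in its simplest form, here is a toy sketch in which hard cases are mined from drive logs and auto-labeled for the next training round; every function here is a stand-in for illustration, not any vendor's pipeline.

```python
# Toy data closed loop: mine corner cases, auto-label them, count new samples.
from typing import Dict, List


def mine_corner_cases(drive_logs: List[Dict]) -> List[Dict]:
    """Keep frames where the model was uncertain or the driver intervened."""
    return [f for f in drive_logs
            if f["model_confidence"] < 0.5 or f["driver_took_over"]]


def auto_label(frames: List[Dict]) -> List[Dict]:
    """Stand-in for automatic annotation (e.g. with a larger offline model)."""
    return [{**f, "label": "needs_review"} for f in frames]


def closed_loop_iteration(drive_logs: List[Dict]) -> int:
    """One pass of the loop; returns how many new training samples were added."""
    new_samples = auto_label(mine_corner_cases(drive_logs))
    # In a real pipeline these samples would be reviewed and used to retrain.
    return len(new_samples)


logs = [
    {"model_confidence": 0.9, "driver_took_over": False},
    {"model_confidence": 0.3, "driver_took_over": False},
    {"model_confidence": 0.8, "driver_took_over": True},
]
print(closed_loop_iteration(logs))  # 2
```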

Returning to the impact of L5 autonomous vehicles

The successful arrival of L5 autonomous vehicles on the road will bring a disruptive transportation revolution: on-demand vehicles will deliver passengers to their destinations more conveniently, at lower cost, and more safely. Imagine the following scenario. Your calendar shows a meeting you need to attend in an hour, so you book a Didi Chuxing self-driving taxi in advance through the app. Didi Chuxing's AI algorithms will have already dispatched self-driving fleets to the places where demand for rides is predicted.

For example, at the end of a concert, a fleet can be sent to wait near the venue. The intelligent dispatching system uses algorithms to find the solution that minimizes the fleet's empty-running rate, taking into account users' waiting time, vehicles' empty time, and charging time. Without human drivers, a fully autonomous fleet managed by AI can achieve excellent utilization. Shared self-driving cars also save the cost of hiring human drivers, which could cut costs for consumers by nearly 75 percent, further encouraging people to travel in shared self-driving cars instead of buying their own. A human driver may need to accumulate 10,000 hours of driving experience to become skilled, but a self-driving system may accumulate on the order of a trillion hours of driving experience, because it can learn from every vehicle, never forgets, and never tires. So in the long run we can indeed expect greater safety from autonomous driving. But how can self-driving vehicles legally get on the road in the short term? Governments will only approve the widespread use of autonomous driving on the premise that it is "safer than human driving." Today, about 1.35 million people die in car accidents around the world every year, including nearly 100,000 in China.
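To make the dispatching idea tangible, here is a toy greedy dispatcher that assigns the nearest idle vehicle to each ride request. A real scheduler would also weigh waiting time, charging state, and predicted demand; all names and numbers below are illustrative only.

```python
# Toy fleet dispatcher: greedily match each request to the nearest idle vehicle.
from typing import Dict, List, Tuple

Point = Tuple[float, float]


def manhattan(a: Point, b: Point) -> float:
    return abs(a[0] - b[0]) + abs(a[1] - b[1])


def dispatch(requests: List[Point], idle_vehicles: Dict[str, Point]) -> Dict[str, Point]:
    """Assign each request to the nearest still-unassigned idle vehicle."""
    assignments: Dict[str, Point] = {}
    available = dict(idle_vehicles)
    for pickup in requests:
        if not available:
            break  # no idle vehicles left; request waits for the next round
        vehicle_id = min(available, key=lambda v: manhattan(available[v], pickup))
        assignments[vehicle_id] = pickup
        del available[vehicle_id]
    return assignments


fleet = {"av_1": (0.0, 0.0), "av_2": (5.0, 5.0)}
riders = [(4.0, 4.0), (1.0, 0.0)]
print(dispatch(riders, fleet))  # {'av_2': (4.0, 4.0), 'av_1': (1.0, 0.0)}
```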

Therefore, any self-driving technology must first be proven to be at least as safe as human driving. Once the first self-driving vehicles that are "safer than human driving" are launched, the AI will keep learning from more data and improving itself; within ten years, the number of deaths from car accidents is expected to drop significantly.

According to statistics, Americans spend an average of 8.5 hours a week driving. In the coming era of autonomous driving, people will regain those 8.5 precious hours: the interiors of autonomous vehicles will be redesigned so that people can work, socialize, be entertained, and even sleep in the car. Since many daily trips involve only one or two people, shared self-driving vehicles can be designed as small cars, yet even a small car can be fitted with comfortable recliners, a refrigerator stocked with drinks and snacks, and a large screen for video calls or entertainment. AI is characterized by a virtuous cycle: more data leads to better AI, more effective automation leads to higher efficiency, more frequent use leads to lower cost, and more free time leads to higher productivity. These effects reinforce one another and will accelerate the spread of autonomous driving technology.

As automation levels and communication technology improve, self-driving vehicles will be able to communicate with one another quickly, accurately, and effortlessly. For example, a vehicle with a flat tire can tell nearby vehicles not to approach; an overtaking vehicle can transmit its precise trajectory to nearby vehicles, so that two cars can pass within just 5 centimeters of each other without a scratch; when a passenger is in a hurry, their vehicle can offer nearby vehicles an incentive to slow down and give way (for example, paying the other party 1 yuan) in exchange for being allowed to overtake. As these changes unfold, a new, AI-dominated transportation infrastructure will take shape, and human driving will come to be seen as a road-safety hazard. In a few decades, driving by humans may even become illegal, perhaps beginning with a ban on human driving on highways and eventually extending to all public roads. By then, car lovers may have to go to private entertainment areas or racetracks to touch a steering wheel, just as equestrian enthusiasts do today.
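As a playful sketch of the "pay 1 yuan to be let through" idea, the code below models an overtake request carrying a small incentive and a simple rule for whether the other vehicle yields; every field and threshold is invented purely for illustration.

```python
# Toy vehicle-to-vehicle overtake negotiation with a small incentive.
from dataclasses import dataclass


@dataclass
class OvertakeRequest:
    from_vehicle: str
    offered_incentive_yuan: float
    urgency: str  # e.g. "medical", "late_for_meeting", "none"


def should_yield(req: OvertakeRequest, my_delay_if_yield_s: float) -> bool:
    """Yield if the request is urgent, the delay is tiny, or the incentive
    outweighs an assumed 'price' of 0.1 yuan per second of delay."""
    if req.urgency == "medical":
        return True
    if my_delay_if_yield_s <= 2.0:
        return True
    return req.offered_incentive_yuan >= 0.1 * my_delay_if_yield_s


print(should_yield(OvertakeRequest("av_42", 1.0, "late_for_meeting"), 8.0))  # True
print(should_yield(OvertakeRequest("av_42", 0.2, "none"), 15.0))             # False
```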

As self-driving vehicles, the underlying technology, and shared-car services mature, fewer and fewer people will buy cars (which in fact reduces household expenses). Future shared self-driving vehicles can operate efficiently around the clock without needing to park, and the total number of vehicles will fall significantly, so we will hardly need parking lots anymore. According to statistics, vehicles currently sit idle in parking lots 95 percent of the time, so many parking lots are in effect a serious waste of land. Overall, the changes brought by shared autonomous vehicles will reduce traffic congestion, cut fuel consumption, ease air pollution, free up urban space, and make people's lives and the global environment better. Of course, alongside the gains in productivity, other parts of human society will be severely affected. First, in the era of autonomous driving, the drivers of taxis, trucks, buses, delivery vans and similar vehicles will largely lose their jobs.

Currently, millions of people in China make a living driving taxis or trucks, and many more work as part-time drivers in express delivery, logistics, and other industries; people in these jobs will gradually be replaced by AI. Second, some traditional occupations will also be upended by autonomous driving. Because the new generation of cars will be driven by electronics and software rather than relying mainly on mechanical parts, workers in automotive mechanical maintenance will need to retrain in electronics and software; gas stations, parking lots, and car dealerships will shrink significantly, and so will the number of people they employ. In short, the work that many people rely on to support their families will be completely transformed, just as travel once evolved from horse-drawn carriages to cars.

Non-technical problems hindering L5 autonomous driving

In the process of popularizing autonomous driving, we will need to solve many challenging non-technical problems involving ethics, responsibility and obligation, and public opinion. This is to be expected: more than a million lives a year are at stake, not to mention that autonomous driving will reshape many industries and affect the jobs of hundreds of millions of people.

In some cases, vehicles may be forced to make painful ethical decisions. The most famous ethical dilemma is the "trolley problem": a trolley is out of control and about to kill A and B; as the driver, should you pull the lever to switch the trolley onto another track, where it will kill C instead? If you think the answer is obvious, what if C is a child? What if C is your child? What if the car is your car and C is your child? Today, if a human driver's actions cause a fatal crash, they must answer to a judicial process that determines whether they acted appropriately; if they are found to have acted improperly, the consequences are easy to imagine.

But what if an AI causes a death? Can the AI explain its decision with reasons that humans can understand and that are legally acceptable? Bear in mind that "explainable AI" is hard to achieve: AI is usually trained from data, and its "answer" is a complex set of mathematical equations that must be heavily simplified before humans can understand it.

Moreover, some AI decisions look "foolish" to humans (because AI lacks human common sense), and conversely, some human decisions are "unfathomable" to AI (because AI cannot understand why humans engage in behaviors, such as drunk or fatigued driving, that harm both others and themselves). There are other issues as well: autonomous driving saves people millions of hours of driving time, but it also threatens the livelihoods of millions of professional drivers. How should we strike a balance between the two? Perhaps in five years AI will have accumulated billions of kilometers of driving experience, the safety of autonomous driving will have improved, and the 1.35 million annual deaths from car accidents could be halved. During the transition period, however, AI may make mistakes that a human driver would not make. Is that acceptable?

The fundamental question here is, should we let a machine make decisions that could endanger human lives? If the answer is absolutely no, then maybe we should end research on autonomous driving altogether. Life is precious. Clearly, every company involved in autonomous driving must proceed with caution.

To address this problem, there are currently two typical approaches. The first is to be cautious before launching self-driving products, slowly collecting data in an absolutely safe environment and avoiding any fatal accident; this is the approach of Waymo, the self-driving subsidiary of Google's parent company. The second is to launch self-driving products as soon as they are merely "relatively safe," in order to scale up the collection of real-world data; this approach may cause more fatal accidents at first, but the AI system is bound to save more lives in the future. This is what Tesla does. Which approach is better? Even two very rational people evaluating them may reach different conclusions.

Another important question: if someone dies in a car accident, who is responsible? The car manufacturer? The supplier of the AI algorithm software? The engineer who wrote the algorithm? Or the human driver who was supposed to take over when necessary? We do not have a clear answer now, but we need one soon. History shows that only by clarifying where responsibility lies can new industry rules be built around it. For example, credit card companies, not banks, stores, or cardholders, bear the losses caused by fraud; this rule allows credit card companies to charge fees to the other parties and use that revenue to prevent fraud, which is how the credit card ecosystem and business model were successfully established.

In the same way, with the era of autonomous driving approaching, the relevant authorities need to clarify liability for traffic accidents as soon as possible. Suppose liability lies with the supplier of the AI algorithm software: if software developed by Waymo causes a fatal accident, how much compensation can the victim's family claim from Waymo's parent company Alphabet (which is also Google's parent company)? The latter's market value exceeds one trillion US dollars, and it cannot be ruled out that someone will demand an exorbitant sum, which would make the problem very thorny. So on the one hand we need clear legal provisions to protect the rights and interests of victims of software defects, and on the other we need to ensure that technological progress is not stalled by excessive claims. Finally, ordinary traffic accidents (other than serious ones) rarely make headlines these days, yet when an Uber self-driving test vehicle killed a pedestrian in the Phoenix area of Arizona in 2018, the accident made headlines in major media outlets around the world within days.

Uber's autonomous driving system may well have had problems, but if the media report on every future fatal accident involving autonomous driving in this way, the new technology may come under enormous public pressure.

Moreover, if the media use condemning headlines for every fatal accident involving autonomous driving, public confidence in the industry could be destroyed in the short term, even though, in the long term, autonomous driving is expected to save millions of lives worldwide. In fact, all of the issues above could trigger public panic, prompt governments to tighten regulation of the new technology or become more conservative in promoting it, and delay the timetable for deploying autonomous driving that I predicted earlier.

The issues discussed above, job displacement, ethics, legal accountability, and public opinion, are all legitimate and difficult ones. I believe we need to raise awareness, encourage full discussion among all parties involved, and work out feasible solutions to these difficult problems as quickly as possible. Only then, when autonomous driving technology matures, will we be prepared at every non-technical level to welcome its arrival.

Origin blog.csdn.net/dongbao520/article/details/135222806