Self-driving cars: Artificial Intelligence’s most challenging task

Self-driving cars are often described as the end state of the automobile industry, a technology that will completely transform transportation. Just a few years ago, hype about self-driving cars was everywhere, so what's really going on? Where is the self-driving revolution that so many companies promised we would have by 2021? It turns out that building self-driving cars is much harder than imagined. Let's take a look at where autonomous vehicles stand today, why they pose one of the most challenging tasks of our time, and what we can do about it.

The development status of autonomous vehicles

Autonomous vehicles have a huge future: they promise to transform our roads and create a safer driving experience. After all, statistics show that more than 90% of traffic accidents are caused by human error. Back in 2015 and 2016, many automakers announced major plans to put fully autonomous commercial vehicles on the road within a few years, but we are now well past those original timelines. It's an exciting time for the automotive industry, yet the hype far exceeds reality. So what progress has actually been made toward fully autonomous vehicles? The SAE (Society of Automotive Engineers) Levels of Driving Automation, a widely accepted framework, help us evaluate that progress. Automation is divided into six levels, from Level 0 (no automation) to Level 5 (full automation).

  • Level 0: No automation (the driver has full control of the car)
  • Level 1: Driver assistance
  • Level 2: Partial automation
  • Level 3: Conditional automation
  • Level 4: High automation
  • Level 5: Full automation (true self-driving cars)

Currently, most cars sold are at least Level 1, offering driver-assistance features such as lane assist or adaptive cruise control. Tesla Autopilot is Level 2: it can control steering and speed, but the driver must still pay close attention and be ready to take over manually at any time. In March 2021, Honda launched a model that reached Level 3, the Legend sedan, which requires manual driving by the driver only under very specific conditions. At Level 4, several companies are making notable progress, including General Motors, Daimler, and Google. Google's Waymo, for example, offers fully autonomous driving within specific geofenced areas, namely certain suburbs of Arizona and several other controlled locations. We expect this technology to become more widely available in 2024 and 2025.

No car on the market today has reached Level 5 autonomous driving, and companies have delayed their deployment schedules after recognizing the enormous challenges inherent in fully autonomous driving. One positive outcome of this incremental development is that cars gain automation gradually rather than all at once, which helps build customer trust. It is hard to say when we will see the driverless-car revolution. Rather than making more predictions that may not come true, we should focus on solving the challenges that stand in the way of specific goals.

Why is building self-driving cars so challenging?

Ultimately, the problem is that it's extremely difficult to build a fully autonomous car that can adapt to every situation. This is more complicated than auto experts realized when they started forecasting, so companies are either pushing back their timelines, selling off their self-driving car divisions, or revamping their development methods. Let’s talk about why self-driving car projects are so difficult:

  • The world is too complex. Autonomous vehicles must navigate a highly complex world that includes roads, street signs, pedestrians, other vehicles, buildings, and more.
  • Human beings are elusive. Self-driving cars not only need to understand their own drivers, they also need to anticipate the behavior of other people on the road, which is notoriously hard to predict.
  • Technology is too expensive. Self-driving cars must be equipped with relevant hardware (such as cameras, lidar systems and radars) to capture information about the outside world and help the car make decisions. But this hardware needs significant improvements to provide the level of detailed data that cars require. It's not very cost-effective either.
  • Training must be comprehensive. We need to train self-driving cars for a variety of possible situations (for example, extreme weather such as snow or fog); but it is very difficult to predict all the situations that the car may encounter.
  • There is no room for error. Autonomous vehicles directly affect the safety of drivers and passengers, which is a matter of life and death. Autonomous driving systems must be extremely accurate.


Data is key

To solve the above challenges, we need to start from their root cause, and to do that we need to understand how self-driving cars work. Self-driving cars rely on artificial intelligence (AI), specifically computer vision models, to "see" the world around them and then make decisions based on what they see. Data is captured by hardware on the car (cameras, lidar, radar, and other sensors, as mentioned earlier) and fed to the model as input. For a car to react to a pedestrian on the road, for example, it must previously have seen sensor data representing that situation. In other words, the car needs to be trained with data that covers all possible scenarios and situations.

Think about your own experience riding in a car and it is easy to see how many different situations can occur on the road, which is why so much training data is needed. The pedestrian case alone requires training data that includes children and adults, people in wheelchairs, babies in strollers, and other unexpected examples. We also want self-driving models to be able to distinguish actual pedestrians from pictures of faces on billboards. As you can see, seemingly simple use cases quickly become complex.

Not only do cars require large amounts of training data, this training data also needs to be accurately annotated. An AI model cannot simply look at an image of a pedestrian and understand what it is seeing; it needs explicit labels indicating which parts of the image contain pedestrians. Because of this complexity, self-driving car AI models need to be fed many different types of annotated data:

  • Lidar and radar point cloud annotation: identifies and tracks objects in the scene
  • 2D annotation (including semantic segmentation of camera data): lets the model understand which category each pixel belongs to
  • Video object and event tracking: helps the model understand how objects in the scene move over time
  • And more
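To make the annotation requirement concrete, here is a minimal, hypothetical sketch (not any real vendor's schema) of what one annotated training frame might look like, combining a 2D bounding box with a tracking ID and labeled lidar points:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox2D:
    # Pixel coordinates of the object in a single camera frame
    x_min: float
    y_min: float
    x_max: float
    y_max: float
    label: str      # e.g. "pedestrian", "truck"
    track_id: int   # stays constant across frames, enabling video object tracking

@dataclass
class AnnotatedFrame:
    frame_id: int
    boxes: list = field(default_factory=list)         # 2D camera annotations
    lidar_points: list = field(default_factory=list)  # (x, y, z, label) tuples

# One frame with a labeled pedestrian seen by both camera and lidar
frame = AnnotatedFrame(frame_id=0)
frame.boxes.append(BoundingBox2D(120.0, 80.0, 180.0, 240.0, "pedestrian", track_id=7))
frame.lidar_points.append((4.2, -1.1, 0.3, "pedestrian"))

print(len(frame.boxes), frame.boxes[0].label)
```

Real annotation formats are far richer (per-pixel segmentation masks, 3D cuboids, occlusion flags), but even this toy structure shows why producing consistent labels across sensors and frames is labor-intensive.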

Data annotation leaves little room for error, and there is no shortage of safety-critical use cases. Ultimately, collecting and annotating data for self-driving cars is a time-consuming and resource-intensive process, and many companies do not fully appreciate this at the outset. This is why autonomous driving has been delayed to market, has underperformed, and is still not widely deployed. These same issues, however, are the key puzzles automakers need to solve in order to succeed.

Accuracy, variety and efficiency are key to safety

To learn more about the key considerations for autonomous vehicle data, we turned to Appen data scientist Xiaorui Yang, who specializes in computer vision research.

Accuracy: Accurately sensing the surrounding environment and detecting and avoiding hazards are crucial for an autonomous vehicle to complete its transportation tasks. The data must be accurate enough for the AI model to learn from it, because only accurate inferences about the locations of obstacles allow the car to make reasonable decisions. For example, if the model cannot accurately detect a truck moving laterally in the nearest lane, the result is often incorrect braking, which greatly degrades the user experience.

Diversity of scenarios: Conditions in the real world vary widely: rain, snow, and fog; different lighting such as bright sunlight, dark nights, or the overcast sky before a heavy storm. Self-driving cars should be able to handle all of these scenarios, so the training data should include both common and rare situations.

Diversity of modalities: Sensors behave differently in different environments. Lidar performance, for example, degrades in rain or snow because of its physical characteristics, and a camera intuitively cannot see as far at night as it can during the day. This is why most companies still use multiple types of sensors that complement one another in difficult sensing conditions.
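The complementary-sensor idea can be illustrated with a toy fusion rule: weight each sensor's detection confidence by an assumed reliability for the current conditions. The weights and confidences below are purely illustrative, not real sensor specifications:

```python
def fused_confidence(detections, weights):
    """Weighted average of per-sensor detection confidences.

    detections: per-sensor confidence that an object is present,
                e.g. {"camera": 0.4, "lidar": 0.9, "radar": 0.8}
    weights:    assumed per-sensor reliability in the current conditions
    """
    total_weight = sum(weights[s] for s in detections)
    return sum(detections[s] * weights[s] for s in detections) / total_weight

# At night the camera is down-weighted, so lidar and radar carry the decision
# even though the camera barely sees the object.
night_weights = {"camera": 0.2, "lidar": 1.0, "radar": 1.0}
score = fused_confidence({"camera": 0.3, "lidar": 0.9, "radar": 0.85}, night_weights)
print(round(score, 3))
```

Production systems use far more sophisticated fusion (Kalman filters, learned fusion networks), but the principle is the same: no single sensor is trusted in all conditions.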

Efficiency: When companies trial self-driving cars in a new country or city, the efficiency of data delivery is critical to how the trial progresses. If labeled training data is not ready on time, the risk of project delays increases. A good data partner should be able to deliver data on time, with the help of advanced perception models, and free up time for other time-consuming tasks.

Origin blog.csdn.net/Appen_China/article/details/134421867