The future of driverless driving: how do we get there in the post-epidemic era?

Autonomous driving: proving its worth during the epidemic

In 2020, the sudden outbreak of COVID-19 ravaged the world, forcing people in many countries into quarantine or social distancing and making person-to-person contact extremely risky. For a time, people eagerly awaited the arrival of "unmanned" technologies, such as AI-powered robots and autonomous driving, that could effectively remove the need for human contact. During the epidemic, from contactless temperature measurement to remote work, cloud meetings, online education, online press conferences, cloud shopping and cloud tourism, artificial intelligence showed its power in the battle against the virus. Business models that did not exist before were created one after another, and AI's "enabling" role across many industries began to emerge and quickly won people's favor.

In particular, some material transportation and logistics operators adopted unmanned vehicle delivery to avoid direct contact between people; some hospitals deployed unmanned disinfection and cleaning vehicles; others used unmanned vehicles to deliver meals to hospitalized patients. There were many vivid examples like these during the epidemic.

According to statistics, more than a dozen prefecture-level cities in China, including Beijing, Shanghai and Guangzhou, adopted autonomous driving during this period, and deployments of these vehicles numbered in the hundreds. Nor was this limited to China: abroad, France's NAVYA cooperated with clinics in the United States and began delivering medicines and test samples by unmanned vehicle.

The actual deployment of autonomous driving during this epidemic was a small trial, but it was enough to give people a new understanding of the technology, revealing broad prospects and huge market potential in the field. Autonomous driving has long been one of humanity's visions of the future, yet many challenges remain before truly autonomous driving is achieved. How to commercialize quickly is a question that many companies investing in autonomous driving are watching closely. In the past, the focus was on the unmanned vehicle as a means of transportation, mainly for moving people; now, it looks more like an "assistant" that can take over simple, repetitive work.

For the auto industry to create the most efficient and safest self-driving cars, automotive investors must adopt a consumer-first mindset to stay ahead. In fact, for most drivers the current driving experience is still poor. Hands-free systems for navigation, communication and entertainment are supposed to minimize distraction, but today's weak voice recognition is often frustrating and instead creates more unnecessary distraction for the driver. New self-driving and driver-assistance features have improved and can provide some help, but they are far from ideal. Only companies committed to improving the consumer experience, whether the in-car experience of passengers and drivers or the out-of-car experience that improves safety and autonomy, will ultimately stand out from the competition.

At present, the epidemic has been brought under a degree of control in China. It will eventually dissipate, though no one dares predict whether it will return. So what will driving look like in the future, and when will that future arrive?

The future of autonomous driving involves more than technology: issues and challenges

While AI technology is maturing rapidly, its development involves more than technology alone: it also includes regulatory, business and product challenges, social acceptance, and the emergence of new technologies. For autonomous driving, the issues mainly fall into several areas: complexity, security, localization and retraining.

First, take the example of transporting children to and from school.

Getting from point A to point B is not just a technology problem: who is responsible for the safety of the bus? The government, the bus manufacturer, the AI software engineers, or some combination of them? What happens if something goes wrong? How is children's behavior monitored during the trip, and how is responsibility handed over from bus to school? Answering each of these questions properly will require legislation, regulation, and comprehensive involvement from the insurance industry.

Second, vendors must figure out how to collect and process large amounts of data to support thousands of self-driving cars interacting simultaneously.

Before going into production, vendors must also be able to prove that the product is safe, reliable, and able to withstand malicious network attacks. Finally, they must develop a business model that supports scaling the solution. Not everyone is interested in self-driving cars, so there may be strong, even forceful, resistance from people who are conservative in thought and behavior. In other words, the first thing autonomous driving must solve is perception and awareness, in a word: trust. In fact, these problems arise whenever a major new technology emerges. To some extent, how we manage these issues around autonomous and self-driving cars will shape how people accept this drastic social change, and how its long-term impact on social progress, for better or worse, is judged.

With the arrival of new technologies such as 5G, IoT and AI, everything in the physical world will be mapped into the digital world, and driverless driving will soon enter the era of intelligent driving perception. In other words, the road itself will become intelligent. Deploying large numbers of RSUs (Road Side Units) along the road, combined with LiDAR scanning, lets the car and the road exchange information in real time, greatly improving the accuracy of an unmanned vehicle's path planning and decision-making. The commercial deployment of 5G and the introduction of other new technologies will inevitably bring new opportunities and challenges to the autonomous vehicle industry and to the technology routes chosen by governments.
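
To make the idea concrete, here is a minimal Python sketch of how a vehicle might fold a roadside-unit message into its path planning. The message schema is purely illustrative; real deployments use standardized V2X stacks such as ETSI ITS-G5 or C-V2X, not this toy format.

```python
# A hypothetical RSU message and a toy planner update, for illustration only.
from dataclasses import dataclass

@dataclass
class RsuMessage:
    rsu_id: str
    kind: str            # e.g. "hazard", "signal_phase", "road_works"
    position_m: tuple    # (x, y) in a shared map frame, metres
    payload: dict

def update_plan(current_route: list, msg: RsuMessage) -> list:
    """Adjust a planned route (list of (x, y) waypoints) using roadside input."""
    if msg.kind == "hazard":
        # Drop waypoints that pass within 25 m of the reported hazard.
        hx, hy = msg.position_m
        return [(x, y) for (x, y) in current_route
                if (x - hx) ** 2 + (y - hy) ** 2 > 25.0 ** 2]
    return current_route

route = [(0.0, 0.0), (50.0, 0.0), (100.0, 0.0)]
msg = RsuMessage("rsu-17", "hazard", (50.0, 0.0), {"type": "debris"})
print(update_plan(route, msg))  # the waypoint near the hazard is filtered out
```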

Complexity: Companies often underestimate how complexity affects their projects. A reliable data partner can bring expertise, guidance and insight to the business. For example, the larger the ontology, the more complex the project. An experienced data partner will help you work out how much extra time and cost this will require and find a solution aligned with your overall business goals, which is especially critical when working with images and video.

Localization: Localization is especially important in the automotive industry. Because automakers design their models for multiple markets, they must account for different languages, cultures and demographics to tailor the consumer experience properly. A localization project is an ideal first collaboration with a data partner, who can draw on a team of experienced language experts to develop style guides and voice personas (formal, casual, and so on) and optimize across languages.

Security: Much of the data collected by the automotive industry is sensitive and requires additional security measures. An ideal data partner will not only offer a variety of security options but also maintain strict security standards even at the most basic level, ensuring your data is handled correctly. Look for a data partner that offers options such as secure data access (critical for PII and PHI), secure crowdsourcing and on-site service options, private cloud deployment, on-premises deployment, and SAML-based single sign-on.

Retraining: McKinsey estimates that a third of live AI products require monthly updates to adapt to changing conditions such as model drift or use-case shifts. Many businesses skip this critical step or set it aside entirely, which risks undermining the long-term success of AI projects deployed at scale and the ability to demonstrate their ROI. Retraining lets you iterate on your model, making it more accurate and more successful, ideally with a data partner relabeling the data and human evaluators analyzing low-confidence predictions.
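
As a rough illustration of what such a retraining trigger can look like, the sketch below watches a rolling window of live, human-verified predictions and flags the model when accuracy drifts below a baseline. The window size and thresholds are illustrative assumptions, not recommendations.

```python
# A minimal drift monitor: flag the model for retraining when live accuracy
# on human-verified labels falls below baseline minus a tolerance.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 500, baseline: float = 0.92,
                 tolerance: float = 0.05):
        self.results = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.baseline = baseline
        self.tolerance = tolerance

    def record(self, prediction, verified_label) -> None:
        """Log one live prediction against its human-verified label."""
        self.results.append(1 if prediction == verified_label else 0)

    def needs_retraining(self) -> bool:
        if len(self.results) < self.results.maxlen:
            return False  # not enough live data collected yet
        live_accuracy = sum(self.results) / len(self.results)
        return live_accuracy < self.baseline - self.tolerance
```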

Smart cockpit powered by AI

Smart cockpits powered by AI have become a signature of many automotive brands. Automakers are partnering, or seeking to partner, with relevant ecosystem providers to create more value for customers. AI-powered smart cockpits bring many benefits, including improved driver experience and safety as well as intuitive in-car assistants. With the help of training data, AI can be adopted and deployed at scale to improve both the in-car and out-of-car experience.

As competition in fully autonomous vehicles intensifies, a standard defining six levels of autonomy (the SAE levels) has been established so that automakers, suppliers and policymakers can discuss and compare systems. These six levels map to different consumer experiences, with the most significant change occurring between Level 2 (L2) and Level 3 (L3): in the transition from L2 to L3, responsibility for monitoring the car shifts from the driver to the system. Because autonomous driving spans these different levels, focusing on the consumer experience can bring quick wins in the in-car and out-of-car experience, and those successes lend themselves well to scaling.
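
For reference, the six levels come from SAE J3016. The short sketch below encodes them, along with the L2-to-L3 monitoring handover described above.

```python
# The six SAE J3016 driving-automation levels, with the key L2 -> L3 shift.
from enum import IntEnum

class SaeLevel(IntEnum):
    L0_NO_AUTOMATION = 0       # driver does everything
    L1_DRIVER_ASSISTANCE = 1   # steering OR speed assist; driver monitors
    L2_PARTIAL_AUTOMATION = 2  # steering AND speed assist; driver monitors
    L3_CONDITIONAL = 3         # system monitors within its design domain
    L4_HIGH_AUTOMATION = 4     # no driver fallback needed within its domain
    L5_FULL_AUTOMATION = 5     # all conditions, no driver at all

def system_monitors(level: SaeLevel) -> bool:
    """The shift discussed above: from L3 upward, the system does the monitoring."""
    return level >= SaeLevel.L3_CONDITIONAL
```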

The in-car experience is often described as an AI-powered cockpit that encompasses the entire user experience, including the driver and all passengers, and aims to create a smarter, more enjoyable time in the car. It spans AI applied to intelligent driver-assistance features that improve safety, as well as infotainment systems that not only provide navigation for the driver but also recommend relevant services to rear-seat passengers.

When it comes to the out-of-car experience, although companies are doing their best to achieve Level 5 autonomous driving, AI-driven smart cars still require higher levels of computer vision and computing power: radar and camera sensors stream large amounts of data every second to deal with hazardous road conditions, objects on the road, and road signs.

Thanks to the latest research in machine learning models for computer vision, opportunities in AI-powered autonomous driving focus on how to leverage LiDAR, video object tracking and sensor data to support computer vision. These technologies help cars “see” and “think” as they drive from point A to point B. Data annotation services that help train models to perform tasks include:

Point cloud labeling (LiDAR, radar)

Understand the scene in front of, behind and around your car by identifying and tracking objects in the scene. Merge point cloud data and video streams into a scene to be annotated. Point cloud data helps your model understand what's going on around your car.

[Figure: 2D/3D point cloud annotation applied in intelligent driving (demonstration image)]
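
As a rough sketch of what a point cloud annotation involves, the example below applies a single 3D box label to raw LiDAR points. The label schema is hypothetical and the box is axis-aligned for brevity; real label formats also carry heading angles and sensor metadata.

```python
# Apply one illustrative 3D box label to a stand-in LiDAR sweep.
import numpy as np

points = np.random.uniform(-50, 50, size=(100_000, 3))  # stand-in LiDAR points

label = {
    "category": "pedestrian",
    "center": np.array([12.0, -3.5, 0.9]),  # metres, vehicle frame
    "size": np.array([0.8, 0.8, 1.8]),      # width, depth, height
    "track_id": 42,                          # stays stable across frames
}

# Select the points that fall inside the labelled box.
half = label["size"] / 2
inside = np.all(np.abs(points - label["center"]) <= half, axis=1)
print(f"{inside.sum()} points belong to {label['category']}")
```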

2D annotation, including semantic segmentation

Help your model better understand visible-light camera input. Find a data partner that can help you create bounding boxes at scale or highly detailed pixel-level masks for your custom ontology.
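
To illustrate the two 2D label types, here is a minimal sketch of a bounding box and a pixel-level semantic mask. The COCO-like field names and class ids are assumptions for illustration, not any particular vendor's format.

```python
# One bounding-box label and a coarse semantic mask derived from it.
import numpy as np

bbox_annotation = {
    "image_id": "frame_000123.jpg",
    "category": "pedestrian",
    "bbox_xywh": [412, 88, 36, 90],  # x, y, width, height in pixels
}

# Semantic segmentation: one class id per pixel over the whole image.
CLASS_IDS = {"background": 0, "road": 1, "vehicle": 2, "pedestrian": 3}
mask = np.zeros((720, 1280), dtype=np.uint8)      # everything "background"
x, y, w, h = bbox_annotation["bbox_xywh"]
mask[y:y + h, x:x + w] = CLASS_IDS["pedestrian"]  # coarse region label
print((mask == CLASS_IDS["pedestrian"]).sum(), "pedestrian pixels")
```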

Video object and event tracking

Your model must understand how objects move over time, and your data partner should assist you in labeling temporal events. Track objects in your scene, such as other cars and pedestrians, as they enter and leave your area of interest across multiple frames of video and LiDAR scenes. It is critical to maintain a consistent understanding of an object's identity throughout the video, no matter how many times the object appears and disappears.
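
The sketch below shows the core of this identity problem with a minimal IoU-based matcher; production trackers such as SORT or DeepSORT add motion models and re-identification on top of this idea.

```python
# Greedy IoU matching: keep a stable id for each tracked box across frames.
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def assign_ids(tracks, detections, next_id, threshold=0.3):
    """Match new detections to existing tracks {id: box}; unmatched get new ids."""
    updated = {}
    for det in detections:
        best = max(tracks, key=lambda t: iou(tracks[t], det), default=None)
        if best is not None and iou(tracks[best], det) >= threshold:
            updated[best] = det      # same object keeps its id
            tracks.pop(best)
        else:
            updated[next_id] = det   # a new object enters the scene
            next_id += 1
    return updated, next_id          # tracks left unmatched have exited
```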

In the past, to train AI models effectively, enterprises had to rely on multiple vendors and applications to collect, prepare and integrate all the data. Things are different now. Whether you are building a Level 1 or Level 5 autonomous driving solution, improving driver-assistance features, or something in between, a reliable collection and labeling partner can provide a unified offering to train and test your vehicle's AI system on a single platform.

The key to the era of intelligent driving: the high-quality data behind it

Appen's research and experience show that to move AI pilot projects into large-scale deployments that generate tangible profit, enterprises should focus on one key goal; this is one of the simplest paths to success. Most companies find early success by building AI that has a positive impact on the consumer experience: whether a passenger or driver inside the vehicle, or someone standing outside it, everyone gains greater safety and autonomy. Although great progress has been made in this area, driverless cars will not become widely available within the next few years; this cannot happen overnight. Artificial intelligence is driving profound change in the automotive industry, and as the driverless era draws closer, AI and automotive technology are becoming ever more tightly intertwined. We already have the basic technologies needed for self-driving cars, and we even know how to put them together, but that is very different from running an entire self-driving system at scale.

Companies investing heavily in driverless technology and the future of the connected car have often had to rely on multiple vendors and applications to collect, annotate, prepare and aggregate all of their data in order to train AI models efficiently. Self-driving cars are complex machines powered by sophisticated machine learning algorithms. As the car moves, these models process many types of data, much as a driver looks through the windshield and monitors what is happening inside and outside the car. For a car to "see", "hear", "understand", "talk" and "think", video, image, audio, text, LiDAR and sensor data must be collected, structured and processed so that machine learning models can make sense of it.

The car must extract meaning from vast amounts of 2D/3D imagery: identifying trees and pedestrians, recognizing dynamic road conditions, listening to commands, and understanding changes in the external environment, then feeding this information back to the car's AI to support decision-making and improve the algorithms on the road to Level 5 autonomy. The same applies to intelligent driving and the smart cockpit: with advances in speech recognition, and with LiDAR and cameras that can track driver emotion, the next important step in the human-machine interface is to integrate these technologies so the car can recognize both the speaker's words and emotional state, distinguish whether the user is happy or frustrated, and respond accordingly. Through this kind of in-car sentiment monitoring, behavior can be understood and predicted to deliver excellent human-vehicle interaction.

For self-driving cars, as in healthcare and other scenarios where risk management is critical, the training data must be annotated and verified by humans at scale in order for the system to function in fast-changing, complex real-world driving scenarios. Machine learning systems require large amounts of specially tuned training data drawn from diverse driving environments. Creating high-quality training data of this kind starts with human labeling. For example, when training a computer vision solution, annotators must label the LiDAR data collected by the sensors, outlining all the pixels in an image that contain trees, traffic signs and so on; the system then learns to recognize these objects, but it needs a great many examples. Fortunately, tools now on the market, including Appen's machine learning-assisted LiDAR, video, event and pixel-level labeling, along with speech and natural language tooling, can help accelerate these tasks and meet the growing demand for structured data. By interconnecting these tools and workflows, companies can accelerate the development of autonomous driving capabilities, improve productivity, and win in the market.

As competition in the autonomous vehicle market intensifies, high-quality training data at scale remains a major challenge for the automotive industry. Add the fact that cars must not only comply with strict national and regional regulations but also understand hundreds of languages and dialects, and the challenge becomes enormous. The biases involved cannot be ignored either. For example, a native English-speaking man driving a car built for the U.S. market will typically see a higher speech recognition success rate than a female driver whose native language is not English. In short, speech recognition systems that rely mainly on data collected and annotated from native English-speaking male voices are prone to failure with other voices. The same goes for the visual data used in accident avoidance and autonomous driving: if the training data is collected only in clear daytime weather, the system will respond poorly at night or in the rain.
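
One practical countermeasure is to evaluate models per demographic slice rather than as a single aggregate number, so gaps like the one described become visible. Below is a minimal sketch with made-up sample data; the slice keys are illustrative.

```python
# Per-slice accuracy: surface data gaps that an aggregate metric hides.
from collections import defaultdict

def accuracy_by_slice(samples):
    """samples: iterable of (slice_key, was_recognized) pairs."""
    hits, totals = defaultdict(int), defaultdict(int)
    for key, ok in samples:
        totals[key] += 1
        hits[key] += int(ok)
    return {k: hits[k] / totals[k] for k in totals}

results = accuracy_by_slice([
    (("male", "native"), True), (("male", "native"), True),
    (("female", "non-native"), False), (("female", "non-native"), True),
])
for slice_key, acc in sorted(results.items()):
    print(slice_key, f"{acc:.0%}")
```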

Work with data partners to accelerate AI from experiments to production

When it comes to truly adopting a pilot strategy and delivering ROI, many projects fail to produce meaningful results. Executives hold back, the CIO is unimpressed, and experiments are terminated because their value cannot be demonstrated. Managers then struggle to prove the project's worth and are reluctant to fund further trials. To ensure that your AI experiments do more than just look good, invest directly in high-quality training data rather than spending 80% of the project's time preparing it; the payoff per unit of effort is far greater.

Many AI projects begin by collecting whatever data is immediately available and then trying to work out how to use it. By adopting an approach designed to scale your model beyond the pilot, you can avoid relying on generic data (data scraped from public sources and the web, along with dirty or "dark" data) and instead focus on collecting specific data tied to tangible goals and use cases. To succeed, the data must be reliable, clean and adequately annotated, with a team dedicated to maintaining it and more specialized work outsourced.

To launch a world-class AI program, you should look to a data partner to provide you with reliable, high-quality training data that allows you to scale through five key stages:

Experiments: A data partner gives you reliable training data from the experimental stage onward, ensuring your models can scale quickly. The partner can also help you label low-confidence data or data from edge-case scenarios.

Data annotation: After small-scale experiments, large amounts of training data are usually required. Massive datasets are used at this stage to ensure the model adapts to every scenario, is free of bias, and performs as expected. This data must also be accurate; otherwise the model will not train correctly, urgent business problems will be delayed, and stakeholders may refuse to approve a scaled deployment. Enlisting experts in data labeling and collection helps businesses significantly reduce the time spent acquiring data while ensuring the highest possible accuracy.

Testing and validation: After training the model, you fine-tune it by validating against a dataset that was not used in training. During this phase, enterprises can check whether the data is labeled with the correct intent and make sure the model shows no bias and does not fail on edge cases. This yields an unbiased estimate of the tuned model's final skill.
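
A minimal sketch of such a held-out validation, using scikit-learn and stand-in data: the 20% of samples held out are never seen in training, so the reported accuracy is an unbiased estimate.

```python
# Hold out a validation split and score the model on data it never trained on.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X = np.random.rand(1000, 8)                # stand-in annotated features
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)  # stand-in labels

X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, random_state=0)   # 20% held out, never trained on

model = LogisticRegression().fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```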

Scale deployment to production: If the model succeeds in both the testing and validation phases, it is time to scale the deployment. Organizations can continue to evaluate and verify low-confidence answers, but at this point they should feel confident expanding beyond the experiment.

Retraining: Scaling is complete, but how accurate will your model remain once it is fully deployed? Regular retraining is critical to avoid model drift and to handle shifts in the use case.

The future of transportation will be built on world-class AI, ultra-fast connectivity and environmental impact, so the range of potential AI use cases is very wide. And while enterprise AI and machine learning use cases are becoming increasingly diverse (from supply chain and manufacturing to self-driving cars and mobility-as-a-service), applications focused on the consumer experience remain the most common and the most successfully deployed at scale. This is because both in-car and out-of-car experiences are tied directly to clear KPIs, and many automotive companies hold a wealth of untapped data they can leverage to improve those experiences.

Therefore, ensuring sufficient unbiased training data for multimodal visual and speech recognition systems requires a large pool of annotators spanning different geographies, cultures, genders and languages. All of this data must be collected and annotated by domain experts and used to train and improve machine learning models quickly, efficiently and at scale. Appen is an industry expert with more than 15 years of experience in the autonomous vehicle field, with deep industry insight and a rich history of cooperation with the world's top ten automakers. It provides scenario training data for commercial solutions such as autonomous driving and smart cockpits, including multi-sensor-fusion LiDAR point cloud annotation, PLSS, machine learning-assisted computer vision annotation tools, and in-car data collection, covering more than 180 languages around the world.
