Autonomous Driving Simulation Science Series, Part 2: Where Is the Difficulty in Sensor Simulation?


This is the second article in our popular-science series on autonomous driving simulation. The previous article is "Autonomous Driving Simulation Science Series, Part 1: Scenario Sources, Scenario Generalization and Extraction".

In conversations with experts from many simulation companies and their downstream users, we learned that sensor modeling is one of the most difficult aspects of autonomous driving simulation.

According to Li Yue, CTO of Zhixing Zhongwei, sensor modeling can be divided into functional-information-level modeling, phenomenon-information-level/statistical-information-level modeling, and full-physics-level modeling. The differences between these concepts are as follows:

  • Functional-information-level modeling describes only a sensor's function — for example, that the camera outputs an image, or that the millimeter-wave radar detects targets within a certain range. Its main purpose is to test and verify the perception algorithm; it pays no attention to the performance of the sensor itself;

  • Phenomenon-information-level and statistical-information-level modeling is a hybrid, intermediate level that combines elements of functional-level and physics-level modeling;

  • Full-physics-level modeling simulates the entire physical chain of the sensor's operation. Its goal is to test the physical performance of the sensor itself, such as the filtering capability of a millimeter-wave radar.
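To make the lowest rung of this ladder concrete: a functional-information-level model can be as simple as filtering ground-truth objects by range and field of view, with no optics or RF physics at all. The sketch below is illustrative only — the names (`Target`, `functional_radar`) and the parameter values are our own assumptions, not any vendor's model:

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    x: float  # meters, sensor frame, x forward
    y: float  # meters, y to the left

def functional_radar(targets, max_range=150.0, fov_deg=120.0):
    """Functional-level 'radar': return the ground-truth targets that fall
    inside the sensor's range and field of view. No noise, no physics."""
    half_fov = math.radians(fov_deg) / 2.0
    detections = []
    for t in targets:
        r = math.hypot(t.x, t.y)
        bearing = math.atan2(t.y, t.x)
        if r <= max_range and abs(bearing) <= half_fov:
            detections.append(t)
    return detections

scene = [Target(50, 0), Target(200, 0), Target(10, 40)]
detections = functional_radar(scene)  # only Target(50, 0) passes both checks
```

Such a model is enough to exercise downstream logic, but it says nothing about how the real radar would actually perform — which is exactly the gap full-physics-level modeling tries to close.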

Sensor modeling in the narrow sense refers to full-physics-level modeling. Few companies can do this kind of modeling well, for the following reasons:

1. The efficiency of image rendering is not high enough

From the perspective of computer graphics, sensor simulation involves simulating light (input and output), geometry, materials, and image rendering, among other things; differences in rendering capability and efficiency directly affect the realism of the simulation.

2. Too many types of sensors & the "impossible triangle" of model accuracy, efficiency and versatility

A single high-accuracy sensor model is not enough; all the sensors need to reach an ideal state at the same time, which requires broad modeling coverage. But under cost pressure, a simulation team obviously cannot build 10 or 20 versions of, say, a radar model. On the other hand, it is difficult for one general model to express many sensors of different styles.

Model accuracy, efficiency, and versatility form an "impossible triangle": you can improve one or two of the corners, but it is hard to keep improving all three dimensions at once. When efficiency is high enough, model accuracy must drop.

A simulation expert at Cheyou Intelligence said:

"No matter how complicated the mathematical model is, it may only reproduce the real sensor with 99% similarity — and the remaining 1% may be the factor that causes fatal problems."

3. Sensor modeling is constrained by the parameters of the target object

Sensor simulation requires external data — the sensor is strongly coupled with the external-environment data. However, modeling the external environment is itself quite complicated, and the cost is not low.

Urban scenes contain a great many buildings, which heavily consume rendering compute. Some buildings occlude the traffic flow, pedestrians, and other targets on the road, and the amount of computation differs completely depending on whether or not that occlusion is modeled.

In addition, a target's reflectivity and material are difficult to capture in sensor modeling. For example, a model can state that a target is barrel-shaped, but it is hard to express whether it is an iron barrel or a plastic barrel; and even if that can be expressed, tuning these parameters in the simulation model is another enormous undertaking.
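The iron-versus-plastic-barrel problem is, at bottom, a question of what physical parameters each asset carries. A minimal sketch of such a parameter set might look like the following — the class name, field choices, and all numeric values here are illustrative placeholders, not measured data:

```python
from dataclasses import dataclass

@dataclass
class MaterialParams:
    """Per-asset physical parameters a physics-level sensor model needs.
    The values below are illustrative placeholders, not measured data."""
    name: str
    optical_reflectivity: float    # 0..1, drives lidar return intensity
    radar_cross_section_m2: float  # RCS as seen by millimeter-wave radar

# The same "barrel" geometry with different materials gives very
# different sensor returns:
iron_barrel = MaterialParams("iron barrel",
                             optical_reflectivity=0.55,
                             radar_cross_section_m2=1.0)
plastic_barrel = MaterialParams("plastic barrel",
                                optical_reflectivity=0.40,
                                radar_cross_section_m2=0.01)
```

Every asset in the scene would need such parameters, measured or estimated per material — which is why populating and tuning them across an entire urban scene is described above as a "super big project."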

If physical information such as the target's material is unclear, it is difficult to choose a suitable simulator for the simulation.

4. It is difficult to determine how much noise to add to the sensor

A simulation engineer at a Tier 1 supplier said:

"Object recognition by deep-learning algorithms is a process that goes from collecting real-world sensor data to denoising the signal. Sensor modeling is the reverse: reasonably adding noise on top of an ideal physical model. The difficulty lies in adding noise that is close enough to the real world, so that the output can be recognized by the deep-learning model and effectively improve the generalization of its recognition."

The implication is that the simulated sensor signal must be "similar enough" to real-world sensor signals (so the corresponding object can be recognized), but not "too similar" (simulating corner cases lets the perception model learn to recognize objects in more situations — generalization). The problem is that real-world sensor noise is in many cases random, so how to reproduce such noise in the simulation system is a major challenge.
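The "ideal model plus noise" recipe the engineer describes can be sketched in a few lines. Everything below is a toy assumption of ours — the range-dependent noise standard deviation and the dropout probability would in reality have to be fitted to the statistics of the actual sensor:

```python
import random

def add_lidar_noise(true_range_m, sigma0=0.02, k=0.001,
                    p_dropout=0.02, rng=random):
    """Add illustrative noise to an ideal lidar range reading:
    zero-mean Gaussian noise whose standard deviation grows with range,
    plus a small probability of dropout (no return at all).
    Noise statistics here are placeholders, not fitted to any sensor."""
    if rng.random() < p_dropout:
        return None  # lost return
    sigma = sigma0 + k * true_range_m  # noise grows with distance
    return true_range_m + rng.gauss(0.0, sigma)
```

The hard part the quote points at is not writing such a function, but choosing `sigma0`, `k`, `p_dropout`, and the noise distribution so that the result is statistically indistinguishable from the real device.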

From the perspective of sensor principles, camera modeling also involves camera blurring (first generating an ideal model, then adding noise), distortion simulation, vignetting simulation, color conversion, fisheye-effect processing, and so on. Lidar models can likewise be divided into an ideal point-cloud model (whose steps include scene clipping, visibility judgment, occlusion judgment, and position calculation), a power-attenuation model (covering received laser power, reflected laser power, receiver antenna gain, target scattering cross-section, receiver aperture, target distance, atmospheric transmission coefficient, optical transmission coefficient, etc.), and physical models that account for weather noise.
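The power-attenuation model mentioned above ties several of those factors together in a link budget. One textbook-style simplification for a diffuse (Lambertian) target is sketched below; this is our own hedged approximation of the general form, not any vendor's actual model:

```python
import math

def received_power(p_t, reflectivity, aperture_area_m2, range_m,
                   atm_transmission=0.9, optical_efficiency=0.8):
    """Simplified lidar link budget for a diffuse (Lambertian) target:

        P_r = P_t * rho * eta_opt * eta_atm^2 * A_r / (pi * R^2)

    p_t              -- transmitted laser power (W)
    reflectivity     -- target reflectivity rho (0..1)
    aperture_area_m2 -- receiver aperture area A_r
    range_m          -- target distance R
    atm_transmission -- one-way atmospheric transmission (squared: out and back)
    optical_efficiency -- receiver optical transmission coefficient
    """
    return (p_t * reflectivity * optical_efficiency
            * atm_transmission ** 2 * aperture_area_m2
            / (math.pi * range_m ** 2))
```

Even this crude form shows why the listed parameters matter: halving the target distance quadruples the received power, and atmospheric transmission enters squared because the pulse traverses the air twice.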

5. Resource constraints

An Hongwei, CEO of Zhixing Zhongwei, described how resource limits constrain perception-oriented virtual simulation:

"We need full physics-level modeling of the sensor — for example, the optical and physical parameters of the camera — and we also need data such as the material and reflectivity of the target (the sensed object). With our current manpower, building a one-kilometer scene takes about one month. Even once it is built, the model's complexity is extremely high, and it is difficult to run on current physical machines (it demands too much computing power)."

"In the future, all simulation will move to the cloud. The cloud's computing power seems 'infinite', but once it is allocated to a single model on a single node, it may be no better than a physical machine. When simulating on physical machines, if one machine's compute is not enough you can deploy three — one for the sensor model, one for vehicle dynamics, and one for planning and control. But in the cloud, the compute available to a single model in a single scene is not endless, and this limits the complexity of our models."

6. It is difficult for simulation companies to obtain the underlying data of sensors

Full physics-level modeling requires expressing every aspect of a sensor's behavior with mathematical models — for example, a specific behavior of the signal receiver, or the propagation path (the entire link of transmission through air, reflection, and refraction) written as mathematical formulas. However, at a stage when software and hardware have not been truly decoupled, the perception algorithm inside the sensor is a black box, and the simulation company cannot know what the algorithm looks like.

Full physical modeling also requires obtaining the underlying parameters of sensor components (such as the CMOS chip and ISP) and modeling those parameters. Moreover, one must understand the sensor's underlying physics — modeling the laser pulses of lidar and the electromagnetic waves of millimeter-wave radar.

In this regard, a simulation expert said:

"To do a good job in sensor modeling, you must have a deep understanding of the underlying hardware knowledge of the sensor, which is basically equivalent to knowing how to design a sensor."

However, sensor vendors are generally reluctant to open up the underlying data.

Li Yue, CTO of Zhixing Zhongwei, said:

"If you get these underlying parameters and use them for modeling, then you can basically make this sensor."

An Hongwei, CEO of Zhixing Zhongwei, said:

"Usually, when OEMs deal with sensor suppliers, even the interface protocol is not easy to obtain, to say nothing of detailed material and physical parameters. If the OEM is strong enough and the sensor supplier cooperates actively, it can obtain the interface protocol — but not everything. If even OEMs struggle to obtain these things, it is all the harder for simulation companies."

In fact, physics-level simulation of sensors can really only be done by the sensor manufacturers themselves. And since many domestic sensor manufacturers integrate external chips and other components, it is actually upstream suppliers such as TI and NXP that are positioned to simulate sensors at the physical level.

A simulation engineer of a commercial vehicle driverless company said:

"Sensor simulation is hard to do, which makes sensor selection very complicated. When we want to select a sensor, the sensor company basically sends us samples first, and then we mount the various types on the car for testing. If sensor manufacturers could cooperate with simulation companies — connecting all the interfaces and providing accurate sensor models — we could obtain sensor information at very low cost, and the workload of sensor selection would be greatly reduced."

However, 51 World CTO Bao Shiqiang said:

"Perception simulation is still in its infancy, far from the stage where the inside of the sensor needs to be modeled so finely. I think it is meaningless to disassemble the sensor and model its internals."

In addition, according to the simulation lead of a driverless company, the inability to do sensor simulation does not mean that perception simulation cannot be done at all.

For example, hardware-in-the-loop (HIL) testing can connect real sensors (both the sensors and the domain controller are real). Connecting real sensors can test not only the perception algorithm but also the function and performance of the sensor itself. In this mode the sensor is real, so the fidelity is higher than with sensor simulation.

However, because it involves supporting hardware, integration is complicated; the approach still requires a sensor model to drive the generation of environmental signals, and the cost is higher. As a result, this method is rarely used in practice.

Appendix: The two stages of autonomous driving simulation testing


(Excerpt from the article "Introduction to Virtual Simulation Test of Autonomous Driving" published by the official account "Car Road Slowly" on March 26, 2021)

Given the current state of practice, autonomous driving simulation can be roughly divided into two stages of development (of course, there may be no sharp time boundary between them).

(1) Stage 1:

The sensor's perception and recognition module is tested in the laboratory and on closed test grounds, while the decision and control module is tested in the virtual simulation environment; the simulation environment feeds a target list directly to the decision and control module.

This is mainly because current sensor modeling has many limitations that prevent efficient (or even correct) simulation. For example, the images output by a camera are relatively easy to simulate, but characteristics such as lens smudges and strong glare are harder; and for millimeter-wave radar, a high-accuracy model computes too slowly to meet the needs of simulation testing.

In the laboratory and on closed test grounds, the test environment can be fully controlled and recorded. For example, pedestrians and vehicles of different categories, positions, and speeds can be arranged, and environmental elements such as rain, snow, fog, and glare can even be simulated; the target list output by the sensor processing is then compared against the real environment, yielding evaluation results and improvement suggestions for the perception and recognition module.
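Comparing the sensor's output target list against ground truth is typically scored with metrics such as precision and recall. A minimal, illustrative scorer (greedy nearest-neighbor matching on 2-D positions; the function name and distance threshold are our own assumptions) might look like this:

```python
def evaluate_detections(ground_truth, detections, match_dist=2.0):
    """Greedily match detected (x, y) positions against ground truth,
    then compute precision and recall. Illustrative sketch only."""
    unmatched = list(ground_truth)
    tp = 0  # true positives
    for d in detections:
        best = None
        for g in unmatched:
            dist = ((d[0] - g[0]) ** 2 + (d[1] - g[1]) ** 2) ** 0.5
            if dist <= match_dist and (best is None or dist < best[0]):
                best = (dist, g)
        if best is not None:
            tp += 1
            unmatched.remove(best[1])  # each truth matched at most once
    precision = tp / len(detections) if detections else 1.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    return precision, recall
```

For instance, with two ground-truth objects and two detections of which only one lies within the match distance, both precision and recall come out to 0.5 — the kind of summary number such an evaluation would feed back to the perception team.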

The advantage is that, despite the many limitations of sensor modeling, the decision and control module can still be tested in a simulation environment, enjoying the benefits of simulation testing early.

(2) Stage 2:

Perform high-precision sensor modeling in a virtual simulation environment to test complete autonomous driving algorithms.

In this way, not only can all modules be tested in the same environment — improving test efficiency, scenario coverage, and complexity — but end-to-end testing can also be performed on AI-based algorithms.

The difficulties at this stage are, on one hand, achieving the sensor modeling that meets the test requirements discussed above, and on the other hand, that the direct-interaction interfaces between different sensor manufacturers and OEMs may be inconsistent (and in some cases may not exist at all).




Origin blog.csdn.net/jiuzhang_0402/article/details/128337549