What is the way forward for multi-sensor perception? A full-stack tutorial on millimeter-wave radar and vision fusion!

Why sensor fusion?

Autonomous driving is one of the hottest topics in today's technology industry. As the technology matures, autonomous vehicles will gradually become mainstream, and multi-sensor fusion is key to making that happen. A single sensor has inherent limitations: in bad weather or complex environments, its performance degrades significantly. Multi-sensor fusion technology emerged to overcome these limitations, integrating and processing data from multiple sensors to produce a more comprehensive and accurate perception result.

In recent years, multi-sensor fusion schemes have drawn growing attention at top conferences, and the introduction of BEV (bird's-eye-view) perception has given fusion algorithms much greater freedom, naturally making fusion one of the hot spots in perception research. In industry, multi-sensor fusion is already widely deployed: companies such as Uber and Waymo have adopted it in their autonomous driving stacks. As the technology develops further, multi-sensor fusion will be applied even more broadly and has become an important driving force behind autonomous driving.


Fusing millimeter-wave radar with cameras is one of the most popular sensor fusion approaches, and currently the most widely deployed one. Cameras are easily affected by poor lighting and harsh weather, and camera-only 3D object detection cannot provide accurate distance information. Millimeter-wave radar provides accurate distance and velocity measurements and operates around the clock, but its sparsity and noise prevent it from serving as the primary source of information. In short, the camera offers good resolution and classification ability, while the radar offers accurate range and velocity measurement and robustness to severe weather. Fusing the two exploits their complementary strengths and improves overall perception performance.
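
To make this complementarity concrete, a typical first step in radar-camera fusion is projecting radar detections into the image plane so they can be related to camera detections. The following is a minimal illustrative sketch in Python, assuming a calibrated radar-to-camera extrinsic matrix `T_cam_radar` and camera intrinsics `K` are already available; the function name and the near-plane threshold are our own, not from any particular codebase:

```python
import numpy as np

def project_radar_to_image(radar_points, T_cam_radar, K):
    """Project 3D radar detections (N x 3, radar frame) to pixel coordinates.

    T_cam_radar: 4x4 extrinsic transform from the radar frame to the camera frame.
    K:           3x3 camera intrinsic matrix.
    Returns (N x 2) pixel coordinates and a boolean mask of points in front
    of the camera (only masked-in rows are meaningful).
    """
    n = radar_points.shape[0]
    pts_h = np.hstack([radar_points, np.ones((n, 1))])   # homogeneous (N, 4)
    pts_cam = (T_cam_radar @ pts_h.T)[:3]                # camera frame (3, N)
    in_front = pts_cam[2] > 0.1                          # drop points behind the camera
    uv_h = K @ pts_cam                                   # perspective projection
    uv = (uv_h[:2] / uv_h[2]).T                          # normalize to pixels (N, 2)
    return uv, in_front
```

Once radar points land in pixel coordinates, their range and Doppler velocity can be attached to nearby camera detections, which is exactly the complementarity described above.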

Radar-camera fusion can be done in several ways. On the one hand, the pre-fusion (early, feature-level) methods popular in academia have gained new opportunities with the introduction of BEV perception and have very broad research prospects, but without an understanding of radar principles and traditional detection methods, further optimization is difficult. On the other hand, the post-fusion (late fusion) methods commonly used in industry have few learning resources, differ substantially from the pre-fusion literature, and are not easy to get started with; a minimal sketch of the late-fusion idea follows below. A course that combines both pre- and post-fusion content is therefore invaluable for getting started with multi-sensor fusion: it helps beginners ramp up quickly and avoid detours. After an in-depth survey of common millimeter-wave and vision fusion methods, the Heart of Autonomous Driving team has prepared a tutorial covering data processing, clustering, tracking and matching, deep-learning point cloud methods, and 2D/3D fusion, drawing on both industry practice and academic research.
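
As a taste of the post-fusion style mentioned above, the sketch below associates radar detections (already projected to pixels, as in the previous sketch) with camera 2D boxes by optimal one-to-one matching on pixel distance. This is only an illustration under assumed inputs, not the course's reference implementation; `max_dist` is an arbitrary illustrative gate:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_radar_to_boxes(radar_uv, boxes, max_dist=50.0):
    """Match projected radar points to camera 2D boxes (late fusion).

    radar_uv: (M, 2) radar detections in pixel coordinates.
    boxes:    (N, 4) camera detections as (x1, y1, x2, y2).
    Returns (radar_idx, box_idx) pairs whose distance to the box center
    is under max_dist pixels, via optimal one-to-one (Hungarian) matching.
    """
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)  # (N, 2)
    # Pairwise distance matrix between radar points and box centers: (M, N).
    cost = np.linalg.norm(radar_uv[:, None, :] - centers[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_dist]
```

Matched pairs give each camera box a radar range and velocity; in a production post-fusion stack this association is usually done per tracked object with a filter over time rather than per frame.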

Scan the QR code to get a coupon and join the course!


Course Outline


Main Lecturer

Naca, a member of the Heart of Autonomous Driving team, focuses on object detection and scene perception with millimeter-wave radar and camera fusion.

Who This Course Is For

  1. Undergraduate and graduate students researching computer vision or autonomous driving perception;

  2. Algorithm engineers working on 2D/3D perception for autonomous driving;

  3. Anyone looking to move into multi-sensor fusion detection.

Prerequisites

  1. Working knowledge of Python, PyTorch, and C++;

  2. Basic familiarity with BEV perception algorithms and with traditional filtering and optimization methods;

  3. A computer with a dedicated GPU (at least 10 GB of VRAM).

What You Will Gain

  1. An in-depth understanding of traditional millimeter-wave radar algorithms and data processing;

  2. Proficiency in common deep-learning methods for millimeter-wave radar;

  3. A deep understanding of 2D and 3D radar-vision fusion solutions, including their design methods and underlying ideas;

  4. An in-depth understanding of multi-sensor fusion tasks, mastery of common industry design methods, and the ability to build your own projects.

Start Date

The course officially starts on June 5, 2023, and runs for two months. All sessions will be recorded and available for replay.

Contact Us


Scan the QR code to add the Autobot Assistant on WeChat and ask about the course!

(WeChat: AIDriver004)

