One hundred questions about lane detection!

Editor | Autonomous Driving and AI


This article is for academic sharing only. If there is any infringement, please contact us to remove it.

Recently, many readers have shown strong interest in lane line detection and have sent us a lot of questions. To make learning easier, the Heart of Autonomous Driving has compiled them here, tentatively 100 questions, with more to be added over time. They cover the problems most frequently encountered in academia and industry, and we hope they help!

All questions and answers come from the Knowledge Planet of the Heart of Autonomous Driving (the first autonomous driving technology exchange community in China).

1. What should I do when old, half-erased lane markings on the road interfere with detecting the current lane lines?

Answer: This kind of case is hard to solve inside the model itself; it usually has to be handled in lane post-processing. You can try adding tracking, or use prior rules about how lane lines are distributed to constrain the result, as in the sketch below.
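As one hedged illustration of such a distribution constraint (the point convention, thresholds, and function names are all made up for this sketch), detections that are laterally inconsistent with every lane a tracker already follows can simply be dropped, which suppresses many stale markings:

```python
import numpy as np

def lateral_at(lane, ys):
    # lane: (N, 2) array of (x_lateral, y_forward) vehicle-frame points,
    # assumed sorted by increasing y so np.interp is valid
    return np.interp(ys, lane[:, 1], lane[:, 0])

def gate_candidates(candidates, tracked, ys=np.linspace(5, 40, 8), max_offset=0.5):
    """Keep only candidate lanes laterally consistent with some tracked lane."""
    kept = []
    for cand in candidates:
        dists = [np.abs(lateral_at(cand, ys) - lateral_at(trk, ys)).mean()
                 for trk in tracked]
        if not tracked or min(dists) < max_offset:
            kept.append(cand)
    return kept
```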

2. What should I do when the lane lines show a few frames of errors at diverge and merge points?

Answer: Do you have tracking in place? If not, add some stabilization logic for merges and diverges: for example, detect (or compute) the merge/split point, then stabilize the output around that point's position.
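One hedged sketch of the "compute the split point" step: if each lane is fitted as a polynomial x = f(y) in the vehicle frame (an assumed convention), the merge or diverge point can be estimated as the intersection of the two polynomials, and the output held steady in a window around it:

```python
import numpy as np

def split_point(coeffs_a, coeffs_b, y_min=0.0, y_max=60.0):
    """Intersection of two lane polynomials x = f(y); returns the nearest
    real root inside the visible range, or None if the lanes never meet."""
    roots = np.roots(np.polysub(coeffs_a, coeffs_b))
    real = roots[np.isreal(roots)].real
    in_range = real[(real >= y_min) & (real <= y_max)]
    return in_range.min() if in_range.size else None
```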

3. In a self-driving vehicle, does the difference between an empty and a fully loaded car change the front camera's extrinsics (mainly the pitch angle) enough to affect lane detection or lane centering? If so, should this be tackled from the calibration side or the lane-detection side?

Answer: First measure how large the effect actually is. In practice, adding a vanishing-point estimate lets you correct the pitch angle online.
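A minimal sketch of that correction (the axis convention is an assumption, so the sign may flip depending on how your image frame is defined): with straight, parallel lanes and zero roll, the pitch relative to the road follows directly from the row of the lane vanishing point.

```python
import numpy as np

def pitch_from_vanishing_point(v_y, c_y, f_y):
    """Pitch (radians) of the camera relative to the road plane.
    v_y: vanishing point row; c_y: principal point row;
    f_y: vertical focal length in pixels."""
    return np.arctan2(v_y - c_y, f_y)

# e.g. a loaded trunk tilting the car moves the vanishing point, and the
# resulting delta pitch can be fed back into the IPM extrinsics
```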

4. In lane line detection, some systems show a center line running down the middle of the ego lane. How is that done?

Answer: Feel free to post a picture. As I understand it, that center line is derived from the vehicle position and the main lane lines on both sides, and can be fitted from them.
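A minimal sketch of that fitting idea (the quadratic model and the vehicle-frame point convention are assumptions): fit each boundary, then take the pointwise mean.

```python
import numpy as np

def center_line(left_pts, right_pts, ys=np.linspace(5, 50, 20)):
    """Ego-lane centre line as the pointwise mean of the fitted left and
    right boundaries; *_pts are (N, 2) arrays of (x_lateral, y_forward)."""
    pl = np.polyfit(left_pts[:, 1], left_pts[:, 0], 2)
    pr = np.polyfit(right_pts[:, 1], right_pts[:, 0], 2)
    xs = 0.5 * (np.polyval(pl, ys) + np.polyval(pr, ys))
    return np.stack([xs, ys], axis=1)
```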

5. The Curvelanes lane dataset has no lane-line categories, but my own dataset does. How can I use the two datasets together?

Answer: One option is to train a classification model on your own category labels, use it to generate category pseudo-labels for Curvelanes, and then combine the two datasets with semi-supervised strategies. Alternatively, ignore the categories at first: train a model without category output on both datasets, then fine-tune a model with categories on your own dataset.
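A hedged sketch of the pseudo-labelling route (the classifier `clf`, the per-lane crop tensor, and the confidence threshold are all assumptions for illustration):

```python
import torch

@torch.no_grad()
def pseudo_label(clf, lane_crops, threshold=0.9):
    """Assign category pseudo-labels to unlabelled Curvelanes lanes,
    keeping only confident predictions for the semi-supervised mix.
    lane_crops: (N, C, H, W) tensor of per-lane image crops."""
    probs = torch.softmax(clf(lane_crops), dim=1)
    conf, labels = probs.max(dim=1)
    keep = conf > threshold
    return labels[keep], keep
```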

6. Has anyone done lane line tracking? I can't find relevant papers. What methods are generally used now?

Answer: If the localization is accurate, the lane lines from multiple frames should overlap almost perfectly in the world coordinate system. Using this principle, you can match the current frame against previous frames to get good tracking, then apply filtering, track management, and matched prediction.
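A hedged sketch of that matching step, assuming SE(2) ego odometry and lanes represented as vehicle-frame point arrays (a greedy nearest-lane association; a real tracker would add gating and track lifecycle management):

```python
import numpy as np

def to_current_frame(prev_pts, T_prev_to_cur):
    """Motion-compensate (N, 2) lane points with a 3x3 homogeneous SE(2)
    transform obtained from ego odometry."""
    homog = np.hstack([prev_pts, np.ones((len(prev_pts), 1))])
    return (T_prev_to_cur @ homog.T).T[:, :2]

def mean_dist(a, b):
    # mean nearest-neighbour distance from lane a's points to lane b's
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean()

def associate(cur_lanes, prev_lanes, T, max_dist=0.5):
    """Each current lane inherits the track id of the closest
    motion-compensated previous lane, if close enough."""
    warped = [to_current_frame(p, T) for p in prev_lanes]
    if not warped:
        return []
    pairs = []
    for i, cur in enumerate(cur_lanes):
        dists = [mean_dist(cur, w) for w in warped]
        j = int(np.argmin(dists))
        if dists[j] < max_dist:
            pairs.append((i, j))
    return pairs
```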

7. How does lane line semantic segmentation handle occlusion?

Answer: Consider fusing multiple cameras with long and short focal lengths; and add data, for example by synthesizing occluded samples with GANs or other methods.
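As a much simpler hand-rolled stand-in for GAN-synthesized occluders, plain random erasing over the training images already helps (a hedged sketch; the rectangle counts and sizes are made-up knobs):

```python
import numpy as np

def random_occlusion(img, n=3, size_range=(30, 120), rng=None):
    """Paste random grey rectangles to simulate vehicles covering lane
    markings; segmentation labels are left untouched so the network
    learns to infer lanes under the occluders."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    out = img.copy()
    for _ in range(n):
        rh, rw = rng.integers(size_range[0], size_range[1], size=2)
        y = rng.integers(0, max(1, h - rh))
        x = rng.integers(0, max(1, w - rw))
        out[y:y + rh, x:x + rw] = rng.integers(0, 256)
    return out
```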

8. Hello, I detect lane lines in the image and extract their pixels. Is there a good way to convert those pixels into the lidar coordinate system (ultimately I want map coordinates)? It is a monocular camera with no depth information. I used the camera-lidar calibration parameters, but the converted results are not quite right. Is there a solution?

Answer: Because projecting a monocular image back into 3D space is a one-to-many problem, you need the lidar: scan the environment containing the lane, project the lidar points onto the image plane to establish a one-to-one correspondence, and then for each lane line pixel look up its corresponding laser point, which gives the lane line position in 3D space. Converting pixel coordinates to 3D coordinates fundamentally requires depth, so without depth information a direct conversion cannot work. But if you have the camera intrinsics and extrinsics, you can project 2D pixels onto the ground plane via IPM. The recovered depth is then reasonably accurate, at the cost of losing ground height information.
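A minimal sketch of the IPM back-projection, assuming a flat road at z = 0 in the world frame and extrinsics in the x_cam = R·x_world + t convention (both assumptions; adapt to your calibration conventions):

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t):
    """Back-project one pixel onto the ground plane z = 0.
    K: 3x3 intrinsics; R, t: world-to-camera extrinsics."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_w = R.T @ ray_cam           # viewing-ray direction in the world frame
    cam_w = -R.T @ t                # camera centre in the world frame
    s = -cam_w[2] / ray_w[2]        # scale that lands the ray on z = 0
    return cam_w + s * ray_w        # (x, y, 0) ground point
```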

9. What networks work well for lane detection on embedded devices with low compute?

Answer: Take a look at the Ultra Fast series (Ultra-Fast-Lane-Detection).

10. Is there a good non-BEV approach to surround-view lane line detection?

Answer: Lane lines mostly matter in the front view, so methods are basically monocular. For local map construction, a BEV approach should be the first choice; otherwise you have to stitch images, and the result tends to be very inelegant. If you must do it per-camera, monocular lane detection comes in several flavors: anchor-based methods are convenient to implement, while segmentation-based methods may take more time and need more post-processing. Try the specific methods and see.

11. How are the fitting and tracking of lane detection results generally done? How do you fit multiple curves when there are multiple lanes?

Answer: To obtain lane line instances from detection: anchor-based methods output instances directly; there are also binary segmentation followed by clustering, as well as key-point clustering and special decoding algorithms. For tracking, I don't know of many works; I'd suggest searching for related papers.
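A minimal sketch of the binary-segmentation-then-clustering route (DBSCAN on raw pixels is the crudest variant; real pipelines often cluster a learned embedding so that adjacent lanes do not merge):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def lanes_from_mask(binary_mask, eps=5.0, min_samples=30):
    """Cluster foreground pixels of a binary lane mask into instances and
    fit each with a 2nd-order polynomial x = f(y) in image coordinates."""
    ys, xs = np.nonzero(binary_mask)
    pts = np.stack([xs, ys], axis=1)
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(pts)
    lanes = []
    for k in set(labels) - {-1}:          # -1 marks noise points
        cluster = pts[labels == k]
        lanes.append(np.polyfit(cluster[:, 1], cluster[:, 0], 2))
    return lanes
```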

12. Is there a model that performs better on curved lanes and lane changes? I've used UFLD and UFLDv2 and don't feel this problem has improved much.

Answer: There is an earlier open-source project and paper you can refer to: Rethinking Efficient Lane Detection via Curve Modeling.

13. How do you compensate when lane lines disappear in extreme weather? Any good solutions?

Answer: On the visual perception side, if the lane lines do not disappear for many consecutive frames, consider temporal fusion, which can be done both in the model and in post-processing. If they are missing for consecutive frames, it should be treated as a failsafe case at the functional level. If there is a lidar, a good lidar can in theory extract lane lines directly from the point cloud, since lane markings and the road surface return different intensities; the weather impact is smaller than relying on vision.
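A hedged post-processing sketch of the temporal-fusion idea: exponentially smooth the lane coefficients and coast through short dropouts, handing over to the failsafe once too many frames are missed (alpha and max_miss are made-up tuning knobs):

```python
class LaneSmoother:
    """Smooth lane polynomial coefficients across frames and bridge
    short detection dropouts before declaring the lane lost."""
    def __init__(self, alpha=0.3, max_miss=5):
        self.alpha, self.max_miss = alpha, max_miss
        self.state, self.miss = None, 0

    def update(self, coeffs):
        if coeffs is None:                    # detector returned nothing
            self.miss += 1
            return self.state if self.miss <= self.max_miss else None
        self.miss = 0
        if self.state is None:
            self.state = list(coeffs)
        else:
            self.state = [(1 - self.alpha) * s + self.alpha * c
                          for s, c in zip(self.state, coeffs)]
        return self.state
```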

14. For lane segmentation and road-marking detection (left-turn, right-turn, straight arrows, etc.), are there suitable data augmentation methods?

Answer: For road markings you can apply perturbations within the target box, such as horizontal and vertical flips, rotations, and deformation operations (affine transforms, etc.). Any augmentation is fine as long as the result still looks plausible.
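A minimal sketch of such an in-box perturbation, here a small random rotation applied jointly to the image and its annotation points (note that a horizontal flip must also swap direction-sensitive class ids, e.g. left-arrow and right-arrow):

```python
import cv2
import numpy as np

def random_rotate(img, pts, max_deg=5.0, rng=None):
    """Rotate the image and its annotated marking corner points (N, 2)
    about the image centre by a small random angle."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2),
                                rng.uniform(-max_deg, max_deg), 1.0)
    warped = cv2.warpAffine(img, M, (w, h))
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    return warped, (M @ homog.T).T       # M is 2x3, so result is (N, 2)
```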

15. What is a good (engineering) solution for fusing camera lane lines with high-definition-map lane lines? The visual lane detection results are already available.

Answer: 1. The visual result gives positions relative to the vehicle, while the HD map gives absolute positions. After matching the two sets of lane lines, you should be able to fuse them with a Kalman filter, combining the localization information to get the final result; this idea is worth trying. 2. As I recall, Apollo has lane line perception and HD-map fusion modules you can also refer to.
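A toy version of idea 1, reducing the fusion to a scalar Kalman filter over the lateral offset to the matched lane (all noise values are illustrative assumptions; a real system would fuse full lane geometry):

```python
class OffsetFuser:
    """Treat the vision-measured offset and the HD-map-implied offset as
    two measurements of the same lateral-offset state."""
    def __init__(self, q=0.01, r_vision=0.05, r_map=0.15):
        self.q = q                                   # process noise
        self.r = {"vision": r_vision, "map": r_map}  # measurement noise
        self.x, self.p = 0.0, 1.0

    def predict(self):
        self.p += self.q             # constant-offset motion model

    def correct(self, z, source):
        k = self.p / (self.p + self.r[source])
        self.x += k * (z - self.x)
        self.p *= (1 - k)
        return self.x
```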
