PriorLane: A Pure Transformer Lane Detection Method

Original link: [PriorLane: A Prior Knowledge Enhanced Lane Detection Approach Based on Transformer (arXiv:2209.06994)](https://arxiv.org/abs/2209.06994)

This paper was accepted at ICRA 2023 and is presented as the first pure transformer-based lane detection method.

The main contributions of the paper are the following:

  1. A new transformer-based paradigm for lane detection.
  2. A new lane detection scene dataset, Zjlab.
  3. A Knowledge Embedding Alignment (KEA) module, which exploits environment-related prior knowledge and achieves good results.

The most eye-catching aspect of this method is that detection is not limited to lane lines in ordinary scenes: it can also recognize horizontal lane markings and other similar special road markings.



1. Overall network framework

The backbone is the MiT (Mix Transformer) encoder from SegFormer, in its largest variant (MiT-B5). Features are extracted by stacking transformer blocks, and the multi-scale feature maps are also used later in the decoder. Another point worth attention is the use of prior knowledge (Figure 2), aimed mainly at the authors' own dataset: given the map resolution, the sensing range, and the vehicle's location, a sub-image is cropped from a large global map, and this sub-image is what the paper calls "prior knowledge". The encoder input is then obtained by adding the image features and the embedded prior information; a rough sketch of this crop-and-fuse step follows Figure 2.

Figure 2: Prior knowledge image
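
To make this concrete, here is a minimal PyTorch sketch of the crop-and-fuse step as I read it. The function names, tensor shapes, 1x1-convolution embedding, and simple additive fusion are my assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def crop_prior(global_map, center_px, range_m, res_m_per_px):
    """Crop the local 'prior knowledge' sub-image from a large rasterized map.

    global_map: (C, H, W) tensor; center_px: ego position (x, y) in map pixels;
    range_m / res_m_per_px set the crop size. All names are illustrative.
    """
    half = int(range_m / res_m_per_px) // 2
    x, y = center_px
    return global_map[:, y - half:y + half, x - half:x + half]

class PriorFusion(nn.Module):
    """Embed the prior sub-image and add it to the image features (illustrative)."""
    def __init__(self, prior_channels, feat_channels):
        super().__init__()
        # 1x1 conv projects the prior raster into the feature dimension
        self.embed = nn.Conv2d(prior_channels, feat_channels, kernel_size=1)

    def forward(self, feat, prior):
        # feat: (N, C_f, h, w); prior: (N, C_p, H, W)
        prior = F.interpolate(prior, size=feat.shape[-2:], mode="bilinear",
                              align_corners=False)
        return feat + self.embed(prior)  # additive fusion, as described above
```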

To evaluate on other lane datasets such as CULane and TuSimple, the authors use MiT-lane, a slightly modified version of this network. Although the adjustment is relatively simple, it achieves solid results, supporting the effectiveness of the transformer-based approach.
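For a feel of what a SegFormer/MiT lane-segmentation baseline looks like in code, here is a sketch using the Hugging Face transformers implementation of SegFormer. The checkpoint choice, label count, and per-pixel segmentation framing are assumptions on my part, not the MiT-lane code.

```python
import torch
from transformers import SegformerForSemanticSegmentation

# Assumption: lane detection framed as per-pixel segmentation on top of an
# ImageNet-pretrained MiT-B5 encoder; a fresh decode head is added automatically.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b5",   # SegFormer's largest Mix Transformer encoder
    num_labels=5,      # e.g. background + 4 lanes (CULane-style); illustrative
)

images = torch.randn(2, 3, 512, 512)          # dummy batch of road images
labels = torch.randint(0, 5, (2, 512, 512))   # dummy per-pixel lane labels
out = model(pixel_values=images, labels=labels)
print(out.loss)            # cross-entropy loss, ready for fine-tuning
print(out.logits.shape)    # (2, 5, 128, 128): predictions at 1/4 resolution
```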

What deserves more attention is how this prior knowledge is embedded. The authors propose a new module, Knowledge Embedding Alignment (KEA), which spatially aligns the prior knowledge with the feature maps before fusion, since the camera view and the map crop are not perfectly registered.
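Since this post does not reproduce the KEA internals, the following is only a minimal sketch under one plausible reading: alignment modeled as a learned affine warp of the knowledge embedding (spatial-transformer style), predicted from both embeddings. The paper's actual mechanism may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KEASketch(nn.Module):
    """Illustrative stand-in for Knowledge Embedding Alignment (KEA).

    Assumption: a small network predicts a 2D affine transform from the
    concatenated embeddings, and the knowledge embedding is warped before
    additive fusion. Not the paper's exact design.
    """
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, 6),  # parameters of a 2x3 affine matrix
        )
        # start from the identity transform so training begins un-warped
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, feat, know):
        # feat, know: (N, C, H, W) image features and knowledge embedding
        theta = self.loc(torch.cat([feat, know], dim=1)).view(-1, 2, 3)
        grid = F.affine_grid(theta, know.size(), align_corners=False)
        aligned = F.grid_sample(know, grid, align_corners=False)
        return feat + aligned  # fuse only after spatial alignment
```

Initializing the affine head to the identity means the module starts out as plain additive fusion and only learns a warp when misalignment actually hurts.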

2. Experimental results


However, the Zjlab dataset has not been made public, so for now I can only read about this method rather than experiment with it.

Source: blog.csdn.net/zhaodongdz/article/details/131593017