200,000 frames, more than 880,000 instance-level lanes, 14 lane categories... a dataset built for lane recognition in complex scenes.

Recently, many companies have been making frequent moves in intelligent driving, a clear sign of how hot the field has become.

● Xiaomi published 16 patent filings in June, covering intelligent driving technologies such as object detection, lane recognition, and semantic segmentation; 7 of them have entered the substantive examination stage;

● Baidu Apollo's "completely unmanned driving" fleet drove into the streets of Guangzhou to conduct autonomous driving tests on public roads;

● Li Auto released its new L9 model, claiming L4-grade intelligent driving hardware and supporting scenarios such as automatic parking, urban intelligent driving, and remote vehicle summoning...

The 16 patent filings published by Xiaomi in June (source: Reference [1])

Table of contents

1. Autonomous driving systems become standard on new models

2. How does the autonomous driving system work?

3. A large-scale real-scene 3D lane dataset

4. OpenLane dataset details


1. Autonomous driving systems become standard on new models

The traditional auto market is a red sea. New carmakers have shaken up the entire industry with new technology and new thinking, and even phone makers want a piece of the action. High fuel prices have further discouraged consumers from buying gasoline vehicles. For traditional automakers, "difficult" barely covers it, so innovation and transformation are imperative.

At a time when the country is vigorously promoting energy conservation and emission reduction, new energy, intelligent manufacturing, and artificial intelligence, smart electric vehicles are riding this tailwind and combining the advantages of "right time, right place, right people." Many hope they will bring a new round of innovation to the auto industry, just as the Internet upended traditional industries.

Among them, intelligent driver-assistance systems, the headline feature of smart electric vehicles, have become standard on the new generation of models.

2. How does the autonomous driving system work?

According to the degree of cooperation required between human and vehicle, autonomous driving systems are divided into six levels.

Below L4, systems that require driver participation are classified as assisted driving; at L4 and above, the vehicle can operate independently of the driver, driving autonomously in specific or all scenarios, which is considered truly driverless.

Autonomous driving system levels (source: internet)
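The six-level classification above can be summarized in a short sketch; the level descriptions below are paraphrased from the SAE J3016 standard, not from this article:

```python
# SAE J3016 driving-automation levels (paraphrased summaries).
SAE_LEVELS = {
    0: "No automation: the driver performs all driving tasks",
    1: "Driver assistance: steering OR speed control is assisted",
    2: "Partial automation: steering AND speed control are assisted, driver supervises",
    3: "Conditional automation: system drives in limited conditions, driver takes over on request",
    4: "High automation: system drives itself in specific scenarios, no driver fallback needed",
    5: "Full automation: system drives itself in all scenarios",
}

def is_assisted_driving(level: int) -> bool:
    """Levels below L4 still require driver participation (assisted driving)."""
    return level < 4

print(is_assisted_driving(2))  # True: L2 is assisted driving
print(is_assisted_driving(4))  # False: L4 and above can operate without the driver
```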

At present, many domestic models have reached L2, realizing "partial automation" functions such as adaptive cruise control, lane keeping, lane-departure warning, automatic emergency braking, and automatic parking. Data show that in 2021, 22.2% of passenger cars (across all categories) were equipped with an L2-level system. [2]

3. A large-scale real-scene 3D lane dataset

Equipping new models with intelligent driver-assistance systems has become mainstream, but in practice there is still much room for improvement. For example, in many assisted-driving scenarios, lane keeping becomes unstable when going uphill or downhill, turning, or driving over bumps.

To improve the accuracy of lane recognition in complex environments, a research team from Shanghai AI Laboratory, SenseTime Research, and Shanghai Jiao Tong University analyzed the shortcomings of existing lane detection methods: traditional monocular 2D lane detection performs poorly on downstream tracking, planning, and control tasks, while existing 3D lane detection schemes oversimplify the spatial transformation between the front view and the bird's-eye view (BEV) and lack real data, so they do not hold up in complex scenes.

In response, the team proposed PersFormer (Perspective Transformer), an end-to-end monocular 3D lane detector with a Transformer-based spatial feature transformation module. Taking the camera parameters as reference, the model generates BEV features by attending to the relevant local regions of the front view. PersFormer adopts a unified 2D/3D anchor design and adds an auxiliary task to detect 2D and 3D lanes simultaneously, sharing features across tasks to enhance feature consistency. [3]

Alongside the paper, the team released OpenLane, the industry's first large-scale real-scene 3D lane dataset, featuring high-quality annotations and diverse scenes. The dataset is built on the Waymo Open Dataset, a mainstream dataset in autonomous driving. [4]

OpenLane contains 200,000 frames, over 880,000 instance-level lanes, and 14 lane categories (single white dashed line, double yellow solid line, left/right curb, etc.), as well as scene tags and closest-in-path object (CIPO) annotations, to encourage the development of 3D lane detection and other industry-relevant approaches to autonomous driving.

Comparison of OpenLane with existing benchmarks. "Avg. Length" is the average duration of a segment; "Inst. Anno." indicates whether lanes are annotated per instance (as opposed to per semantic class); "Track. Anno." indicates whether each lane carries a unique tracking ID; the numbers under "#Frames" are the annotated frames out of the total frames; "Line Category" is the number of lane categories; "Scenario" indicates scene labels.

Paper address:

https://arxiv.org/pdf/2203.11089.pdf

Project address:

https://github.com/OpenPerceptionX/OpenLane

4. OpenLane dataset details

Publisher: Shanghai Artificial Intelligence Laboratory

Data format: Video

Data size: 132.7GB

Release date: 2022

Download link:

https://opendatalab.com/OpenLane

Annotation types:

● Lane shape. Each 2D/3D lane is represented as a set of 2D/3D points.

● Lane category. Each lane has a category, such as double yellow line or curb.

● Lane attributes. Some lanes have attributes such as right or left.

● Lane tracking ID. Each lane, except curbs, has a unique tracking ID.

● Stop lines and curbs.

(For more annotation criteria, please refer to Lane Anno criteria)
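As an illustration of these fields, here is a minimal sketch of reading one OpenLane-style per-frame lane annotation. The field names (`lane_lines`, `uv`, `xyz`, `category`, `attribute`, `track_id`) follow the JSON schema described in the OpenLane repository, and the sample values are invented for illustration; verify both against the actual download:

```python
# A hypothetical per-frame annotation in OpenLane's JSON style.
sample_frame = {
    "lane_lines": [
        {
            "category": 1,    # lane-category code (e.g. a white dashed line)
            "attribute": 2,   # positional attribute (e.g. left/right)
            "track_id": 7,    # unique tracking ID (curbs are the exception)
            "uv":  [[100.0, 500.0], [110.0, 480.0]],        # 2D points as [u-list, v-list]
            "xyz": [[1.0, 1.1], [-3.0, -2.9], [0.0, 0.1]],  # 3D points as [x-list, y-list, z-list]
        }
    ]
}

def summarize_lanes(frame: dict) -> list:
    """Return (track_id, category, #2D points, #3D points) for each lane instance."""
    return [
        (lane["track_id"], lane["category"], len(lane["uv"][0]), len(lane["xyz"][0]))
        for lane in frame["lane_lines"]
    ]

print(summarize_lanes(sample_frame))  # one tuple per annotated lane instance
```

Real annotation files also carry camera intrinsics/extrinsics per frame, which are needed to project the 3D points into the image.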

CIPO/scene annotations:

● 2D bounding box. Its category indicates the importance level of the object.

● Scene tag. Describes the scene in which the frame was collected.

● Weather tag. Describes the weather in which the frame was collected.

● Time tag. Annotates the time of day at which the frame was collected.

(See CIPO Anno Standards for more annotation standards)

Dataset visualization:

OpenLane annotation example (source: Reference [5])

An overview of OpenLane samples, covering various scenes such as night, daylight, and curves (Source: Reference [5])

The dataset is live on the OpenDataLab website (https://opendatalab.com/) and is currently among the ten most-downloaded datasets on the site, proving popular with AI engineers. It can be viewed at https://opendatalab.com/OpenLane.

References

[1] https://www.qcc.com/cassets/29c65382c4c909774939722c3ab07f9f.html

[2] https://36kr.com/p/1744605830558977

[3] https://zhuanlan.zhihu.com/p/495979738

[4] Waymo LLC. "Waymo Open Dataset: An Autonomous Driving Dataset." (2019).

(Project address: https://waymo.com/open/)

[5] Chen, Li, et al. "PersFormer: 3D Lane Detection via Perspective Transformer and the OpenLane Benchmark." arXiv preprint arXiv:2203.11089 (2022).

(Paper download address: https://arxiv.org/pdf/2203.11089.pdf)

More datasets are being added, along with fuller dataset walkthroughs and an active community for Q&A. To join the official OpenDataLab discussion group, add the WeChat account opendatalab_yunying.


Origin blog.csdn.net/OpenDataLab/article/details/126461155