OpenLane dataset analysis

Introduction to OpenLane Dataset

  • Paper
  • GitHub repository
  • Dataset download link

Official introduction: OpenLane is the first real-world and, so far, the largest 3D lane dataset. The dataset collects valuable content from publicly available perception datasets, providing lane and closest-in-path object (CIPO) annotations for 1,000 road segments. In short, OpenLane has 200,000 frames and more than 880,000 well-labeled lanes. The OpenLane dataset has been publicly released to help the research community advance 3D perception and autonomous-driving technologies. See the paper for details.

The OpenLane dataset is built on mainstream datasets in the autonomous-driving field. In version 1.0, annotations were released on the Waymo Open Dataset; annotations on nuScenes will follow in a future update. The OpenLane dataset focuses on lane detection and the CIPO. All lanes in each frame are annotated, including lanes in the opposite direction when there is no curb between them. In addition to the lane-detection task, the dataset also annotates: (a) scene labels, such as weather and location; (b) the CIPO, defined as the object most relevant to subsequent modules such as planning and control, which makes it very practical. An introduction to the coordinate systems can be found here.


OpenLane contains 200,000 frames, over 880,000 instance-level lanes, and 14 lane categories (single white dashed line, double yellow solid line, left/right curb, etc.), as well as scene labels and closest-in-path object (CIPO) annotations, to encourage the development of 3D lane detection and other industry-relevant approaches to autonomous driving.
The table below compares OpenLane with other benchmarks:

[Table: comparison of OpenLane with other lane-detection benchmarks]

Lane Annotation

We annotate lane lines in the following format.

  • Lane shape. Each 2D/3D lane is represented as a set of 2D/3D points.
  • Lane category. Each lane has a category, such as double yellow lines or curbs.
  • Lane attributes. Each lane has a left/right attribute relative to the ego lane (e.g. left-left, left, right, right-right).
  • Lane tracking ID. Each lane has a unique id except for curbs.
  • Stop lines and curbs are also annotated.

In addition to the fields in the official description above, the label files also include the pose of the ego vehicle.

For more annotation guidelines, please refer to the Lane Anno Criterion.

The lane-annotation .json file contains the camera intrinsic and extrinsic parameters and, for each lane: the lane category, the visibility of each point, the 2D pixel coordinates (u, v), the 3D camera-coordinate positions (x, y, z), the left/right attribute of the lane, and the lane's tracking ID. It also records the relative path of the image file. If there are k lane lines in a frame, the `lane_lines` list holds k corresponding entries.

{
    "intrinsic":                            <float> [3, 3] -- camera intrinsic matrix
    "extrinsic":                            <float> [4, 4] -- camera extrinsic matrix
    "lane_lines": [                         (k lanes in `lane_lines` list)
        {
            "category":                     <int> -- lane category
                                                        0: 'unkown',
                                                        1: 'white-dash',
                                                        2: 'white-solid',
                                                        3: 'double-white-dash',
                                                        4: 'double-white-solid',
                                                        5: 'white-ldash-rsolid',
                                                        6: 'white-lsolid-rdash',
                                                        7: 'yellow-dash',
                                                        8: 'yellow-solid',
                                                        9: 'double-yellow-dash',
                                                        10: 'double-yellow-solid',
                                                        11: 'yellow-ldash-rsolid',
                                                        12: 'yellow-lsolid-rdash',
                                                        20: 'left-curbside',
                                                        21: 'right-curbside'
            "visibility":                   <float> [n, ] -- visibility of each point
            "uv":[                          <float> [2, n] -- 2d lane points under image coordinate
                [u1,u2,u3...],
                [v1,v2,v3...]
            ],
            "xyz":[                         <float> [3, n] -- 3d lane points under camera coordinate
                [x1,x2,x3...],
                [y1,y2,y3...],
                [z1,z2,z3...]
            ],
            "attribute":                    <int> -- left-right attribute of the lane
                                                        1: left-left
                                                        2: left
                                                        3: right
                                                        4: right-right
            "track_id":                     <int> -- lane tracking id
        },
        ...
    ],
    "file_path":                            <str> -- image path
}
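As a concrete illustration of the schema above, the sketch below builds one hypothetical frame and projects its 3D camera-coordinate points back to pixel coordinates with the intrinsic matrix. This is a minimal sketch assuming a standard pinhole model with z pointing forward (OpenLane's exact axis convention is given in its coordinate-system notes); all numeric values and the file path are made up, not taken from the dataset.

```python
import numpy as np

# A hypothetical frame following the lane-annotation schema above.
# All values (intrinsics, points, paths) are illustrative, not real data.
frame = {
    "intrinsic": [[2000.0, 0.0, 960.0],
                  [0.0, 2000.0, 640.0],
                  [0.0, 0.0, 1.0]],
    "extrinsic": np.eye(4).tolist(),
    "lane_lines": [{
        "category": 2,                              # 2: 'white-solid'
        "visibility": [1.0, 1.0],
        "uv": [[1160.0, 1143.3], [740.0, 740.0]],
        "xyz": [[1.0, 1.1],                         # x row
                [0.5, 0.6],                         # y row
                [10.0, 12.0]],                      # z row (depth)
        "attribute": 2,                             # 2: 'left'
        "track_id": 7,
    }],
    "file_path": "training/segment-xxx/000000.jpg", # hypothetical path
}

K = np.asarray(frame["intrinsic"])
for lane in frame["lane_lines"]:
    xyz = np.asarray(lane["xyz"])                   # shape (3, n)
    # Pinhole projection: multiply by K, then divide by the depth z.
    p = K @ xyz
    uv = p[:2] / p[2]                               # shape (2, n): (u, v) pixels
```

With the made-up values above, the first point projects to u = 2000 * 1.0 / 10.0 + 960 = 1160, matching the stored `uv` row.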

CIPO/Scenes Annotation

We annotate the CIPO and scene labels in the following format.

  • 2D bounding box, with an importance-level category for each object.
  • Scene label. It describes the circumstances under which the frame was collected.
  • Weather label. It describes the weather when the frame was collected.
  • Hour label. It marks the time of day when the frame was collected.

For more annotation guidelines, please refer to the CIPO Anno Criterion.

{
    "results": [                                (k objects in `results` list)
        {
            "width":                            <float> -- width of cipo bbox
            "height":                           <float> -- height of cipo bbox
            "x":                                <float> -- x axis of cipo bbox left-top corner
            "y":                                <float> -- y axis of cipo bbox left-top corner
            "id":                               <str> -- importance level of cipo
            "trackid":                          <str> -- tracking id of cipo, unique in the whole segment
            "type":                             <int> -- type of cipo
                                                            0: TYPE_UNKNOWN
                                                            1: TYPE_VEHICLE
                                                            2: TYPE_PEDESTRIAN
                                                            3: TYPE_SIGN
                                                            4: TYPE_CYCLIST
        },
        ...                                
    ],
    "raw_file_path":                            <str> -- image path
}
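To make the CIPO schema concrete, the sketch below parses one hypothetical frame: it converts each box from (x, y, width, height) to corner form and picks the object with the lowest importance level, assuming a lower `id` level means higher importance (level 1 being the CIPO itself). All values are invented for illustration.

```python
# A hypothetical CIPO frame following the schema above; all values are made up.
cipo = {
    "results": [
        {"width": 120.0, "height": 80.0, "x": 640.0, "y": 400.0,
         "id": "1", "trackid": "a1", "type": 1},        # 1: TYPE_VEHICLE
        {"width": 40.0, "height": 90.0, "x": 900.0, "y": 380.0,
         "id": "2", "trackid": "b2", "type": 2},        # 2: TYPE_PEDESTRIAN
    ],
    "raw_file_path": "training/segment-xxx/000000.jpg", # hypothetical path
}

TYPE_NAMES = {0: "TYPE_UNKNOWN", 1: "TYPE_VEHICLE", 2: "TYPE_PEDESTRIAN",
              3: "TYPE_SIGN", 4: "TYPE_CYCLIST"}

# (x, y) is the top-left corner, so the box corners are:
corners = [(o["x"], o["y"], o["x"] + o["width"], o["y"] + o["height"])
           for o in cipo["results"]]

# Assumption: a lower `id` level denotes a more important object.
most_important = min(cipo["results"], key=lambda o: int(o["id"]))
```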

The following is the data format of scene tag annotations.

{
    "segment-xxx":                              <str> -- segment id
    {
        "scene":                                <str> 
        "weather":                              <str>
        "time":                                 <str>
    }
    ...                                         (1000 segments)
}
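Since the scene-tag file maps segment ids to small tag dictionaries, aggregating over it is a one-liner. The sketch below counts segments per weather condition; the segment ids and label strings are placeholders, not the dataset's actual vocabulary, and a real file covers 1,000 segments.

```python
from collections import Counter

# Hypothetical scene-tag entries following the schema above;
# segment ids and label values are placeholders, not real data.
scene_tags = {
    "segment-000": {"scene": "residential", "weather": "clear", "time": "daytime"},
    "segment-001": {"scene": "urban",       "weather": "rain",  "time": "night"},
    "segment-002": {"scene": "urban",       "weather": "clear", "time": "daytime"},
}

# Count how many segments were collected under each weather condition.
weather_counts = Counter(tags["weather"] for tags in scene_tags.values())
```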


Origin blog.csdn.net/qq_37214693/article/details/130585357