Audi A2D2 Autonomous Driving Dataset

Audi released the A2D2 dataset to save startups and academic researchers time and effort. It features 2D semantic segmentation, 3D point clouds, 3D bounding boxes, and vehicle bus data. The sensor suite consists of six cameras, five LiDAR sensors, and an automotive gateway for recording bus data, providing 360° camera and LiDAR coverage of the environment. The bus data describe the vehicle state and driver control inputs.


Additional Info

Field: Value
Source: Audi
Maintainer: Audi Autonomous Driving Team
Number of Instances: 390,000 frames
Package Description: The dataset features 41,280 frames with semantic segmentation in 38 categories. Each pixel in an image is given a label describing the type of object it represents, e.g. pedestrian, car, vegetation. Point cloud segmentation is produced by fusing the semantic pixel information with the LiDAR point clouds, so that each 3D point is assigned an object type label. This relies on accurate camera-LiDAR registration (see the sketch below). 3D bounding boxes are provided for 12,499 frames: LiDAR points within the field of view of the front camera are labelled with 3D bounding boxes. We annotate 14 classes relevant to driving, e.g. cars, pedestrians, and buses.
Dataset has missing values: False
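The point cloud segmentation described above works by projecting LiDAR points into the front camera image and reading off the per-pixel semantic class. Below is a minimal sketch of that fusion step in Python; the function name, array shapes, and calibration parameters (intrinsic matrix K, LiDAR-to-camera rotation R and translation t) are illustrative assumptions and do not reflect the dataset's actual file format or API.

import numpy as np

def label_lidar_points(points_lidar, seg_image, K, R, t):
    """Assign each LiDAR point the semantic label of the pixel it projects to.

    points_lidar: (N, 3) array of 3D points in the LiDAR frame (assumed layout).
    seg_image:    (H, W) array of per-pixel class IDs from the front camera.
    K:            (3, 3) camera intrinsic matrix (hypothetical calibration).
    R, t:         (3, 3) rotation and (3,) translation from LiDAR to camera frame.
    """
    # Transform points from the LiDAR frame into the camera frame.
    points_cam = points_lidar @ R.T + t

    # Keep only points in front of the camera (positive depth).
    in_front = points_cam[:, 2] > 0.0
    points_cam = points_cam[in_front]

    # Perspective projection onto the image plane.
    uvw = points_cam @ K.T
    u = (uvw[:, 0] / uvw[:, 2]).astype(int)
    v = (uvw[:, 1] / uvw[:, 2]).astype(int)

    # Keep only points whose projection falls inside the image bounds.
    h, w = seg_image.shape[:2]
    in_image = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Look up the semantic class at each projected pixel; -1 marks unlabelled points.
    labels = np.full(len(points_lidar), -1, dtype=int)
    idx = np.flatnonzero(in_front)[in_image]
    labels[idx] = seg_image[v[in_image], u[in_image]]
    return labels

The quality of the resulting per-point labels depends directly on the accuracy of the camera-LiDAR registration, since even a small calibration error shifts projected points onto neighbouring objects.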