123D: Unifying Multi-Modal Autonomous Driving Data at Scale
Daniel Dauner, Valentin Charraut, Bastian Berle, Tianyu Li, Long Nguyen + 8 more
TLDR
123D is an open-source framework that unifies diverse multi-modal autonomous driving datasets through a single API, enabling scalable data access.
Key contributions
- Unifies multi-modal autonomous driving data through a single, consistent API.
- Manages data synchronization via independent timestamped event streams.
- Consolidates 8 real-world and 1 synthetic dataset (3,300 hrs, 90,000 km).
- Enables cross-dataset 3D object detection transfer and RL for planning.
Why it matters
Autonomous driving data is fragmented across incompatible formats and annotation conventions, which hinders generalization. 123D addresses this with a unified framework, allowing researchers to leverage diverse datasets for model training and cross-dataset applications, accelerating progress in autonomous driving research.
Original Abstract
The pursuit of autonomous driving has produced one of the richest sensor data collections in all of robotics. However, its scale and diversity remain largely untapped. Each dataset adopts different 2D and 3D modalities, such as cameras, lidar, ego states, annotations, traffic lights, and HD maps, with different rates and synchronization schemes. They come in fragmented formats requiring complex dependencies that cannot natively coexist in the same development environment. Further, major inconsistencies in annotation conventions prevent training or measuring generalization across multiple datasets. We present 123D, an open-source framework that unifies such multi-modal driving data through a single API. To handle synchronization, we store each modality as an independent timestamped event stream with no prescribed rate, enabling synchronous or asynchronous access across arbitrary datasets. Using 123D, we consolidate eight real-world driving datasets spanning 3,300 hours and 90,000 kilometers, together with a synthetic dataset with configurable collection scripts, and provide tools for data analysis and visualization. We conduct a systematic study comparing annotation statistics and assessing each dataset's pose and calibration accuracy. Further, we showcase two applications 123D enables: cross-dataset 3D object detection transfer and reinforcement learning for planning, and offer recommendations for future directions. Code and documentation are available at https://github.com/kesai-labs/py123d.