ArXiv TLDR

OmniLiDAR: A Unified Diffusion Framework for Multi-Domain 3D LiDAR Generation

arXiv: 2605.13815

Youquan Liu, Weidong Yang, Ao Liang, Xiang Xu, Lingdong Kong + 7 more

cs.CV, cs.RO

TLDR

OmniLiDAR is a unified, text-conditioned diffusion framework that generates 3D LiDAR scans across eight diverse domains from a single model, removing the need for separate per-domain generators.

Key contributions

  • Unified text-conditioned diffusion framework that generates 3D LiDAR scans in a shared range-image representation across 8 diverse domains.
  • Introduces a Cross-Domain Training Strategy (CDTS) that mixes domains within each mini-batch and uses conditioning to steer generation (a rough sketch follows this list).
  • Proposes Cross-Domain Feature Modeling (CDFM), which captures directional dependencies along the azimuth and elevation axes to reflect the anisotropic scanning structure of range images (sketched after the abstract below).
  • Uses Domain-Adaptive Feature Scaling (DAFS), a lightweight modulation that accounts for structured domain-dependent feature shifts during denoising.
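
To make the training-side ideas concrete, below is a minimal PyTorch sketch of how mixed-domain mini-batch sampling (CDTS) and a lightweight domain-conditioned scaling of denoiser features (DAFS) could be wired up. It is only an illustration based on the summary above; all names, shapes, and the FiLM-style scale/shift parameterization are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class DomainAdaptiveFeatureScaling(nn.Module):
        """Hypothetical DAFS block: per-channel scale/shift derived from a domain id."""
        def __init__(self, num_domains: int, channels: int):
            super().__init__()
            self.embed = nn.Embedding(num_domains, channels * 2)
            nn.init.zeros_(self.embed.weight)  # start as an identity modulation

        def forward(self, feats: torch.Tensor, domain_ids: torch.Tensor) -> torch.Tensor:
            # feats: (B, C, H, W) range-image features; domain_ids: (B,) integer labels
            scale, shift = self.embed(domain_ids).chunk(2, dim=-1)
            return feats * (1 + scale[:, :, None, None]) + shift[:, :, None, None]

    def sample_mixed_domain_batch(domain_buffers, batch_size):
        """Hypothetical CDTS-style sampling: each example's domain is drawn independently,
        so one mini-batch mixes scans from several domains instead of a single dataset."""
        dom = torch.randint(len(domain_buffers), (batch_size,))
        scans = torch.stack([
            domain_buffers[d][torch.randint(len(domain_buffers[d]), (1,)).item()]
            for d in dom.tolist()
        ])
        return scans, dom

    # Toy usage: 8 domains of 64 x 1024 single-channel range images.
    buffers = [torch.randn(16, 1, 64, 1024) for _ in range(8)]
    x, dom = sample_mixed_domain_batch(buffers, batch_size=4)
    feats = nn.Conv2d(1, 32, 3, padding=1)(x)
    dafs = DomainAdaptiveFeatureScaling(num_domains=8, channels=32)
    print(dafs(feats, dom).shape)  # torch.Size([4, 32, 64, 1024])

In an actual denoiser, the modulation would presumably also be driven by the text condition rather than a bare domain id, and applied at several feature resolutions.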

Why it matters

This paper addresses the challenge of generating LiDAR data under diverse sensing conditions, which typically requires a separate model per domain. OmniLiDAR provides a single unified model, enabling scalable simulation and synthetic data creation across heterogeneous domains. Used for generative data augmentation, it yields consistent gains on downstream tasks such as LiDAR semantic segmentation and 3D object detection, particularly in limited-label regimes.

Original Abstract

LiDAR scene generation is increasingly important for scalable simulation and synthetic data creation, especially under diverse sensing conditions that are costly to capture at scale. Typically, diffusion-based LiDAR generators are developed under single-domain settings, requiring separate models for different datasets or sensing conditions and hindering unified, controllable synthesis under heterogeneous distribution shifts. To this end, we present OmniLiDAR, a unified text-conditioned diffusion framework that generates LiDAR scans in a shared range-image representation across eight representative domains spanning three shift types: adverse weather, sensor-configuration changes (e.g., reduced beams), and cross-platform acquisition (vehicle, drone, and quadruped). To enable training a single model over heterogeneous domains without isolating optimization by domain, we introduce a Cross-Domain Training Strategy (CDTS) that mixes domains within each mini-batch and leverages conditioning to steer generation. We further propose Cross-Domain Feature Modeling (CDFM), which captures directional dependencies along azimuth and elevation axes to reflect the anisotropic scanning structure of range images, and Domain-Adaptive Feature Scaling (DAFS) as a lightweight modulation to account for structured domain-dependent feature shifts during denoising. In the absence of a public consolidated benchmark, we construct an 8-domain dataset by combining real-world scans with physically based weather simulation and systematic beam reduction while following official splits. Extensive experiments demonstrate strong generation fidelity and consistent gains in downstream use cases, including generative data augmentation for LiDAR semantic segmentation and 3D object detection, as well as robustness evaluation under corruptions, with consistent benefits in limited-label regimes.
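
The abstract's description of CDFM suggests axis-wise processing of the range image, since azimuth (horizontal, wrapping around 360°) and elevation (vertical, fixed beam layout) behave very differently. The block below is a speculative illustration of that idea using factorized depthwise convolutions with circular padding along azimuth; the actual CDFM design in the paper may differ.

    import torch
    import torch.nn as nn

    class AxisFactorizedBlock(nn.Module):
        """Illustrative anisotropic block: separate 1D mixing along azimuth and elevation."""
        def __init__(self, channels: int, kernel_size: int = 7):
            super().__init__()
            pad = kernel_size // 2
            # Azimuth (width) branch: circular padding, column 0 neighbours the last column.
            self.azimuth = nn.Conv2d(channels, channels, (1, kernel_size),
                                     padding=(0, pad), padding_mode="circular",
                                     groups=channels)
            # Elevation (height) branch: zero padding, beams do not wrap top-to-bottom.
            self.elevation = nn.Conv2d(channels, channels, (kernel_size, 1),
                                       padding=(pad, 0), groups=channels)
            self.norm = nn.GroupNorm(8, channels)
            self.mix = nn.Conv2d(channels, channels, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (B, C, elevation, azimuth) features of a LiDAR range image
            y = self.azimuth(x) + self.elevation(x)
            return x + self.mix(self.norm(y))

    feats = torch.randn(2, 32, 64, 1024)          # e.g. 64 beams x 1024 azimuth bins
    print(AxisFactorizedBlock(32)(feats).shape)   # torch.Size([2, 32, 64, 1024])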
