3D Gaussian Splatting for Efficient Retrospective Dynamic Scene Novel View Synthesis with a Standardized Benchmark
TLDR
This paper shows that efficient retrospective novel view synthesis of dynamic scenes is achievable with 3D Gaussian Splatting alone in synchronized multi-view settings, without temporal deformation constraints, and contributes a standardized benchmark for evaluating such methods.
Key contributions
- Proposes an efficient 3DGS method for dynamic scene NVS in synchronized multi-view setups without temporal deformation.
- Achieves NVS by initializing Gaussians from an SfM point cloud at the start time and propagating the optimized Gaussians forward across time steps.
- Introduces a Blender-based Dynamic MV dataset framework for reproducible NeRF and 3DGS research.
- Constructs a dynamic benchmark suite to evaluate NVS methods under controlled conditions.
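The propagation scheme in the contributions above can be sketched as a toy, self-contained loop. All names here (`fit_timestep`, `retrospective_nvs`, the scalar "means", and the toy optimizer that pulls each mean toward a per-step target) are illustrative assumptions standing in for the paper's photometric 3DGS optimization, not the authors' code.

```python
def fit_timestep(means, targets, lr=0.5, iters=50):
    """Toy per-timestep optimization: pull each Gaussian mean toward its
    target observed at this time step (a stand-in for photometric
    optimization against the synchronized multi-view images)."""
    means = list(means)
    for _ in range(iters):
        means = [m + lr * (t - m) for m, t in zip(means, targets)]
    return means

def retrospective_nvs(sfm_points, targets_by_time):
    """Fit one set of Gaussians per time step, warm-starting each step
    from the previous step's optimized state; no temporal deformation
    term couples adjacent steps."""
    means = list(sfm_points)           # t = 0: initialize from SfM points
    snapshots = []
    for targets in targets_by_time:    # synchronized MV frames at time t
        means = fit_timestep(means, targets)
        snapshots.append(list(means))  # stored state enables retrospective NVS
    return snapshots

# Example: three 1-D "Gaussians" tracked across two time steps.
snaps = retrospective_nvs([0.0, 1.0, 2.0],
                          [[1.0, 1.0, 1.0], [2.0, 0.0, 2.0]])
```

The key design point the paper argues for is visible even in this sketch: each time step is fit independently against its own synchronized views, and temporal coherence comes only from the warm start.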
Why it matters
This paper demonstrates that efficient dynamic scene novel view synthesis is possible with 3DGS in synchronized multi-view setups, simplifying over prior approaches that rely on explicit temporal coupling. It also provides a standardized benchmark and dataset framework, enabling reproducible research and fair comparison of dynamic NVS methods.
Original Abstract
Retrospective novel view synthesis (NVS) of dynamic scenes is fundamental to applications such as sports. Recent dynamic 3D Gaussian Splatting (3DGS) approaches introduce temporally coupled formulations to enforce motion coherence across time. In this paper, we argue that, in a synchronized multi-view (MV) setting typical of sports, the dynamic scene at each time step is already strongly geometrically constrained. We posit that the availability of calibrated, synchronized viewpoints provides sufficient spatial consistency, and therefore explicit temporal coupling or complex multi-body constraints seem unnecessary for retrospective NVS. To this end, we propose an approach tailored for synchronized MV dynamic scenes. By initializing the SfM-derived point cloud at the start time and propagating optimized Gaussians over time, we show that efficient retrospective NVS can be achieved without imposing a temporal deformation constraint. Complementing our methodological contribution, we introduce a Dynamic MV dataset framework built on Blender for reproducible NeRF and 3DGS research. The framework generates high-quality, synchronized camera rigs and exports training-ready datasets in standard formats, eliminating inconsistencies in coordinate conventions and data pipelines. Using the framework, we construct a dynamic benchmark suite and evaluate representative NeRF and 3DGS approaches under controlled conditions. Together, we show that, under a synchronized MV setup, efficient retrospective dynamic scene NVS can be achieved using 3DGS. At the same time, the dataset-generation framework enables reproducible and principled benchmarking of dynamic NVS methods.
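The abstract's "training-ready datasets in standard formats" plausibly means pose files like the NeRF-style `transforms.json` convention widely used by NeRF/3DGS pipelines. The sketch below shows one way such an exporter could look; the function name `export_transforms` and its arguments are hypothetical, though the `camera_angle_x` / `frames` / `transform_matrix` field names follow that common convention.

```python
import json

def export_transforms(cameras, camera_angle_x, path="transforms.json"):
    """Write a NeRF-style pose file for a synchronized camera rig.

    cameras: list of (image_path, 4x4 camera-to-world matrix as nested
    lists). Keeping every rig in one convention is what removes the
    coordinate-system inconsistencies the abstract mentions."""
    data = {
        "camera_angle_x": camera_angle_x,   # horizontal field of view (radians)
        "frames": [
            {"file_path": name, "transform_matrix": c2w}
            for name, c2w in cameras
        ],
    }
    with open(path, "w") as f:
        json.dump(data, f, indent=2)
    return data
```

A dataset generated this way can be consumed directly by trainers that read the same convention, which is what makes the exported scenes "training-ready".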