ArXiv TLDR

Color-Encoded Illumination for High-Speed Volumetric Scene Reconstruction

2604.26920

David Novikov, Eilon Vaknin, Narek Tumanyan, Mark Sheinin

cs.CV

TLDR

This paper introduces a novel method for high-speed volumetric scene reconstruction using unaugmented low-speed cameras and color-encoded illumination.

Key contributions

  • Reconstructs high-speed 3D scenes using only standard, low-speed cameras.
  • Encodes high-speed temporal dynamics via rapid, sequential color-coded illumination.
  • Introduces a novel dynamic Gaussian Splatting approach that decodes the temporal information for volumetric reconstruction.
  • Achieves first-of-a-kind multi-view high-speed volumetric scene reconstructions.

Why it matters

This work addresses the challenge of capturing and reconstructing rapid 3D scene motion without requiring specialized high-speed cameras or optical modifications. By encoding temporal information directly into color, it opens new possibilities for volumetric capture of dynamic events. This significantly broadens the accessibility and applicability of high-speed 3D reconstruction.

Original Abstract

The task of capturing and rendering 3D dynamic scenes from 2D images has become increasingly popular in recent years. However, most conventional cameras are bandwidth-limited to 30-60 FPS, restricting these methods to static or slowly evolving scenes. While overcoming bandwidth limitations is difficult for general scenes, recent years have seen a flurry of computational imaging methods that yield high-speed videos using conventional cameras for specific applications (e.g., motion capture and particle image velocimetry). However, most of these methods require modifications to a camera's optics or the addition of mechanically moving components, limiting them to a single-view high-speed capture. Consequently, these methods cannot be readily used to capture a 3D representation of rapid scene motion. In this paper, we propose a novel method to capture and reconstruct a volumetric representation of a high-speed scene using only unaugmented low-speed cameras. Instead of modifying the hardware or optics of each individual camera, we encode high-speed scene dynamics by illuminating the scene with a rapid, sequential color-coded sequence. This results in simultaneous multi-view capture of the scene, where high-speed temporal information is encoded in the spatial intensity and color variations of the captured images. To construct a high-speed volumetric representation of the dynamic scene, we develop a novel dynamic Gaussian Splatting-based approach that decodes the temporal information from the images. We evaluate our approach on simulated scenes and real-world experiments using a multi-camera imaging setup, showing first-of-a-kind high-speed volumetric scene reconstructions.
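The core encoding idea in the abstract, that rapid sequential colored illumination folds several high-speed sub-frames into the color channels of a single low-speed exposure, can be illustrated with a minimal toy sketch. This is not the paper's actual pipeline (which uses a dynamic Gaussian Splatting decoder and multi-view capture); it assumes idealized R/G/B strobes, a color-neutral (white) scene, and no sensor noise, and all function names are illustrative:

```python
import numpy as np

def fast_scene(t, size=8):
    """Grayscale sub-frame t: a bright dot moving one pixel per sub-frame."""
    frame = np.zeros((size, size))
    frame[size // 2, t % size] = 1.0
    return frame

def capture_one_exposure(t0, size=8):
    """Simulate one low-speed RGB exposure: within the exposure, the scene
    is lit sequentially by pure red, green, and blue strobes, so each
    color channel integrates a different high-speed sub-frame."""
    rgb = np.zeros((size, size, 3))
    for k in range(3):  # three strobes per camera exposure
        rgb[..., k] += fast_scene(t0 + k, size)
    return rgb

def decode(rgb):
    """Recover the sub-frames: for a white scene under ideal strobes,
    color channel k holds sub-frame k."""
    return [rgb[..., k] for k in range(3)]

capture = capture_one_exposure(t0=0)
subframes = decode(capture)
print(all(np.allclose(subframes[k], fast_scene(k)) for k in range(3)))  # True
```

Real scenes have their own colors, so the decoding is not a simple per-channel readout as above; disentangling scene reflectance from the illumination code is part of what the paper's learned volumetric reconstruction handles.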

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.