ArXiv TLDR

GlobalSplat: Efficient Feed-Forward 3D Gaussian Splatting via Global Scene Tokens

2604.15284

Roni Itkin, Noam Issachar, Yehonatan Keypur, Anpei Chen + 1 more

cs.CV

TLDR

GlobalSplat introduces an efficient feed-forward 3D Gaussian Splatting method using global scene tokens for compact, consistent, and fast reconstructions.

Key contributions

  • Learns a compact, global latent scene representation from multi-view input.
  • Resolves cross-view correspondences before decoding explicit 3D geometry.
  • Prevents representation bloat using a coarse-to-fine training curriculum.
  • Achieves competitive novel-view synthesis with 16K Gaussians and a 4MB footprint.
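The "align first, decode later" idea behind these contributions can be sketched as a single cross-attention step: a small set of learnable global tokens attends over features from all input views jointly (resolving cross-view correspondence in latent space), and only then is each token decoded into explicit Gaussian parameters. The sketch below is a minimal NumPy illustration under assumed toy sizes (2 views, 8 tokens, 32-dim features); it is not the paper's actual architecture, which uses a full transformer and far larger budgets.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)

# Hypothetical toy sizes: 2 input views, 64 patch features per view,
# feature dim d=32, and K=8 global scene tokens.
n_views, n_patches, d, k = 2, 64, 32, 8

view_feats = rng.standard_normal((n_views * n_patches, d))  # flattened multi-view features
tokens = rng.standard_normal((k, d))                        # learnable global scene tokens

# "Align first": tokens cross-attend over ALL views at once, so cross-view
# correspondence is resolved in the latent space, not per pixel.
attn = softmax(tokens @ view_feats.T / np.sqrt(d))  # (k, n_views * n_patches)
tokens = attn @ view_feats                          # (k, d) updated tokens

# "Decode later": each token decodes to one Gaussian's explicit parameters
# (here 3 position + 3 scale + 4 rotation + 1 opacity + 3 color = 14 dims).
W_dec = rng.standard_normal((d, 14)) * 0.1
gaussians = tokens @ W_dec
print(gaussians.shape)  # (8, 14)
```

Note how the number of Gaussians is tied to the token count K rather than to pixel count, which is why the representation does not grow as more input views are added.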

Why it matters

Current 3D Gaussian Splatting methods suffer from trade-offs between compactness, speed, and fidelity due to local allocation strategies. GlobalSplat addresses this by learning a global scene representation, enabling highly efficient and consistent 3D reconstructions. This leads to significantly smaller models and faster rendering, making 3D scene representation more practical.

Original Abstract

The efficient spatial allocation of primitives serves as the foundation of 3D Gaussian Splatting, as it directly dictates the synergy between representation compactness, reconstruction speed, and rendering fidelity. Previous solutions, whether based on iterative optimization or feed-forward inference, suffer from significant trade-offs between these goals, mainly due to the reliance on local, heuristic-driven allocation strategies that lack global scene awareness. Specifically, current feed-forward methods are largely pixel-aligned or voxel-aligned. By unprojecting pixels into dense, view-aligned primitives, they bake redundancy into the 3D asset. As more input views are added, the representation size increases and global consistency becomes fragile. To this end, we introduce GlobalSplat, a framework built on the principle of align first, decode later. Our approach learns a compact, global, latent scene representation that encodes multi-view input and resolves cross-view correspondences before decoding any explicit 3D geometry. Crucially, this formulation enables compact, globally consistent reconstructions without relying on pretrained pixel-prediction backbones or reusing latent features from dense baselines. Utilizing a coarse-to-fine training curriculum that gradually increases decoded capacity, GlobalSplat natively prevents representation bloat. On RealEstate10K and ACID, our model achieves competitive novel-view synthesis performance while utilizing as few as 16K Gaussians, significantly less than required by dense pipelines, obtaining a light 4MB footprint. Further, GlobalSplat enables significantly faster inference than the baselines, operating under 78 milliseconds in a single forward pass. Project page is available at https://r-itk.github.io/globalsplat/
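The abstract's 16K-Gaussian / 4 MB figures can be sanity-checked with back-of-the-envelope arithmetic, assuming the standard 3DGS per-Gaussian parameterization with degree-3 spherical harmonics stored as float32 (an assumption; the paper may store fewer SH coefficients or use a different precision):

```python
# Footprint check under an assumed standard 3DGS parameterization:
# 3 position + 3 scale + 4 rotation + 1 opacity + 48 SH (degree-3 RGB) floats.
floats_per_gaussian = 3 + 3 + 4 + 1 + 3 * 16
bytes_per_gaussian = floats_per_gaussian * 4   # float32
total_mb = 16_000 * bytes_per_gaussian / 1e6
print(floats_per_gaussian, round(total_mb, 2))  # 59 3.78
```

Roughly 3.8 MB for 16K Gaussians, consistent with the reported ~4 MB footprint; dense pixel-aligned pipelines emitting one Gaussian per pixel per view would be orders of magnitude larger.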

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.