Beyond Gaussian Bottlenecks: Topologically Aligned Encoding of Vision-Transformer Feature Spaces
Andrew Bond, Ilkin Umut Melanlioglu, Erkut Erdem, Aykut Erdem
TLDR
S$^2$VAE is a geometry-first VAE framework with hyperspherical latents that outperforms Gaussian bottlenecks at preserving 3D geometry and camera dynamics.
Key contributions
- Introduces S$^2$VAE, a geometry-first latent learning framework for 3D scene state.
- Employs a novel VAE with a product of Power Spherical latent distributions.
- Enforces hyperspherical structure to preserve directional and geometric semantics.
- Outperforms Gaussian bottlenecks in depth, camera pose, and point cloud tasks.
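The Power Spherical distribution named in the contributions has a closed-form, reparameterizable sampler (De Cao & Aziz, 2020): draw a scalar from a shifted Beta, a uniform tangent direction, and rotate so the mode aligns with the mean direction. A minimal NumPy sketch of that sampler, with dimension and concentration chosen purely for illustration (not the paper's actual latent configuration):

```python
import numpy as np

def sample_power_spherical(mu, kappa, rng):
    """Draw one sample from a Power Spherical distribution on S^{d-1}.

    Follows the construction of De Cao & Aziz (2020):
    t ~ 2*Beta((d-1)/2 + kappa, (d-1)/2) - 1, a uniform tangent
    direction v, then a Householder reflection mapping the north
    pole e1 onto the mean direction `mu` (assumed unit-norm).
    """
    d = mu.shape[0]
    t = 2.0 * rng.beta((d - 1) / 2.0 + kappa, (d - 1) / 2.0) - 1.0
    # Uniform direction on the (d-2)-sphere embedded in R^{d-1}.
    v = rng.normal(size=d - 1)
    v /= np.linalg.norm(v)
    # Sample with its mode at the north pole e1; unit norm by construction.
    y = np.concatenate(([t], np.sqrt(max(1.0 - t * t, 0.0)) * v))
    # Householder reflection sending e1 to mu (skipped when mu == e1).
    e1 = np.zeros(d)
    e1[0] = 1.0
    u = e1 - mu
    norm_u = np.linalg.norm(u)
    if norm_u > 1e-12:
        u /= norm_u
        y = y - 2.0 * np.dot(u, y) * u
    return y
```

Because the Beta draw is reparameterizable, gradients can flow through the concentration parameter, which is what makes this family usable inside a VAE bottleneck.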
Why it matters
Current visual world models often fail to preserve 3D geometry and physical consistency. This paper demonstrates that explicitly encoding geometry using hyperspherical latents significantly improves performance in tasks like depth estimation and camera pose recovery. It highlights the importance of latent geometry as a fundamental design choice for building more physically grounded visual models.
Original Abstract
Modern visual world modeling systems increasingly rely on high-capacity architectures and large-scale data to produce plausible motion, yet they often fail to preserve underlying 3D geometry or physically consistent camera dynamics. A key limitation lies not only in model capacity, but in the latent representations used to encode geometric structure. We propose S$^2$VAE, a geometry-first latent learning framework that focuses on compressing and representing the latent 3D state of a scene, including camera motion, depth, and point-level structure, rather than modeling appearance alone. Building on representations from a Visual Geometry Grounded Transformer (VGGT), we introduce a novel type of variational autoencoder using a product of Power Spherical latent distributions, explicitly enforcing hyperspherical structure in the bottleneck to preserve directional and geometric semantics under strong compression. Across depth estimation, camera pose recovery, and point cloud reconstruction, we show that geometry-aligned hyperspherical latents consistently outperform conventional Gaussian bottlenecks, particularly in high-compression regimes. Our results highlight latent geometry as a first-class design choice for physically grounded visual and world models.
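The abstract's "product of Power Spherical latent distributions" implies a bottleneck that lives on a product of hyperspheres rather than in flat Euclidean space. A minimal sketch of the deterministic part of such a bottleneck, where the chunking scheme and dimensions are assumptions for illustration, not the paper's actual layout:

```python
import numpy as np

def spherical_bottleneck(z, num_spheres):
    """Project a flat latent vector onto a product of unit hyperspheres.

    Splits `z` into `num_spheres` equal-sized chunks and L2-normalizes
    each, so the code lies on S^{d-1} x ... x S^{d-1} instead of R^n.
    In a full Power Spherical VAE, each normalized chunk would serve as
    the mean direction of one latent distribution.
    """
    chunks = np.split(np.asarray(z, dtype=float), num_spheres)
    return np.concatenate([c / np.linalg.norm(c) for c in chunks])
```

Constraining each sub-vector to a sphere preserves directional information under compression, which is the geometric property the paper argues Gaussian bottlenecks fail to keep.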