ArXiv TLDR

InHabit: Leveraging Image Foundation Models for Scalable 3D Human Placement

arXiv:2604.19673

Nikita Kister, Pradyumna YM, István Sárándi, Jiayi Wang, Anna Khoreva, et al.

cs.CV

TLDR

InHabit leverages 2D image foundation models to automatically generate large-scale, photorealistic 3D human-scene interaction data, improving embodied AI training.

Key contributions

  • Transfers commonsense knowledge of human-environment interactions, implicitly acquired by 2D foundation models from internet-scale data, into automatic generation of photorealistic 3D interaction data.
  • Proposes a "render-generate-lift" pipeline for contextually placing humans into 3D scenes (sketched in code after this list).
  • Creates InHabit-Matterport3D, the first large-scale photorealistic 3D human-scene interaction dataset (78K samples across 800 building-scale scenes).
  • Improves RGB-based 3D human-scene reconstruction and contact estimation when added to standard training data; in a perceptual user study, the generated data is preferred over the state of the art in 78% of cases.
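
The render-generate-lift loop amounts to a thin orchestration layer over three off-the-shelf models. The sketch below is an illustrative reading of the abstract, not the authors' released code: `scene.render`, `vlm.propose_action`, `editor.insert_human`, and `lift_to_smplx` are hypothetical stand-ins for the renderer, vision-language model, image-editing model, and SMPL-X optimization stage.

```python
# Illustrative sketch of the render-generate-lift principle (all helpers hypothetical).

def inhabit_sample(scene, camera, vlm, editor):
    # Render: rasterize the 3D scene from a sampled camera pose.
    rgb = scene.render(camera)

    # Generate: a vision-language model proposes a contextually meaningful
    # action for this view, and an image-editing model inserts a human
    # performing it into the rendered image.
    action = vlm.propose_action(rgb)  # e.g. "a person sitting on the sofa"
    edited_rgb = editor.insert_human(rgb, prompt=action)

    # Lift: optimize an SMPL-X body so it matches the edited image and is
    # physically plausible against the scene geometry (sketched further below).
    body_params = lift_to_smplx(edited_rgb, scene.mesh, camera)
    return edited_rgb, body_params, action
```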

Why it matters

Training embodied agents requires extensive 3D human-scene interaction data, which is currently scarce. InHabit provides a scalable solution by leveraging 2D foundation models to automatically generate photorealistic, contextually rich 3D data. This is vital for advancing AI that understands and interacts with environments as humans do.

Original Abstract

Training embodied agents to understand 3D scenes as humans do requires large-scale data of people meaningfully interacting with diverse environments, yet such data is scarce. Real-world motion capture is costly and limited to controlled settings, while existing synthetic datasets rely on simple geometric heuristics that ignore rich scene context. In contrast, 2D foundation models trained on internet-scale data have implicitly acquired commonsense knowledge of human-environment interactions. To transfer this knowledge into 3D, we introduce InHabit, a fully automatic and scalable data generator for populating 3D scenes with interacting humans. InHabit follows a render-generate-lift principle: given a rendered 3D scene, a vision-language model proposes contextually meaningful actions, an image-editing model inserts a human, and an optimization procedure lifts the edited result into physically plausible SMPL-X bodies aligned with the scene geometry. Applied to Habitat-Matterport3D, InHabit produces the first large-scale photorealistic 3D human-scene interaction dataset, containing 78K samples across 800 building-scale scenes with complete 3D geometry, SMPL-X bodies, and RGB images. Augmenting standard training data with our samples improves RGB-based 3D human-scene reconstruction and contact estimation, and in a perceptual user study our data is preferred in 78% of cases over the state of the art.
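
The "lift" stage is where the 2D edit becomes 3D supervision: SMPL-X body parameters are optimized so the body both matches the inserted human in the image and stays physically plausible against the known scene mesh. The abstract does not spell out the objective, so the following is a minimal sketch of what such an optimization could look like, assuming a 2D keypoint reprojection term plus a scene-penetration penalty; `detect_keypoints_2d`, `smplx_forward`, `project`, and `penetration_depth` are hypothetical helpers, not the paper's components.

```python
import torch

def lift_to_smplx(edited_rgb, scene_mesh, camera, steps=300, lr=0.02):
    """Fit SMPL-X pose and translation to the edited image (illustrative only)."""
    target_2d = detect_keypoints_2d(edited_rgb)      # (J, 2) detected pixel targets
    pose = torch.zeros(63, requires_grad=True)       # SMPL-X body pose (21 joints x 3)
    transl = torch.zeros(3, requires_grad=True)      # root translation in scene coords
    opt = torch.optim.Adam([pose, transl], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        verts, joints_3d = smplx_forward(pose, transl)  # body-model forward pass
        joints_2d = project(joints_3d, camera)          # perspective projection to pixels
        # Data term: agree with the human the image editor inserted.
        loss = ((joints_2d - target_2d) ** 2).mean()
        # Plausibility term: penalize body vertices that sink into scene geometry
        # (penetration_depth returns per-vertex depth inside the mesh, 0 outside).
        loss = loss + 10.0 * penetration_depth(verts, scene_mesh).mean()
        loss.backward()
        opt.step()

    return pose.detach(), transl.detach()
```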
