ArXiv TLDR

Pair2Scene: Learning Local Object Relations for Procedural Scene Generation

2604.11808

Xingjian Ran, Shujie Zhang, Weipeng Zhong, Li Luo, Bo Dai

cs.CV

TLDR

Pair2Scene generates realistic 3D indoor scenes by learning local object relations and integrating them with scene hierarchies and physics-based algorithms.

Key contributions

  • Proposes Pair2Scene, a novel framework that learns local support and functional relations between 3D objects.
  • Generates scenes recursively within a hierarchical structure, using collision-aware rejection sampling to resolve conflicts.
  • Curates 3D-Pairs dataset to train the model on inter-object spatial position distributions.
  • Outperforms existing methods in generating complex, physically and semantically plausible 3D scenes.

Why it matters

This paper addresses the challenge of generating high-fidelity 3D indoor scenes by focusing on local object dependencies rather than redundant global distributions. The resulting framework scales beyond its training data, producing dense, complex environments that remain physically and semantically plausible, which marks a meaningful step forward for procedural scene generation.

Original Abstract

Generating high-fidelity 3D indoor scenes remains a significant challenge due to data scarcity and the complexity of modeling intricate spatial relations. Current methods often struggle to scale beyond training distribution to dense scenes or rely on LLMs/VLMs that lack the ability for precise spatial reasoning. Building on top of the observation that object placement relies mainly on local dependencies instead of information-redundant global distributions, in this paper, we propose Pair2Scene, a novel procedural generation framework that integrates learned local rules with scene hierarchies and physics-based algorithms. These rules mainly capture two types of inter-object relations, namely support relations that follow physical hierarchies, and functional relations that reflect semantic links. We model these rules through a network, which estimates spatial position distributions of dependent objects conditioned on position and geometry of the anchor ones. Accordingly, we curate a dataset 3D-Pairs from existing scene data to train the model. During inference, our framework can generate scenes by recursively applying our model within a hierarchical structure, leveraging collision-aware rejection sampling to align local rules into coherent global layouts. Extensive experiments demonstrate that our framework outperforms existing methods in generating complex environments that go beyond training data while maintaining physical and semantic plausibility.
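The abstract's inference loop — sample a dependent object's position from a learned distribution conditioned on its anchor, then reject proposals that collide with already-placed objects — can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the sampler, bounding-box representation, and function names are all hypothetical stand-ins for the learned local rule and scene state.

```python
import random

def aabbs_overlap(a, b):
    """Axis-aligned bounding boxes given as (min_xyz, max_xyz) tuples."""
    amin, amax = a
    bmin, bmax = b
    return all(amin[i] < bmax[i] and bmin[i] < amax[i] for i in range(3))

def place_object(sample_position, half_extents, placed_boxes, max_tries=100):
    """Collision-aware rejection sampling (illustrative sketch):
    draw candidate centers from the learned conditional distribution
    until one is collision-free against previously placed boxes."""
    for _ in range(max_tries):
        center = sample_position()  # proposal from the local placement rule
        box = (tuple(c - h for c, h in zip(center, half_extents)),
               tuple(c + h for c, h in zip(center, half_extents)))
        if not any(aabbs_overlap(box, other) for other in placed_boxes):
            placed_boxes.append(box)
            return box
    return None  # no valid placement found within the budget

# Toy usage: stand in a uniform sampler for the learned distribution
# and place two unit cubes on a 5x5 floor plane without overlap.
random.seed(0)
placed = []
sampler = lambda: (random.uniform(0, 5), random.uniform(0, 5), 0.5)
first = place_object(sampler, (0.5, 0.5, 0.5), placed)
second = place_object(sampler, (0.5, 0.5, 0.5), placed)
```

In the full framework this loop would run recursively down the scene hierarchy, with each placed anchor spawning placements for its dependent objects.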
