ArXiv TLDR

SceneCritic: A Symbolic Evaluator for 3D Indoor Scene Synthesis

2604.13035

Kathakoli Sengupta, Kai Ao, Paola Cascante-Bonilla

cs.CV cs.CL

TLDR

SceneCritic is a new symbolic evaluator for 3D indoor scene synthesis, offering more reliable and detailed assessments than current LLM/VLM judges.

Key contributions

  • Introduces SceneCritic, a symbolic evaluator for 3D indoor scene layouts, overcoming VLM/LLM judge issues.
  • Grounds SceneCritic in SceneOnto, a structured spatial ontology from aggregated indoor scene priors.
  • Provides object-level and relationship-level assessments, identifying specific spatial violations.
  • Shows SceneCritic aligns better with human judgments than VLM evaluators, and that text-only LLMs can outperform VLMs on semantic layout quality.
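To make the idea of object-level and relationship-level assessment concrete, here is a minimal sketch of how a symbolic layout checker in this spirit might work. All names here (`Obj`, `ONTOLOGY`, `check_layout`, the specific rules and thresholds) are illustrative assumptions, not the paper's actual API or constraint set:

```python
# Hedged sketch: a toy symbolic evaluator that checks layout relationships
# against ontology-derived priors. Rules and thresholds are invented for
# illustration; the paper's SceneOnto constraints are far richer.
from dataclasses import dataclass

@dataclass
class Obj:
    name: str
    pos: tuple    # (x, y) floor-plan position in meters
    size: tuple   # (w, d) footprint in meters

# Toy spatial priors: expected relation and a maximum allowed distance.
ONTOLOGY = {
    ("nightstand", "bed"): {"relation": "beside", "max_dist": 1.0},
    ("tv", "sofa"): {"relation": "faces", "max_dist": 4.0},
}

def dist(a, b):
    return ((a.pos[0] - b.pos[0]) ** 2 + (a.pos[1] - b.pos[1]) ** 2) ** 0.5

def check_layout(objs):
    """Return per-relationship verdicts: 'ok' or a named violation."""
    by_name = {o.name: o for o in objs}
    report = []
    for (src, dst), rule in ONTOLOGY.items():
        if src in by_name and dst in by_name:
            d = dist(by_name[src], by_name[dst])
            verdict = "ok" if d <= rule["max_dist"] else f"too_far ({d:.1f}m)"
            report.append((src, rule["relation"], dst, verdict))
    return report

scene = [Obj("bed", (0, 0), (2.0, 1.6)),
         Obj("nightstand", (1.0, 0), (0.5, 0.5)),
         Obj("sofa", (5, 5), (2.0, 0.9)),
         Obj("tv", (5, 0.5), (1.2, 0.1))]
for line in check_layout(scene):
    print(line)
```

The point of the symbolic approach is visible even in this toy: each verdict names a specific relationship and a specific violation, rather than producing a single opaque score the way an LLM/VLM judge would.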

Why it matters

Current LLM/VLM-based scene evaluators are unstable: their scores shift with viewpoint, prompt phrasing, and hallucination. SceneCritic provides a robust symbolic alternative that offers objective, detailed feedback, improving the reliability of both 3D indoor scene synthesis evaluation and model development.

Original Abstract

Large Language Models (LLMs) and Vision-Language Models (VLMs) increasingly generate indoor scenes through intermediate structures such as layouts and scene graphs, yet evaluation still relies on LLM or VLM judges that score rendered views, making judgments sensitive to viewpoint, prompt phrasing, and hallucination. When the evaluator is unstable, it becomes difficult to determine whether a model has produced a spatially plausible scene or whether the output score reflects the choice of viewpoint, rendering, or prompt. We introduce SceneCritic, a symbolic evaluator for floor-plan-level layouts. SceneCritic's constraints are grounded in SceneOnto, a structured spatial ontology we construct by aggregating indoor scene priors from 3D-FRONT, ScanNet, and Visual Genome. SceneCritic traverses this ontology to jointly verify semantic, orientation, and geometric coherence across object relationships, providing object-level and relationship-level assessments that identify specific violations and successful placements. Furthermore, we pair SceneCritic with an iterative refinement test bed that probes how models build and revise spatial structure under different critic modalities: a rule-based critic using collision constraints as feedback, an LLM critic operating on the layout as text, and a VLM critic operating on rendered observations. Through extensive experiments, we show that (a) SceneCritic aligns substantially better with human judgments than VLM-based evaluators, (b) text-only LLMs can outperform VLMs on semantic layout quality, and (c) image-based VLM refinement is the most effective critic modality for semantic and orientation correction.
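The iterative refinement test bed the abstract describes can be sketched as a generic propose-critique-revise loop with a pluggable critic. The functions below (`refine`, the toy `critic`/`revise` pair on 1D intervals) are hypothetical stand-ins for the paper's actual generator and its rule-based, LLM, or VLM critics:

```python
# Hedged sketch of an iterative refinement loop: a critic returns feedback
# (e.g. violated constraints) and the layout is revised until the critic
# is satisfied or the round budget runs out.
def refine(layout, critic, revise, max_rounds=5):
    for _ in range(max_rounds):
        feedback = critic(layout)   # empty feedback means the critic is satisfied
        if not feedback:
            return layout
        layout = revise(layout, feedback)
    return layout

# Toy demo: objects as 1D intervals; the critic flags overlaps with the
# next object, mimicking a rule-based collision critic.
def critic(layout):
    return [(i, i + 1) for i in range(len(layout) - 1)
            if layout[i][1] > layout[i + 1][0]]

def revise(layout, feedback):
    layout = [list(iv) for iv in layout]
    for i, j in feedback:
        shift = layout[i][1] - layout[j][0]   # push the later object clear
        layout[j] = [layout[j][0] + shift, layout[j][1] + shift]
    return [tuple(iv) for iv in layout]

print(refine([(0, 2), (1, 3)], critic, revise))  # overlap resolved by shifting
```

Swapping `critic` for an LLM call on the textual layout, or a VLM call on a rendered view, yields the other two critic modalities the paper compares.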

📬 Weekly AI Paper Digest

Get the top 10 AI/ML arXiv papers from the week — summarized, scored, and delivered to your inbox every Monday.