ArXiv TLDR

GIST: Multimodal Knowledge Extraction and Spatial Grounding via Intelligent Semantic Topology

arXiv: 2604.15495

Shivendra Agrawal, Bradley Hayes

cs.AI · cs.CV · cs.HC · cs.RO

TLDR

GIST creates semantically annotated navigation topologies from mobile point clouds for robust spatial grounding in complex, cluttered environments.

Key contributions

  • Transforms mobile point clouds into semantically annotated 2D navigation topologies.
  • Powers an intent-driven Semantic Search engine that infers categorical alternatives.
  • Achieves a 1.04 m top-5 mean translation error in one-shot Semantic Localization.
  • Generates landmark-rich natural language routing instructions, outperforming baselines.

Why it matters

Current Vision-Language Models struggle with spatial grounding in cluttered environments. GIST provides a robust, multimodal knowledge extraction pipeline to address this challenge. This significantly improves navigation for both humans and embodied AI, supporting universal design in complex spaces.

Original Abstract

Navigating complex, densely packed environments like retail stores, warehouses, and hospitals poses a significant spatial grounding challenge for humans and embodied AI. In these spaces, dense visual features quickly become stale given the quasi-static nature of items, and long-tail semantic distributions challenge traditional computer vision. While Vision-Language Models (VLMs) help assistive systems navigate semantically-rich spaces, they still struggle with spatial grounding in cluttered environments. We present GIST (Grounded Intelligent Semantic Topology), a multimodal knowledge extraction pipeline that transforms a consumer-grade mobile point cloud into a semantically annotated navigation topology. Our architecture distills the scene into a 2D occupancy map, extracts its topological layout, and overlays a lightweight semantic layer via intelligent keyframe and semantic selection. We demonstrate the versatility of this structured spatial knowledge through critical downstream Human-AI interaction tasks: (1) an intent-driven Semantic Search engine that actively infers categorical alternatives and zones when exact matches fail; (2) a one-shot Semantic Localizer achieving a 1.04 m top-5 mean translation error; (3) a Zone Classification module that segments the walkable floor plan into high-level semantic regions; and (4) a Visually-Grounded Instruction Generator that synthesizes optimal paths into egocentric, landmark-rich natural language routing. In multi-criteria LLM evaluations, GIST outperforms sequence-based instruction generation baselines. Finally, an in-situ formative evaluation (N=5) yields an 80% navigation success rate relying solely on verbal cues, validating the system's capacity for universal design.
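The abstract describes a pipeline that distills a 3D point cloud into a 2D occupancy map and then extracts a navigation topology. As a rough, hedged illustration of that first stage (not the paper's actual implementation — the function names, grid resolution, and height band below are assumptions), one could project points within a walkable height band onto a grid and connect the free cells into a graph:

```python
import numpy as np

def occupancy_from_points(points, cell=0.25, z_min=0.1, z_max=1.8):
    """Project 3D points within a height band onto a 2D occupancy grid.

    points: (N, 3) array of x, y, z coordinates in meters.
    Returns a boolean grid (True = occupied) and its world-frame origin.
    Note: cell size and height band are illustrative assumptions,
    not values from the paper.
    """
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    origin = pts[:, :2].min(axis=0)
    ij = np.floor((pts[:, :2] - origin) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=bool)
    grid[ij[:, 0], ij[:, 1]] = True
    return grid, origin

def topology_from_grid(grid):
    """Build a crude 4-connected adjacency over free cells.

    A real navigation topology would be sparser (e.g. skeletonized
    corridors); this dense graph just sketches the idea.
    """
    nodes = {tuple(c) for c in np.argwhere(~grid)}
    return {n: [m for m in ((n[0] + 1, n[1]), (n[0], n[1] + 1))
                if m in nodes]
            for n in nodes}
```

In GIST a lightweight semantic layer is then overlaid on this topology via keyframe and semantic selection; in the sketch above that would amount to tagging graph nodes with labels from detections projected into the same grid frame.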
