EmbodiedLGR: Integrating Lightweight Graph Representation and Retrieval for Semantic-Spatial Memory in Robotic Agents
Paolo Riva, Leonardo Gargani, Matteo Frosi, Matteo Matteucci
TLDR
EmbodiedLGR-Agent is a VLM-driven agent that combines a lightweight semantic graph with retrieval-augmented generation for efficient semantic-spatial memory in robots, enabling fast human-robot interaction (HRI).
Key contributions
- Introduces EmbodiedLGR-Agent, a VLM-driven architecture for efficient semantic-spatial memory in robots.
- Uses a hybrid approach: a semantic graph for low-level object data and retrieval-augmented generation (RAG) for high-level scene descriptions.
- Achieves state-of-the-art inference and querying times on NaVQA, with competitive accuracy.
- Demonstrates practical utility on a physical robot, with the VLM and building-retrieval pipeline running fully locally for real-world HRI.
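The hybrid memory in the second bullet can be sketched as a small data structure: a graph whose nodes are observed objects with positions and whose edges are spatial relations, alongside a list of free-text scene descriptions kept for retrieval. This is a minimal illustration, not the paper's implementation; all class and field names (`ObjectNode`, `SceneMemory`, `on_top_of`, etc.) are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    """Low-level entry: an observed object and its position (illustrative schema)."""
    label: str
    position: tuple  # (x, y, z) in the robot's map frame

@dataclass
class SceneMemory:
    """Hybrid memory sketch: semantic graph of objects plus scene descriptions for RAG."""
    objects: dict = field(default_factory=dict)       # node_id -> ObjectNode
    edges: list = field(default_factory=list)         # (node_id_a, relation, node_id_b)
    descriptions: list = field(default_factory=list)  # high-level text kept for retrieval

    def add_object(self, node_id, label, position):
        self.objects[node_id] = ObjectNode(label, position)

    def relate(self, a, relation, b):
        self.edges.append((a, relation, b))

    def add_description(self, text):
        self.descriptions.append(text)

# Hypothetical observations a VLM might emit while the robot explores.
memory = SceneMemory()
memory.add_object("cup_1", "cup", (1.2, 0.4, 0.9))
memory.add_object("table_1", "table", (1.0, 0.5, 0.0))
memory.relate("cup_1", "on_top_of", "table_1")
memory.add_description("A kitchen with a cup on a wooden table near the window.")
```

Keeping object positions in the graph (cheap exact lookups) while reserving retrieval for the longer descriptions is what lets a design like this answer "where is X" queries without searching free text.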
Why it matters
This paper addresses the critical need for efficient memory in robotic agents, enabling faster and more precise human-robot interactions. By combining graph representations with retrieval, EmbodiedLGR offers a practical solution for real-world deployment, improving robot responsiveness and utility.
Original Abstract
As the world of agentic artificial intelligence applied to robotics evolves, the need for agents capable of building and retrieving memories and observations efficiently is increasing. Robots operating in complex environments must build memory structures to enable useful human-robot interactions by leveraging the mnemonic representation of the current operating context. People interacting with robots may expect the embodied agent to provide information about locations, events, or objects, which requires the agent to provide precise answers within human-like inference times to be perceived as responsive. We propose the Embodied Light Graph Retrieval Agent (EmbodiedLGR-Agent), a visual-language model (VLM)-driven agent architecture that constructs dense and efficient representations of robot operating environments. EmbodiedLGR-Agent directly addresses the need for an efficient memory representation of the environment by providing a hybrid building-retrieval approach built on parameter-efficient VLMs that store low-level information about objects and their positions in a semantic graph, while retaining high-level descriptions of the observed scenes with a traditional retrieval-augmented architecture. EmbodiedLGR-Agent is evaluated on the popular NaVQA dataset, achieving state-of-the-art performance in inference and querying times for embodied agents, while retaining competitive accuracy on the global task relative to the current state-of-the-art approaches. Moreover, EmbodiedLGR-Agent was successfully deployed on a physical robot, showing practical utility in real-world contexts through human-robot interaction, while running the visual-language model and the building-retrieval pipeline locally.
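The retrieval half of the pipeline described in the abstract can be illustrated with a toy similarity search over stored scene descriptions. The paper uses VLM-based components; the bag-of-words cosine similarity below is a stand-in, and `retrieve` and its parameters are assumed names, not the authors' API.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, descriptions: list, k: int = 1) -> list:
    """Return the k stored scene descriptions most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(
        descriptions,
        key=lambda d: cosine(q, Counter(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

# Hypothetical high-level descriptions accumulated during exploration.
scenes = [
    "A kitchen with a cup on a wooden table near the window.",
    "A hallway with a charging dock and a fire extinguisher.",
]
best = retrieve("where is the cup", scenes)
print(best[0])  # the kitchen description scores highest for this query
```

In a real deployment the retrieved description would be passed to the VLM as context, while precise positions come from the semantic graph rather than from text.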