Visually-grounded Humanoid Agents
Hang Ye, Xiaoxuan Ma, Fan Lu, Wayne Wu, Kwan-Yee Lin, et al.
TLDR
Visually-grounded Humanoid Agents enable autonomous digital humans to perceive, reason, and act in novel 3D environments using visual observations.
Key contributions
- Introduces Visually-grounded Humanoid Agents, a coupled two-layer (world-agent) paradigm for autonomous digital humans.
- World Layer reconstructs semantically rich 3D Gaussian scenes from real-world videos.
- Agent Layer equips avatars with first-person RGB-D perception and embodied planning, executed as full-body actions (see the sketch after this list).
- A new benchmark evaluates humanoid-scene interaction in diverse reconstructed environments, where the agents achieve higher task success rates and fewer collisions than ablations and state-of-the-art planning methods.
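To make the two-layer architecture concrete, the minimal Python sketch below illustrates how the World Layer (a rendered scene) and the Agent Layer (a perceive-reason-act loop) could fit together. Every name here (GaussianScene, HumanoidAgent, render_rgbd, plan_next_action, and so on) is a hypothetical placeholder for exposition, not the paper's released API.

```python
from dataclasses import dataclass, field

# Minimal sketch of the perceive-reason-act loop described above.
# All class and method names are illustrative assumptions.


@dataclass
class Observation:
    rgb: list    # placeholder for an H x W x 3 first-person color image
    depth: list  # placeholder for an aligned H x W depth map


class GaussianScene:
    """Stand-in for the World Layer's reconstructed 3D Gaussian scene."""

    def render_rgbd(self, pose):
        # A real system would splat the Gaussians from the agent's camera
        # pose; this stub returns empty buffers.
        return Observation(rgb=[], depth=[])


@dataclass
class HumanoidAgent:
    goal: str
    pose: tuple = (0.0, 0.0, 0.0)  # (x, y, yaw)
    history: list = field(default_factory=list)

    def plan_next_action(self, obs):
        # The paper describes spatially aware, iterative reasoning; this
        # stub only illustrates the interface: (goal, obs, history) -> action.
        return "walk_forward" if len(self.history) < 3 else "stop"

    def execute(self, action):
        # A low-level controller would turn the plan into full-body motion;
        # here we just record the action and nudge the pose.
        self.history.append(action)
        x, y, yaw = self.pose
        self.pose = (x + 0.5, y, yaw)


def run_episode(scene, agent, max_steps=10):
    for _ in range(max_steps):
        obs = scene.render_rgbd(agent.pose)   # look / perceive
        action = agent.plan_next_action(obs)  # reason
        agent.execute(action)                 # behave
        if action == "stop":
            break
    return agent.history


if __name__ == "__main__":
    print(run_episode(GaussianScene(), HumanoidAgent(goal="go to the sofa")))
```

The key design point the loop captures is that the agent acts only on rendered first-person observations and a stated goal, never on privileged scene state.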
Why it matters
This work addresses the limitation of passively animated digital humans by enabling active, goal-directed behavior in novel 3D environments. It advances human-centric embodied AI, allowing for scalable population of virtual scenes with natural, autonomous agents.
Original Abstract
Digital human generation has been studied for decades and supports a wide range of real-world applications. However, most existing systems are passively animated, relying on privileged state or scripted control, which limits scalability to novel environments. We instead ask: how can digital humans actively behave using only visual observations and specified goals in novel scenes? Achieving this would enable populating any 3D environments with digital humans at scale that exhibit spontaneous, natural, goal-directed behaviors. To this end, we introduce Visually-grounded Humanoid Agents, a coupled two-layer (world-agent) paradigm that replicates humans at multiple levels: they look, perceive, reason, and behave like real people in real-world 3D scenes. The World Layer reconstructs semantically rich 3D Gaussian scenes from real-world videos via an occlusion-aware pipeline and accommodates animatable Gaussian-based human avatars. The Agent Layer transforms these avatars into autonomous humanoid agents, equipping them with first-person RGB-D perception and enabling them to perform accurate, embodied planning with spatial awareness and iterative reasoning, which is then executed at the low level as full-body actions to drive their behaviors in the scene. We further introduce a benchmark to evaluate humanoid-scene interaction in diverse reconstructed environments. Experiments show our agents achieve robust autonomous behavior, yielding higher task success rates and fewer collisions than ablations and state-of-the-art planning methods. This work enables active digital human population and advances human-centric embodied AI. Data, code, and models will be open-sourced.
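As a rough intuition for what a "semantically rich 3D Gaussian scene" could be built from, here is a hedged sketch of a single labeled Gaussian primitive. The field names and types are assumptions for illustration, not the paper's actual representation.

```python
from dataclasses import dataclass

# Illustrative primitive for a semantically labeled 3D Gaussian scene.
# All fields are assumptions for exposition.


@dataclass
class SemanticGaussian:
    mean: tuple      # (x, y, z) center of the Gaussian in world space
    scale: tuple     # per-axis extent (anisotropic covariance)
    rotation: tuple  # orientation as a quaternion (w, x, y, z)
    opacity: float   # alpha used when splatting to the image
    color: tuple     # RGB (real systems often store SH coefficients)
    label: str       # semantic class attached during reconstruction


# A semantic label on each primitive lets the agent resolve grounded
# queries ("where is the sofa?") against geometry it can also render from.
scene = [
    SemanticGaussian((1.0, 0.2, 0.4), (0.3, 0.1, 0.1), (1, 0, 0, 0),
                     0.9, (0.6, 0.4, 0.3), "sofa"),
    SemanticGaussian((0.0, 0.0, 0.0), (2.0, 2.0, 0.01), (1, 0, 0, 0),
                     1.0, (0.8, 0.8, 0.8), "floor"),
]
sofa_gaussians = [g for g in scene if g.label == "sofa"]
print(len(sofa_gaussians), "sofa primitive(s)")
```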