ExoActor: Exocentric Video Generation as Generalizable Interactive Humanoid Control
Yanghao Zhou, Jingyu Ma, Yibo Peng, Zhenguo Sun, Yu Bai + 1 more
TLDR
ExoActor uses video generation to model complex humanoid interactions, enabling generalization without additional real-world data.
Key contributions
- Introduces ExoActor, a framework for interaction-rich humanoid control via large-scale video generation.
- Leverages third-person video generation as a unified interface for modeling interaction dynamics.
- Synthesizes plausible task execution videos encoding coordinated robot-environment-object interactions.
- Transforms generated videos into executable humanoid behaviors using motion estimation and a general controller.
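The three-stage pipeline above (video generation → motion estimation → motion control) can be sketched as plain data flow. This is a minimal illustrative sketch only: the paper does not publish code, and every name here (`generate_video`, `estimate_motion`, `execute_motion`, the dataclasses) is a hypothetical stand-in for the components it describes.

```python
from dataclasses import dataclass, field
from typing import List

# All names below are illustrative; ExoActor's actual implementation is not public.

@dataclass
class VideoClip:
    """Synthesized third-person execution video (stand-in for real frames)."""
    task: str
    frames: List[str] = field(default_factory=list)

@dataclass
class MotionSequence:
    """Per-frame human motion estimated from the generated video."""
    poses: List[str] = field(default_factory=list)

def generate_video(task: str, scene: str, n_frames: int = 4) -> VideoClip:
    # Stage 1 (hypothetical): a large video model, conditioned on the task
    # instruction and scene context, renders a plausible execution video.
    frames = [f"{scene}:{task}:frame{i}" for i in range(n_frames)]
    return VideoClip(task=task, frames=frames)

def estimate_motion(clip: VideoClip) -> MotionSequence:
    # Stage 2 (hypothetical): pose estimation recovers human motion
    # from each generated frame.
    return MotionSequence(poses=[f"pose<{f}>" for f in clip.frames])

def execute_motion(motion: MotionSequence) -> List[str]:
    # Stage 3 (hypothetical): a general motion controller turns the
    # estimated motion into humanoid actions.
    return [f"action<{p}>" for p in motion.poses]

def exoactor_pipeline(task: str, scene: str) -> List[str]:
    """End-to-end: instruction + scene -> task-conditioned behavior sequence."""
    clip = generate_video(task, scene)
    motion = estimate_motion(clip)
    return execute_motion(motion)
```

Usage would look like `exoactor_pipeline("open the door", "kitchen")`, yielding one controller action per generated frame; the point is only to show how the generated video acts as the unified interface between task intent and executable behavior.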
Why it matters
ExoActor tackles a core obstacle in humanoid robotics: modeling complex, interaction-rich behavior at scale. By using video generation as the supervisory signal, it offers a scalable way to synthesize realistic robot behaviors without task-specific real-world data collection, a step toward general-purpose humanoid intelligence and greater robot autonomy.
Original Abstract
Humanoid control systems have made significant progress in recent years, yet modeling fluent interaction-rich behavior between a robot, its surrounding environment, and task-relevant objects remains a fundamental challenge. This difficulty arises from the need to jointly capture spatial context, temporal dynamics, robot actions, and task intent at scale, which is a poor match to conventional supervision. We propose ExoActor, a novel framework that leverages the generalization capabilities of large-scale video generation models to address this problem. The key insight in ExoActor is to use third-person video generation as a unified interface for modeling interaction dynamics. Given a task instruction and scene context, ExoActor synthesizes plausible execution processes that implicitly encode coordinated interactions between robot, environment, and objects. Such video output is then transformed into executable humanoid behaviors through a pipeline that estimates human motion and executes it via a general motion controller, yielding a task-conditioned behavior sequence. To validate the proposed framework, we implement it as an end-to-end system and demonstrate its generalization to new scenarios without additional real-world data collection. Furthermore, we conclude by discussing limitations of the current implementation and outlining promising directions for future research, illustrating how ExoActor provides a scalable approach to modeling interaction-rich humanoid behaviors, potentially opening a new avenue for generative models to advance general-purpose humanoid intelligence.