Robot Learning from Human Videos: A Survey
Junyi Ma, Erhang Zhang, Haoran Yang, Ditao Li, Chenyang Xu + 2 more
TLDR
This survey reviews robot learning from human videos, covering skill transfer, data foundations, and future challenges for scalable embodied AI.
Key contributions
- Reviews human-video-based learning techniques for robot skill transfer.
- Introduces a hierarchical taxonomy for transferring human videos to robot skills, covering task-, observation-, and action-oriented pathways.
- Investigates data foundations, including widely-used human video datasets and video generation schemes.
- Highlights challenges, limitations, and delineates potential avenues for future research.
Why it matters
This survey addresses the critical bottleneck of scaling robot data by reviewing learning from human videos. It provides a comprehensive overview of techniques, data foundations, and future directions that are crucial for advancing generalist robotic systems.
Original Abstract
A critical bottleneck hindering further advancement in embodied AI and robotics is the challenge of scaling robot data. To address this, the field of learning robot manipulation skills from human video data has attracted rapidly growing attention in recent years, driven by the abundance of human activity videos and advances in computer vision. This line of research promises to enable robots to acquire skills passively from the vast and readily available resource of human demonstrations, substantially favoring scalable learning for generalist robotic systems. Therefore, we present this survey to provide a comprehensive and up-to-date review of human-video-based learning techniques in robotics, focusing on both human-robot skill transfer and data foundations. We first review the policy learning foundations in robotics, and then describe the fundamental interfaces to incorporate human videos. Subsequently, we introduce a hierarchical taxonomy of transferring human videos to robot skills, covering task-, observation-, and action-oriented pathways, along with a cross-family analysis of their couplings with different data configurations and learning paradigms. In addition, we investigate the data foundations including widely-used human video datasets and video generation schemes, and provide large-scale statistical trends in dataset development and utilization. Ultimately, we emphasize the challenges and limitations intrinsic to this field, and delineate potential avenues for future research. The paper list of our survey is available at https://github.com/IRMVLab/awesome-robot-learning-from-human-videos.