ActiveGlasses: Learning Manipulation with Active Vision from Ego-centric Human Demonstration
Yanwen Zou, Chenyang Shi, Wenye Yu, Han Xue, Jun Lv, et al.
TLDR
ActiveGlasses enables robots to learn manipulation from ego-centric human demonstrations, using active vision to achieve zero-shot transfer and outperform baselines.
Key contributions
- ActiveGlasses system for learning robot manipulation from ego-centric human demonstrations with active vision.
- Utilizes smart glasses with a stereo camera for both human demo data collection and robot policy inference.
- Enables zero-shot transfer by extracting object trajectories and using an object-centric point-cloud policy.
- Consistently outperforms baselines and generalizes across two robot platforms in complex manipulation tasks.
Why it matters
This paper introduces ActiveGlasses, a system that simplifies robot data collection by using smart glasses to capture naturally coordinated human manipulation and perception. It enables zero-shot transfer to robots, addressing a key obstacle to everyday deployment, and could significantly scale robot learning.
Original Abstract
Large-scale real-world robot data collection is a prerequisite for bringing robots into everyday deployment. However, existing pipelines often rely on specialized handheld devices to bridge the embodiment gap, which not only increases operator burden and limits scalability, but also makes it difficult to capture the naturally coordinated perception-manipulation behaviors of human daily interaction. This challenge calls for a more natural system that can faithfully capture human manipulation and perception behaviors while enabling zero-shot transfer to robotic platforms. We introduce ActiveGlasses, a system for learning robot manipulation from ego-centric human demonstrations with active vision. A stereo camera mounted on smart glasses serves as the sole perception device for both data collection and policy inference: the operator wears it during bare-hand demonstrations, and the same camera is mounted on a 6-DoF perception arm during deployment to reproduce human active vision. To enable zero-shot transfer, we extract object trajectories from demonstrations and use an object-centric point-cloud policy to jointly predict manipulation and head movement. Across several challenging tasks involving occlusion and precise interaction, ActiveGlasses achieves zero-shot transfer with active vision, consistently outperforms strong baselines under the same hardware setup, and generalizes across two robot platforms.
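The abstract describes an object-centric point-cloud policy that jointly predicts manipulation actions and head movement. As a rough illustration only (the paper's actual architecture is not specified here), a minimal PyTorch sketch of such a joint-head policy might look like the following; the class name, feature dimensions, action parameterizations, and PointNet-style encoder are all assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ObjectCentricPointCloudPolicy(nn.Module):
    """Hypothetical sketch: encode an object point cloud with a
    PointNet-style backbone, then jointly predict a manipulation
    action and a head (camera) movement from one shared feature."""

    def __init__(self, point_dim=3, feat_dim=256, manip_dim=7, head_dim=6):
        super().__init__()
        # Per-point MLP followed by max-pooling (PointNet-style).
        self.point_encoder = nn.Sequential(
            nn.Linear(point_dim, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )
        # Two heads over the shared object-centric feature.
        self.manip_head = nn.Linear(feat_dim, manip_dim)  # e.g. 6-DoF pose + gripper
        self.head_head = nn.Linear(feat_dim, head_dim)    # e.g. 6-DoF camera motion

    def forward(self, points):
        # points: (batch, num_points, point_dim), expressed in an
        # object-centric frame so the policy does not depend on
        # where the object sits in the scene.
        per_point = self.point_encoder(points)       # (B, N, feat_dim)
        global_feat = per_point.max(dim=1).values    # (B, feat_dim)
        return self.manip_head(global_feat), self.head_head(global_feat)

policy = ObjectCentricPointCloudPolicy()
cloud = torch.randn(1, 1024, 3)                      # dummy object point cloud
manip_action, head_motion = policy(cloud)
print(manip_action.shape, head_motion.shape)         # (1, 7) and (1, 6)
```

The single shared encoder with two output heads simply reflects the abstract's "jointly predict manipulation and head movement" phrasing; the real system may differ substantially in backbone, action representation, and training objective.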