ArXiv TLDR

A Unified Multi-Layer Framework for Skill Acquisition from Imperfect Human Demonstrations

arXiv:2604.08341

Zi-Qi Yang, Mehrdad R. Kermani

cs.RO

TLDR

A new multi-layer framework enables safer, more intuitive, and efficient robot skill acquisition from imperfect human demonstrations.

Key contributions

  • Real-time LfD learns both the trajectory and variable impedance from a single human demonstration.
  • Null-space optimization keeps kinesthetic teaching intuitive by proactively managing singularities.
  • Whole-body null-space compliance lets the robot adapt safely to external interactions after learning, without compromising the main task.
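The null-space ideas in the last two bullets rest on the standard redundancy-resolution identity: a secondary motion projected through N = I − J⁺J cannot disturb the end-effector task. Below is a minimal NumPy sketch of that projection for a 7-joint arm with a 6-DOF task — an illustration of the general technique, not the paper's controller, and all numeric values are made up:

```python
import numpy as np

# Illustrative null-space projection for a redundant arm (7 joints,
# 6-DOF task). Not the paper's implementation; values are arbitrary.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))            # task Jacobian (assumed full row rank)
x_dot = np.array([0.1, 0.0, 0.05, 0.0, 0.0, 0.02])  # desired end-effector twist
q_dot_secondary = rng.standard_normal(7)   # e.g. a singularity-avoidance motion

J_pinv = np.linalg.pinv(J)                 # Moore-Penrose pseudoinverse
N = np.eye(7) - J_pinv @ J                 # null-space projector of J

# Secondary motion is filtered through N, so it cannot affect the task:
q_dot = J_pinv @ x_dot + N @ q_dot_secondary

assert np.allclose(J @ q_dot, x_dot)       # main task is preserved exactly
```

The same projector applied at the torque level is what allows the whole body to comply with external pushes while the end-effector task continues undisturbed.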

Why it matters

This framework unifies fragmented HRI systems, offering a robust, compliant, and efficient approach to robot skill acquisition. It significantly improves safety and intuitiveness for human-robot collaboration, making robots more adaptable.

Original Abstract

Current Human-Robot Interaction (HRI) systems for skill teaching are fragmented, and existing approaches in the literature do not offer a cohesive framework that is simultaneously efficient, intuitive, and universally safe. This paper presents a novel, layered control framework that addresses this fundamental gap by enabling robust, compliant Learning from Demonstration (LfD) built upon a foundation of universal robot compliance. The proposed approach is structured in three progressive and interconnected stages. First, we introduce a real-time LfD method that learns both the trajectory and variable impedance from a single demonstration, significantly improving efficiency and reproduction fidelity. To ensure high-quality and intuitive kinesthetic teaching, we then present a null-space optimization strategy that proactively manages singularities and provides a consistent interaction feel during human demonstration. Finally, to ensure generalized safety, we introduce a foundational null-space compliance method that enables the entire robot body to compliantly adapt to post-learning external interactions without compromising main task performance. This final contribution transforms the system into a versatile HRI platform, moving beyond end-effector (EE)-specific applications. We validate the complete framework through comprehensive comparative experiments on a 7-DOF KUKA LWR robot. The results demonstrate a safer, more intuitive, and more efficient unified system for a wide range of human-robot collaborative tasks.
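"Variable impedance" in the abstract refers to a spring-damper law whose gains change along the trajectory — typically stiff where the demonstration was consistent and soft where it varied. A minimal sketch of that idea (not the paper's learning method; gains and poses are invented for the example):

```python
import numpy as np

# Variable-impedance illustration (not the paper's method): a Cartesian
# spring-damper whose stiffness K is scheduled along the trajectory.
def impedance_force(x, x_dot, x_des, xd_des, K, D):
    """Commanded end-effector force for one control tick."""
    return K @ (x_des - x) + D @ (xd_des - x_dot)

K_soft = np.diag([100.0, 100.0, 100.0])     # N/m, compliant phase
K_stiff = np.diag([1000.0, 1000.0, 1000.0]) # N/m, accurate phase
D = np.diag([40.0, 40.0, 40.0])             # N*s/m damping

x = np.array([0.30, 0.00, 0.48])            # current EE position (m)
x_des = np.array([0.32, 0.00, 0.50])        # desired EE position (m)
f_soft = impedance_force(x, np.zeros(3), x_des, np.zeros(3), K_soft, D)
f_stiff = impedance_force(x, np.zeros(3), x_des, np.zeros(3), K_stiff, D)
# The same tracking error yields a 10x larger corrective force when stiff.
```

Learning the stiffness schedule together with the trajectory from a single demonstration is what the first stage of the framework contributes.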
