Safe Human-to-Humanoid Motion Imitation Using Control Barrier Functions
Wenqi Cai, John Abanes, Nikolaos Evangeliou, Anthony Tzes
TLDR
This paper introduces a vision-based framework for safe human-to-humanoid motion imitation, using Control Barrier Functions to prevent collisions.
Key contributions
- Vision-based framework for humanoid robots to safely imitate human movements.
- Captures human skeletal keypoints via a single camera for motion retargeting.
- Utilizes a Control Barrier Function (CBF) layer (QP) for real-time safety enforcement.
- CBF filters commands to prevent both robot self-collisions and human-robot collisions.
Why it matters
Ensuring operational safety is a prerequisite for deploying humanoids alongside people. By filtering imitation commands through a Control Barrier Function layer in real time, this framework lets a robot mimic human motion from a single camera without risking self-collisions or collisions with the human, making teleoperation-style collaboration safer and more practical.
Original Abstract
Ensuring operational safety is critical for human-to-humanoid motion imitation. This paper presents a vision-based framework that enables a humanoid robot to imitate human movements while avoiding collisions. Human skeletal keypoints are captured by a single camera and converted into joint angles for motion retargeting. Safety is enforced through a Control Barrier Function (CBF) layer formulated as a Quadratic Program (QP), which filters imitation commands to prevent both self-collisions and human-robot collisions. Simulation results validate the effectiveness of the proposed framework for real-time collision-aware motion imitation.
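To make the safety layer concrete, here is a minimal sketch of a generic CBF-QP filter of the kind the abstract describes, not the authors' implementation. For a single affine barrier constraint `a @ u + b >= 0` (standing in for the CBF condition `Lg_h(x) u + Lf_h(x) + alpha * h(x) >= 0`), the QP that minimally modifies the desired imitation command has a closed-form solution; the symbols `u_des`, `a`, and `b` are illustrative assumptions, not quantities from the paper.

```python
import numpy as np

def cbf_qp_filter(u_des, a, b):
    """Closed-form solution of the single-constraint CBF-QP:
        min ||u - u_des||^2   s.t.   a @ u + b >= 0.
    If the desired command already satisfies the barrier condition,
    it passes through unchanged; otherwise it is minimally projected
    onto the constraint boundary."""
    slack = a @ u_des + b
    if slack >= 0.0:
        return u_des  # desired command is already safe
    # Minimal correction: project onto the hyperplane a @ u + b = 0
    return u_des - (slack / (a @ a)) * a

# Hypothetical example: a 2-DoF joint-velocity command. The second
# constraint is active, so the filter damps the unsafe component.
u_safe = cbf_qp_filter(np.array([1.0, -1.0]), np.array([0.0, 1.0]), 0.2)
print(u_safe)  # → [ 1.  -0.2]
```

In the full framework the QP would stack one such constraint per self-collision and human-robot collision pair and be solved numerically at every control step, but the pass-through-or-minimally-correct behavior shown here is the core idea.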