Adaptor: Advancing Assistive Teleoperation with Few-Shot Learning and Cross-Operator Generalization
Yu Liu, Yihang Yin, Tianlv Huang, Fei Yan, Yuan Xu + 6 more
TLDR
Adaptor is a few-shot learning framework that achieves robust cross-operator intent recognition in assistive teleoperation by bridging domain gaps.
Key contributions
- Introduces Adaptor, a few-shot framework for robust cross-operator intent recognition in assistive teleoperation.
- Employs preprocessing to model intent uncertainty via noise injection and geometry-aware keyframe extraction.
- Uses policy learning with Intention and Action Experts, fusing processed trajectories with vision-language model (VLM) context.
- Achieves state-of-the-art performance, improving success rates and efficiency across operators.
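The paper's preprocessing stage can be illustrated with a minimal sketch. The assumptions here are mine, not the paper's: Gaussian perturbations stand in for the unspecified noise-injection model, and Ramer-Douglas-Peucker simplification stands in for "geometry-aware keyframe extraction"; function names and parameter values are illustrative.

```python
import numpy as np

def inject_noise(traj, sigma=0.01, n_samples=5, seed=0):
    """Synthesize perturbed copies of a trajectory to model intent
    uncertainty. Gaussian noise is an assumption; the paper does not
    specify its perturbation model."""
    rng = np.random.default_rng(seed)
    return [traj + rng.normal(0.0, sigma, traj.shape) for _ in range(n_samples)]

def keyframes_rdp(traj, eps=0.05):
    """Geometry-aware keyframe extraction via Ramer-Douglas-Peucker:
    recursively keep the point farthest from the start-end chord.
    Returns keyframe indices into `traj` (an (N, D) array)."""
    if len(traj) < 3:
        return list(range(len(traj)))
    start, end = traj[0], traj[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    diffs = traj - start
    if norm == 0.0:
        dists = np.linalg.norm(diffs, axis=1)
    else:
        # Perpendicular distance of each point to the start-end line.
        proj = np.outer(diffs @ chord / norm**2, chord)
        dists = np.linalg.norm(diffs - proj, axis=1)
    i = int(np.argmax(dists))
    if dists[i] <= eps:
        return [0, len(traj) - 1]   # segment is straight enough
    left = keyframes_rdp(traj[:i + 1], eps)
    right = keyframes_rdp(traj[i:], eps)
    return left[:-1] + [k + i for k in right]
```

For an L-shaped path such as `[(0,0), (1,0), (2,0), (2,1), (2,2)]`, the extractor keeps only the endpoints and the corner, which is the behavior one would want from a geometry-aware keyframe step.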
Why it matters
Assistive teleoperation struggles with diverse operator habits, leading to unstable intent recognition. Adaptor offers a robust, few-shot solution that significantly improves success rates and efficiency. This advancement makes shared control systems more reliable and broadly applicable across varying user expertise.
Original Abstract
Assistive teleoperation enhances efficiency via shared control, yet inter-operator variability, stemming from diverse habits and expertise, induces highly heterogeneous trajectory distributions that undermine intent recognition stability. We present Adaptor, a few-shot framework for robust cross-operator intent recognition. The Adaptor bridges the domain gap through two stages: (i) preprocessing, which models intent uncertainty by synthesizing trajectory perturbations via noise injection and performs geometry-aware keyframe extraction; and (ii) policy learning, which encodes the processed trajectories with an Intention Expert and fuses them with the pre-trained vision-language model context to condition an Action Expert for action generation. Experiments on real-world and simulated benchmarks demonstrate that Adaptor achieves state-of-the-art performance, improving success rates and efficiency over baselines. Moreover, the method exhibits low variance across operators with varying expertise, demonstrating robust cross-operator generalization.
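The policy-learning stage's data flow (Intention Expert encodes the processed trajectory, the result is fused with VLM context, and the fused representation conditions an Action Expert) can be sketched as below. This is only a shape-level illustration: the random linear maps, mean pooling, and concatenation-based fusion are my placeholders, since the digest does not describe the actual architectures or the fusion operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def intention_expert(keyframes, d=32):
    """Encode a keyframed trajectory (K, D) into a d-dim intent embedding.
    A mean-pooled random projection stands in for the learned encoder."""
    W = rng.standard_normal((keyframes.shape[1], d))
    return np.tanh(keyframes @ W).mean(axis=0)

def fuse(intent_emb, vlm_ctx):
    """Fuse the intent embedding with a VLM context vector.
    Concatenation is one plausible choice; the paper's operator is unstated."""
    return np.concatenate([intent_emb, vlm_ctx])

def action_expert(cond, action_dim=7):
    """Generate an action conditioned on the fused representation
    (a single random projection stands in for the action head)."""
    W = rng.standard_normal((cond.shape[0], action_dim))
    return np.tanh(cond @ W)

# Illustrative end-to-end pass: 4 keyframes in 3-D, a 16-dim VLM context,
# and a 7-DoF action output (dimensions are arbitrary choices).
intent = intention_expert(np.zeros((4, 3)))
action = action_expert(fuse(intent, np.zeros(16)))
```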