Robotic Affection -- Opportunities of AI-based haptic interactions to improve social robotic touch through a multi-deep-learning approach
TLDR
Proposes a multi-model AI approach to enhance social robotic touch by decomposing affective touch into specialized subtasks.
Key contributions
- Analyzes current AI, haptics, and robotics limits in affective social touch.
- Introduces a multi-deep-learning architecture inspired by neurobiology.
- Treats affective touch as distributed, closed-loop perceptual tasks.
- Enables scalable, collaborative Sim-to-Real development for social robots.
Why it matters
This paper addresses the challenge of making robotic touch socially expressive and natural. Its novel architecture fosters interdisciplinary progress toward more human-like social robots.
Original Abstract
Despite advances in robotic grasping and dexterity through haptic information, affective social touch, such as handshaking or reassuring stroking, remains a major challenge in Human-Robot Interaction. This position paper examines current progress and limitations across artificial intelligence, haptics, and robotics research, and proposes a novel multi-model architecture to address these gaps. Drawing inspiration from neurobiology, we decompose affective touch into distinct, specialized subtask models. By treating affective touch as a distributed, closed-loop perceptual task rather than a monolithic motoric movement, we aim to overcome the "haptic uncanny valley" through a peer-to-peer, state-sharing framework. Our approach supports scalable and cumulative development within a Sim-to-Real pipeline, fostering interdisciplinary collaboration. By enabling haptics, AI, and robotics researchers to contribute independently yet coherently, we outline a pathway toward a unified, expressive system for social robotics.
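The paper itself gives no implementation, but the abstract's idea of specialized subtask models cooperating through a peer-to-peer, state-sharing closed loop could be sketched roughly as below. All names (`SharedState`, `SubtaskModel`, `closed_loop`, the "contact" and "stroke" subtasks) are hypothetical illustrations, not from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class SharedState:
    """Hypothetical shared blackboard: every subtask model reads and
    writes here, giving peers access to each other's latest outputs."""
    signals: dict = field(default_factory=dict)

class SubtaskModel:
    """One specialized subtask model (e.g. contact detection or stroke
    modulation). A real system would wrap a learned network here."""
    def __init__(self, name: str):
        self.name = name

    def step(self, state: SharedState) -> None:
        # Placeholder for inference: read peers' signals, write our own.
        peer_view = dict(state.signals)  # what this model "perceives"
        state.signals[self.name] = f"{self.name} saw {len(peer_view)} peer signal(s)"

def closed_loop(models: list[SubtaskModel], state: SharedState, ticks: int = 3) -> SharedState:
    """Run all subtask models repeatedly; each iteration closes the loop
    by letting every model react to the state its peers just updated."""
    for _ in range(ticks):
        for m in models:
            m.step(state)
    return state

state = closed_loop([SubtaskModel("contact"), SubtaskModel("stroke")], SharedState())
```

The point of the sketch is only the control structure: no single monolithic policy owns the touch behavior; instead independent models iterate against a shared state, which is what would let different research groups contribute subtask models independently.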