Active Embodiment Identification with Reinforcement Learning for Legged Robots
2605.08020
cs.RO
TLDR
A reinforcement learning method that lets legged robots actively identify their own embodiment parameters through interaction.
Key contributions
- Introduces active embodiment identification combining behavior learning and embodiment prediction.
- Uses a history-augmented URMA architecture for joint-level and global parameter inference.
- Trains in simulation across diverse legged robot morphologies.
- Enables robots to adapt by understanding their physical parameters through interaction.
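The digest does not spell out how the history-augmented predictor is structured, but the idea of inferring per-joint and global parameters from an interaction history can be sketched roughly as below. All shapes, parameter names (`W_joint`, `W_global`), and the linear two-head split are illustrative assumptions, not the paper's URMA details.

```python
import numpy as np

rng = np.random.default_rng(0)

HISTORY_LEN = 16   # timesteps of interaction history
OBS_DIM = 8        # per-joint observation size (angle, velocity, torque, ...)
ACT_DIM = 1        # per-joint action (e.g. commanded torque)
N_JOINTS = 12      # e.g. a quadruped with 3 actuated joints per leg
JOINT_PARAMS = 2   # per-joint embodiment parameters (e.g. link mass, friction)
GLOBAL_PARAMS = 3  # global embodiment parameters (e.g. base mass, CoM offset)

feat_dim = HISTORY_LEN * (OBS_DIM + ACT_DIM)

# Randomly initialised linear heads stand in for the learned encoder:
# one head shared across joints, one head over the concatenated histories.
W_joint = rng.normal(scale=0.01, size=(feat_dim, JOINT_PARAMS))
W_global = rng.normal(scale=0.01, size=(N_JOINTS * feat_dim, GLOBAL_PARAMS))

def predict_embodiment(history):
    """history: (N_JOINTS, HISTORY_LEN, OBS_DIM + ACT_DIM) interaction buffer."""
    flat = history.reshape(N_JOINTS, -1)      # flatten each joint's history
    joint_est = flat @ W_joint                # (N_JOINTS, JOINT_PARAMS)
    global_est = flat.reshape(-1) @ W_global  # (GLOBAL_PARAMS,)
    return joint_est, global_est

# One rollout's worth of logged interaction data.
history = rng.normal(size=(N_JOINTS, HISTORY_LEN, OBS_DIM + ACT_DIM))
joint_est, global_est = predict_embodiment(history)
print(joint_est.shape, global_est.shape)  # (12, 2) (3,)
```

In the paper's setting the policy is additionally trained to *seek* informative interactions, so the history fed to the predictor is shaped by the learned behavior rather than being passively observed.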
Why it matters
Knowing a robot's embodiment is key to adaptive control. This method lets legged robots actively infer their own physical parameters through interaction, improving adaptation across varied designs.
Original Abstract
We present an active embodiment identification method for legged robots that jointly learns information-seeking behavior and explicit embodiment prediction. Using a history-augmented URMA architecture, the method infers joint-level and global embodiment parameters through interaction with the environment in simulation across different morphologies.