Learning Task-Invariant Properties via Dreamer: Enabling Efficient Policy Transfer for Quadruped Robots
Junyang Liang, Yuxuan Liu, Yabin Chang, Junfan Lin, Junkai Ji, et al.
TLDR
DreamTIP integrates Task-Invariant Properties into Dreamer's world model for efficient sim-to-real transfer, enabling robust quadruped robot locomotion.
Key contributions
- DreamTIP learns Task-Invariant Properties (TIPs) within Dreamer's world model for sim-to-real transfer.
- Uses LLMs to identify robust TIPs like contact stability and terrain clearance.
- Employs an efficient adaptation strategy with mixed replay buffer for rapid real-world calibration.
- Achieves 100% real-world success on Climb task, significantly outperforming baselines.
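The core idea of using TIPs as auxiliary prediction targets can be illustrated with a minimal sketch. Everything below is hypothetical (the dimensions, the linear heads, and the function names are not from the paper); in the actual method these would be learned networks inside Dreamer's world-model loss:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent state z_t and two scalar TIPs
# (e.g. contact stability, terrain clearance). Names are illustrative.
LATENT_DIM, TIP_DIM = 16, 2

# A linear head standing in for the auxiliary TIP predictor
# (a small learned network in a real Dreamer-style world model).
W_tip = rng.normal(scale=0.1, size=(LATENT_DIM, TIP_DIM))

def tip_auxiliary_loss(z, tip_targets, w=W_tip):
    """Mean-squared error between predicted and observed TIPs.

    Conceptually, adding this term to the world-model objective
    pressures the latent z to encode dynamics-insensitive properties.
    """
    pred = z @ w
    return float(np.mean((pred - tip_targets) ** 2))

# Toy batch: 8 latent states with simulated TIP observations.
z = rng.normal(size=(8, LATENT_DIM))
tips = rng.normal(size=(8, TIP_DIM))
loss = tip_auxiliary_loss(z, tips)
print(round(loss, 4))
```

Because the TIP targets are chosen (per the paper, with LLM guidance) to be robust across tasks, gradients from this auxiliary term shape representations that transfer better than ones trained on reconstruction alone.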
Why it matters
Traditional sim-to-real transfer for quadruped robots is challenging due to manual feature design or costly real-world fine-tuning. DreamTIP addresses this by learning task-invariant properties, enabling robust and efficient policy transfer. This significantly improves robot locomotion across diverse terrains, making real-world deployment more feasible.
Original Abstract
Achieving quadruped robot locomotion across diverse and dynamic terrains presents significant challenges, primarily due to the discrepancies between simulation environments and real-world conditions. Traditional sim-to-real transfer methods often rely on manual feature design or costly real-world fine-tuning. To address these limitations, this paper proposes the DreamTIP framework, which incorporates Task-Invariant Properties learning within the Dreamer world model architecture to enhance sim-to-real transfer capabilities. Guided by large language models, DreamTIP identifies and leverages Task-Invariant Properties, such as contact stability and terrain clearance, which exhibit robustness to dynamic variations and strong transferability across tasks. These properties are integrated into the world model as auxiliary prediction targets, enabling the policy to learn representations that are insensitive to underlying dynamic changes. Furthermore, an efficient adaptation strategy is designed, employing a mixed replay buffer and regularization constraints to rapidly calibrate to real-world dynamics while effectively mitigating representation collapse and catastrophic forgetting. Extensive experiments on complex terrains, including Stair, Climb, Tilt, and Crawl, demonstrate that DreamTIP significantly outperforms state-of-the-art baselines in both simulated and real-world environments. Our method achieves an average performance improvement of 28.1% across eight distinct simulated transfer tasks. In the real-world Climb task, the baseline method achieved only a 10% success rate, whereas our method attained a 100% success rate. These results indicate that incorporating Task-Invariant Properties into Dreamer learning offers a novel solution for achieving robust and transferable robot locomotion.
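The mixed replay buffer mentioned in the adaptation strategy can be sketched as follows. This is a simplified illustration, not the paper's implementation: the class name, the `real_fraction` mixing parameter, and the sampling scheme are all assumptions standing in for whatever schedule DreamTIP actually uses:

```python
import random

class MixedReplayBuffer:
    """Toy mixed replay buffer for sim-to-real adaptation.

    During real-world calibration, each batch mixes a fixed fraction of
    real transitions with simulation transitions, so the model adapts to
    real dynamics without forgetting what it learned in simulation.
    """

    def __init__(self, real_fraction=0.5, seed=0):
        self.sim, self.real = [], []       # separate per-domain storage
        self.real_fraction = real_fraction
        self.rng = random.Random(seed)

    def add(self, transition, from_real):
        (self.real if from_real else self.sim).append(transition)

    def sample(self, batch_size):
        # Cap the real-data share by what has been collected so far.
        n_real = min(int(batch_size * self.real_fraction), len(self.real))
        n_sim = batch_size - n_real
        batch = (self.rng.choices(self.real, k=n_real) if n_real else []) \
              + (self.rng.choices(self.sim, k=n_sim) if n_sim else [])
        self.rng.shuffle(batch)
        return batch
```

Keeping simulation transitions in every adaptation batch is one simple way to mitigate the catastrophic forgetting the abstract describes; the regularization constraints on the representation would be applied in the training loss, outside this sketch.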