M$^{2}$GRPO: Mamba-based Multi-Agent Group Relative Policy Optimization for Biomimetic Underwater Robots Pursuit
Yukai Feng, Zhiheng Wu, Zhengxing Wu, Junwen Gu, Junzhi Yu
TLDR
M²GRPO uses Mamba-based policies and group-relative optimization for stable, scalable cooperative pursuit in biomimetic underwater robots.
Key contributions
- Integrates a Mamba-based policy that captures long-horizon temporal dependencies from observation history and encodes inter-agent interactions with attention.
- Employs group-relative policy optimization for stable credit assignment and reduced training-resource demands (see the sketch after this list).
- Outperforms MAPPO and recurrent baselines in pursuit success and capture efficiency.
- Provides a practical and scalable solution for cooperative underwater pursuit systems.
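A minimal sketch of the group-relative advantage idea referenced above: each agent's episode return is normalized against the group's own statistics, so no learned critic is needed as a baseline. The function name and the exact normalization (standardization with an epsilon guard) are assumptions; the abstract only states that rewards are normalized across agents within each episode.

```python
import numpy as np

def group_relative_advantages(episode_returns):
    """Standardize per-agent episode returns within the group.

    episode_returns: array of shape (n_agents,), each agent's total
    reward for one episode. The group statistics stand in for a learned
    value-function baseline, which is where the training-resource
    savings presumably come from.
    """
    returns = np.asarray(episode_returns, dtype=np.float64)
    mean, std = returns.mean(), returns.std()
    return (returns - mean) / (std + 1e-8)  # epsilon avoids divide-by-zero

# Example: three pursuers with different episode returns.
print(group_relative_advantages([4.0, 6.5, 5.5]))  # zero-mean, unit-scale
```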
Why it matters
This paper addresses fundamental challenges in cooperative pursuit for biomimetic underwater robots, such as long-horizon decision-making under partial observability and inter-robot coordination. M²GRPO offers a stable and scalable solution, improving pursuit success rate and capture efficiency over MAPPO and recurrent baselines in both simulation and real-world pool experiments, and advances multi-agent reinforcement learning for complex real-world robotic systems.
Original Abstract
Traditional policy learning methods in cooperative pursuit face fundamental challenges in biomimetic underwater robots, where long-horizon decision making, partial observability, and inter-robot coordination require both expressiveness and stability. To address these issues, a novel framework called Mamba-based multi-agent group relative policy optimization (M$^{2}$GRPO) is proposed, which integrates a selective state-space Mamba policy with group-relative policy optimization under the centralized-training and decentralized-execution (CTDE) paradigm. Specifically, the Mamba-based policy leverages observation history to capture long-horizon temporal dependencies and exploits attention-based relational features to encode inter-agent interactions, producing bounded continuous actions through normalized Gaussian sampling. To further improve credit assignment without sacrificing stability, the group-relative advantages are obtained by normalizing rewards across agents within each episode and optimized through a multi-agent extension of GRPO, significantly reducing the demand for training resources while enabling stable and scalable policy updates. Extensive simulations and real-world pool experiments across team scales and evader strategies demonstrate that M$^{2}$GRPO consistently outperforms MAPPO and recurrent baselines in both pursuit success rate and capture efficiency. Overall, the proposed framework provides a practical and scalable solution for cooperative underwater pursuit with biomimetic robot systems.
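To make the actor and update rule described in the abstract concrete, here is a hedged PyTorch sketch under stated assumptions: the `nn.GRU` encoder is only a placeholder for the paper's selective state-space Mamba block (and omits the attention-based relational features), tanh squashing is one reading of "bounded continuous actions through normalized Gaussian sampling", and `BoundedGaussianPolicy` / `grpo_clipped_loss` are hypothetical names, not the authors' code.

```python
import torch
import torch.nn as nn

class BoundedGaussianPolicy(nn.Module):
    """Sketch of a per-agent actor with bounded continuous actions."""

    def __init__(self, obs_dim: int, feat_dim: int, act_dim: int):
        super().__init__()
        # Placeholder sequence encoder over the observation history;
        # the paper uses a selective state-space Mamba block here.
        self.encoder = nn.GRU(obs_dim, feat_dim, batch_first=True)
        self.mu = nn.Linear(feat_dim, act_dim)
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # state-independent std

    def forward(self, obs_history):             # (batch, time, obs_dim)
        feats, _ = self.encoder(obs_history)
        feats = feats[:, -1]                     # last-step features
        dist = torch.distributions.Normal(self.mu(feats), self.log_std.exp())
        raw = dist.rsample()                     # reparameterized Gaussian sample
        action = torch.tanh(raw)                 # bound actions to [-1, 1]
        # Log-probability with the tanh change-of-variables correction.
        log_prob = (dist.log_prob(raw)
                    - torch.log(1.0 - action.pow(2) + 1e-6)).sum(-1)
        return action, log_prob


def grpo_clipped_loss(log_prob, old_log_prob, advantage, clip_eps=0.2):
    """PPO-style clipped surrogate, driven here by group-relative
    advantages rather than a critic's value estimates."""
    ratio = (log_prob - old_log_prob).exp()
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    return -torch.min(ratio * advantage, clipped * advantage).mean()
```

In GRPO-style methods the clipped surrogate uses group-normalized advantages instead of critic estimates, which is presumably what lets the framework drop a centralized value network and reduce training cost.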