Task-specific Subnetwork Discovery in Reinforcement Learning for Autonomous Underwater Navigation
Yi-Ling Liu, Melvin Laux, Mariela De Lucas Alvarez, Frank Kirchner, Rebecca Adam
TLDR
This paper shows that multi-task RL agents for underwater navigation specialize by using only about 1.5% of their network weights for task differentiation.
Key contributions
- Analyzes internal structure of multi-task RL for autonomous underwater navigation.
- Identifies task-specific subnetworks responsible for navigating towards different species.
- Finds only 1.5% of network weights differentiate between related tasks in contextual multi-task RL.
- Highlights the importance of context variables: roughly 85% of task-specific weights connect the context-variable input nodes to the first hidden layer.
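The weight-level measurements above can be illustrated with a small sketch. This is not the authors' code; it assumes per-task binary weight masks (`mask_a`, `mask_b`) marking which weights each task's subnetwork uses, and hypothetical column indices (`context_cols`) for the context-variable inputs. It computes the two quantities the paper reports: the fraction of weights that differ between tasks, and the share of those differing first-layer weights attached to context inputs.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): given per-task
# weight masks, measure (1) what fraction of weights differ between two
# related tasks and (2) what share of the differing first-layer weights
# originate from context-variable input columns.

def task_specific_fraction(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Fraction of weights whose subnetwork membership differs between tasks."""
    diff = mask_a != mask_b
    return diff.sum() / diff.size

def context_share(first_layer_diff: np.ndarray, context_cols) -> float:
    """Share of differing first-layer weights that connect to the
    context-variable input columns listed in `context_cols`."""
    total = first_layer_diff.sum()
    if total == 0:
        return 0.0
    return first_layer_diff[:, context_cols].sum() / total

# Toy first layer: 4 hidden units, 6 inputs; the last 2 inputs are
# hypothetical context variables. The tasks differ only on those columns.
rng = np.random.default_rng(0)
mask_a = rng.integers(0, 2, size=(4, 6)).astype(bool)
mask_b = mask_a.copy()
mask_b[:, 4:] = ~mask_b[:, 4:]

print(task_specific_fraction(mask_a, mask_b))  # 8 of 24 weights differ
print(context_share(mask_a != mask_b, [4, 5]))  # all of them touch context inputs
```

In the paper's setting the analogous numbers are about 1.5% (task-differentiating weights) and about 85% (share of those connected to context-variable nodes).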
Why it matters
This research enhances the interpretability of multi-task RL policies, which is crucial for real-world deployment of autonomous underwater vehicles. Understanding how subnetworks specialize enables more efficient model editing, transfer learning, and continual learning for robust underwater monitoring.
Original Abstract
Autonomous underwater vehicles are required to perform multiple tasks adaptively and in an explainable manner under dynamic, uncertain conditions and limited sensing, challenges that classical controllers struggle to address. This demands robust, generalizable, and inherently interpretable control policies for reliable long-term monitoring. Reinforcement learning, particularly multi-task RL, overcomes these limitations by leveraging shared representations to enable efficient adaptation across tasks and environments. However, while such policies show promising results in simulation and controlled experiments, they yet remain opaque and offer limited insight into the agent's internal decision-making, creating gaps in transparency, trust, and safety that hinder real-world deployment. The internal policy structure and task-specific specialization remain poorly understood. To address these gaps, we analyze the internal structure of a pretrained multi-task reinforcement learning network in the HoloOcean simulator for underwater navigation by identifying and comparing task-specific subnetworks responsible for navigating toward different species. We find that in a contextual multi-task reinforcement learning setting with related tasks, the network uses only about 1.5% of its weights to differentiate between tasks. Of these, approximately 85% connect the context-variable nodes in the input layer to the next hidden layer, highlighting the importance of context variables in such settings. Our approach provides insights into shared and specialized network components, useful for efficient model editing, transfer learning, and continual learning for underwater monitoring through a contextual multi-task reinforcement learning method.