Distributional Value Estimation Without Target Networks for Robust Quality-Diversity
TLDR
QDHUAC is a target-free, distributional Quality-Diversity RL algorithm that trains stably at high Update-to-Data ratios and solves complex locomotion tasks with an order of magnitude fewer environment steps.
Key contributions
- Introduces QDHUAC, a novel target-free, distributional Quality-Diversity RL algorithm.
- Enables stable high Update-to-Data (UTD) training for QD without the computational overhead of target networks.
- Requires an order of magnitude fewer environment steps than baselines on high-dimensional Brax tasks.
- Provides dense, low-variance gradient signals, accelerating Dominated Novelty Search.
Why it matters
Quality-Diversity algorithms excel at discovering diverse skill repertoires but suffer from poor sample efficiency, and the target networks that standard high-UTD methods rely on for stability are too computationally costly for resource-intensive QD settings. By removing both obstacles, this work makes QD-RL more practical and scalable for complex tasks and paves the way for a new generation of highly sample-efficient evolutionary RL.
Original Abstract
Quality-Diversity (QD) algorithms excel at discovering diverse repertoires of skills, but are hindered by poor sample efficiency and often require tens of millions of environment steps to solve complex locomotion tasks. Recent advances in Reinforcement Learning (RL) have shown that high Update-to-Data (UTD) ratios accelerate Actor-Critic learning. While effective, standard high-UTD algorithms typically utilise target networks to stabilise training. This requirement introduces a significant computational bottleneck, rendering them impractical for resource-intensive Quality-Diversity (QD) tasks where sample efficiency and rapid population adaptation are critical. In this paper, we introduce QDHUAC, a sample-efficient, target-free and distributional QD-RL algorithm that provides dense and low-variance gradient signals, which enables high-UTD training for Dominated Novelty Search whilst requiring an order of magnitude fewer environment steps. We demonstrate that our method enables stable training at high UTD ratios, achieving competitive coverage and fitness on high-dimensional Brax environments with an order of magnitude fewer samples than baselines. Our results suggest that combining target-free distributional critics with dominance-based selection is a key enabler for the next generation of sample-efficient evolutionary RL algorithms.
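For readers unfamiliar with the two mechanisms named in the abstract, the sketch below illustrates (in JAX, since the experiments run on Brax) what a high-UTD, target-free, categorical distributional critic update could look like. This is not QDHUAC's implementation: the network shapes, return support `[V_MIN, V_MAX]`, UTD ratio, and projection step are illustrative assumptions, and the paper's actual losses and Dominated Novelty Search machinery are not shown.

```python
# Hedged sketch only -- illustrates a categorical (distributional) critic that
# bootstraps from its own online parameters (no target-network copy), updated at
# a high Update-to-Data (UTD) ratio. All hyperparameters here are assumptions.
import jax
import jax.numpy as jnp

N_ATOMS, V_MIN, V_MAX = 51, 0.0, 100.0      # fixed return support (assumed)
SUPPORT = jnp.linspace(V_MIN, V_MAX, N_ATOMS)
GAMMA, UTD_RATIO = 0.99, 8                  # e.g. 8 critic updates per env step

def init_critic(key, obs_dim, act_dim, hidden=256):
    k1, k2, k3 = jax.random.split(key, 3)
    return {
        "w1": jax.random.normal(k1, (obs_dim + act_dim, hidden)) * 0.05,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, hidden)) * 0.05,
        "b2": jnp.zeros(hidden),
        "w3": jax.random.normal(k3, (hidden, N_ATOMS)) * 0.05,
        "b3": jnp.zeros(N_ATOMS),
    }

def critic_logits(params, obs, act):
    x = jnp.concatenate([obs, act], axis=-1)
    x = jax.nn.relu(x @ params["w1"] + params["b1"])
    x = jax.nn.relu(x @ params["w2"] + params["b2"])
    return x @ params["w3"] + params["b3"]   # logits over the return atoms

def project(next_probs, reward, done):
    # Project the bootstrapped distribution r + gamma * Z onto the fixed support
    # via the standard triangular soft assignment used by categorical critics.
    tz = jnp.clip(reward[:, None] + GAMMA * (1.0 - done[:, None]) * SUPPORT,
                  V_MIN, V_MAX)
    delta = (V_MAX - V_MIN) / (N_ATOMS - 1)
    weights = jnp.clip(
        1.0 - jnp.abs(tz[:, :, None] - SUPPORT[None, None, :]) / delta, 0.0, 1.0)
    return jnp.einsum("bj,bjk->bk", next_probs, weights)

def critic_loss(params, batch):
    # next_act would come from the current policy in a full actor-critic loop.
    obs, act, rew, next_obs, next_act, done = batch
    # Target-free bootstrap: the target distribution comes from the SAME online
    # parameters, detached with stop_gradient rather than from a lagging copy.
    next_probs = jax.lax.stop_gradient(
        jax.nn.softmax(critic_logits(params, next_obs, next_act)))
    target = project(next_probs, rew, done)
    log_probs = jax.nn.log_softmax(critic_logits(params, obs, act))
    return -jnp.mean(jnp.sum(target * log_probs, axis=-1))   # cross-entropy

@jax.jit
def high_utd_update(params, batch, lr=3e-4):
    # High UTD ratio: several gradient steps on the critic per environment step
    # (re-using one batch for brevity; real training would resample minibatches).
    def one_step(p, _):
        grads = jax.grad(critic_loss)(p, batch)
        return jax.tree_util.tree_map(lambda w, g: w - lr * g, p, grads), None
    params, _ = jax.lax.scan(one_step, params, None, length=UTD_RATIO)
    return params
```

The point of the sketch is the absence of a lagged critic copy: the bootstrap target is built from the online parameters under `stop_gradient`, so nothing extra has to be stored or synchronised per population member, which is the property the abstract credits with making high-UTD training affordable for QD.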