Adaptive Data Dropout: Towards Self-Regulated Learning in Deep Neural Networks
Amar Gahir, Varshil Patel, Shreyank N Gowda
TLDR
Adaptive Data Dropout dynamically adjusts the training-data subset based on performance feedback, reducing effective training steps and improving efficiency in deep neural networks.
Key contributions
- Dynamically adjusts training data subsets based on performance feedback during DNN training.
- Employs a lightweight stochastic mechanism to modulate the data dropout schedule online (sketched after this list).
- Balances exploration and consolidation, reducing effective training steps.
- Achieves competitive accuracy on image classification benchmarks with fewer steps.
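The summary does not pin down the exact update rule, so the following is a minimal Python sketch of one plausible reading: a keep rate is nudged down when training accuracy improves (consolidation) and up when it stalls (exploration), with Gaussian noise supplying the stochastic component. The class name `AdaptiveDropoutScheduler`, the sign convention, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import random

class AdaptiveDropoutScheduler:
    """Illustrative sketch of an adaptive data-dropout schedule.

    Tracks the fraction of training data to keep each epoch and adjusts
    it online from training-accuracy feedback. The sign convention
    (shrink on improvement, grow on stagnation) is an assumption, not
    the paper's stated rule.
    """

    def __init__(self, keep_rate=1.0, step=0.05, noise=0.02,
                 min_keep=0.1, max_keep=1.0):
        self.keep_rate = keep_rate  # fraction of the dataset used per epoch
        self.step = step            # deterministic adjustment size
        self.noise = noise          # std of the stochastic perturbation
        self.min_keep = min_keep
        self.max_keep = max_keep
        self._prev_acc = None

    def update(self, train_acc):
        """Update the keep rate from the latest training accuracy."""
        if self._prev_acc is not None:
            improved = train_acc > self._prev_acc
            direction = -1.0 if improved else 1.0  # consolidate vs. explore
            self.keep_rate += direction * self.step + random.gauss(0.0, self.noise)
            self.keep_rate = min(self.max_keep, max(self.min_keep, self.keep_rate))
        self._prev_acc = train_acc
        return self.keep_rate
```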
Why it matters
This paper introduces an adaptive method for training deep neural networks more efficiently. By dynamically adjusting how much data the model sees in response to training accuracy, it overcomes the limitations of fixed data-dropout schedules, reducing training cost while maintaining competitive accuracy.
Original Abstract
Deep neural networks are typically trained by uniformly sampling large datasets across epochs, despite evidence that not all samples contribute equally throughout learning. Recent work shows that progressively reducing the amount of training data can improve efficiency and generalization, but existing methods rely on fixed schedules that do not adapt during training. In this work, we propose Adaptive Data Dropout, a simple framework that dynamically adjusts the subset of training data based on performance feedback. Inspired by self-regulated learning, our approach treats data selection as an adaptive process, increasing or decreasing data exposure in response to changes in training accuracy. We introduce a lightweight stochastic update mechanism that modulates the dropout schedule online, allowing the model to balance exploration and consolidation over time. Experiments on standard image classification benchmarks show that our method reduces effective training steps while maintaining competitive accuracy compared to static data dropout strategies. These results highlight adaptive data selection as a promising direction for efficient and robust training. Code will be released.
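To make the idea concrete end to end, here is a hedged sketch of how such a scheduler might drive an epoch loop: each epoch trains on a freshly sampled subset whose size is set by the current keep rate, then feeds the measured training accuracy back into the scheduler. `train_one_epoch` and `evaluate_accuracy` are hypothetical callbacks standing in for whatever training and evaluation code is in use; this is not the paper's released implementation (the abstract says code will be released).

```python
import random

def train_with_adaptive_dropout(model, dataset, epochs, scheduler,
                                train_one_epoch, evaluate_accuracy):
    """Epoch loop driven by an adaptive data-dropout scheduler.

    `train_one_epoch(model, samples)` and `evaluate_accuracy(model, dataset)`
    are assumed user-supplied callbacks; `scheduler` is anything with a
    `keep_rate` attribute and an `update(train_acc)` method, such as the
    AdaptiveDropoutScheduler sketched above.
    """
    indices = list(range(len(dataset)))
    for _ in range(epochs):
        # Resample a subset each epoch at the current keep rate.
        k = max(1, int(scheduler.keep_rate * len(indices)))
        subset = random.sample(indices, k)
        train_one_epoch(model, [dataset[i] for i in subset])
        # Feed training-accuracy feedback back into the schedule.
        scheduler.update(evaluate_accuracy(model, dataset))
```

Resampling the subset every epoch (rather than fixing it once) keeps the selection stochastic, which matches the abstract's framing of balancing exploration against consolidation over time.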