Adaptive Policy Selection and Fine-Tuning under Interaction Budgets for Offline-to-Online Reinforcement Learning
Alper Kamil Bozkurt, Xiaoan Xu, Shangtong Zhang, Miroslav Pajic, Yuichi Motai
TLDR
This paper introduces an adaptive approach to policy selection and fine-tuning in offline-to-online reinforcement learning that makes efficient use of a limited online interaction budget.
Key contributions
- Highlights two issues with current O2O-RL pipelines: unreliable off-policy evaluation (OPE) and the uncertain benefit of fine-tuning a deployed policy.
- Introduces an adaptive method for policy selection and fine-tuning under online interaction budgets.
- Uses an upper-confidence-bound (UCB) approach to decide which candidate policies to select and fine-tune, spending the interaction budget efficiently (see the sketch after this list).
- Shows improved performance compared to O2O-RL baselines across various benchmarks.
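To make the UCB step concrete, here is a minimal, illustrative sketch of a UCB-style selection index, assuming per-policy value estimates are initialized from OPE and updated after each fine-tuning round. The function name `ucb_select` and the exploration coefficient `c` are illustrative choices, not details from the paper.

```python
import numpy as np

def ucb_select(mean_returns, pull_counts, total_rounds, c=2.0):
    """Pick the candidate policy with the highest UCB index.

    mean_returns: per-policy value estimates, initialized from OPE and
                  updated with returns observed during fine-tuning.
    pull_counts:  number of fine-tuning rounds each policy has received.
    total_rounds: total fine-tuning rounds completed so far.
    c:            exploration coefficient (illustrative hyperparameter).
    """
    mean_returns = np.asarray(mean_returns, dtype=float)
    pull_counts = np.asarray(pull_counts, dtype=float)
    # Policies never fine-tuned get an infinite bonus, so each is tried once.
    bonus = np.where(
        pull_counts > 0,
        np.sqrt(c * np.log(max(total_rounds, 1)) / np.maximum(pull_counts, 1.0)),
        np.inf,
    )
    return int(np.argmax(mean_returns + bonus))
```

The optimism term favors policies whose estimates are still uncertain, which is what lets the method hedge against unreliable OPE scores instead of committing to a single deployment.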
Why it matters
Current offline-to-online RL methods struggle with unreliable policy evaluation and inefficient use of online interactions. By adaptively deciding which pretrained policies to deploy and fine-tune, this paper makes O2O-RL more robust and efficient under strict interaction budgets, a key requirement for real-world deployment.
Original Abstract
In offline-to-online reinforcement learning (O2O-RL), policies are first safely trained offline using previously collected datasets and then further fine-tuned for tasks via limited online interactions. In a typical O2O-RL pipeline, candidate policies trained with offline RL are evaluated via either off-policy evaluation (OPE) or online evaluation (OE). The policy with the highest estimated value is then deployed and continually fine-tuned. However, this setup has two main issues. First, OPE can be unreliable, making it risky to deploy a policy based solely on those estimates, whereas OE may identify a viable policy only after substantial online interaction, which could instead have been used for fine-tuning. Second--and more importantly--it is often not possible to determine a priori whether a pretrained policy will improve with post-deployment fine-tuning, especially in non-stationary environments. As a result, procedures that commit to a single deployed policy are impractical in many real-world settings. Moreover, a naive remedy that exhaustively fine-tunes all candidates would violate interaction budget constraints and is likewise infeasible. In this paper, we propose a novel adaptive approach for policy selection and fine-tuning under online interaction budgets in O2O-RL. Following the standard pipeline, we first train a set of candidate policies with different offline RL algorithms and hyperparameters; we then perform OPE to obtain initial performance estimates. We next adaptively select and fine-tune the policies based on their predicted performance via an upper-confidence-bound approach, thereby making efficient use of online interactions. We demonstrate that our approach improves upon O2O-RL baselines on various benchmarks.
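The pipeline the abstract describes can be tied together in a short end-to-end sketch: candidates arrive with OPE estimates, a UCB index (the same form as in the earlier sketch) picks which policy to fine-tune each round, and the observed return updates that policy's estimate until the budget is spent. The `fine_tune` stub, the chunked budget accounting, and all names here are illustrative assumptions; the paper's actual index and update rule may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def fine_tune(policy, steps):
    # Placeholder stub: in a real pipeline this would run `steps` online
    # interactions of fine-tuning and return the observed average return.
    return rng.normal(loc=policy["true_value"], scale=1.0)

def adaptive_finetune(policies, ope_estimates, budget, steps_per_round=1000, c=2.0):
    """Adaptively allocate an online interaction budget across candidates."""
    n = len(policies)
    means = np.asarray(ope_estimates, dtype=float)   # initialized from OPE
    counts = np.zeros(n)
    spent, rounds = 0, 0
    while spent + steps_per_round <= budget:
        rounds += 1
        bonus = np.where(counts > 0,
                         np.sqrt(c * np.log(rounds) / np.maximum(counts, 1.0)),
                         np.inf)
        i = int(np.argmax(means + bonus))              # UCB selection
        observed = fine_tune(policies[i], steps_per_round)
        counts[i] += 1
        means[i] += (observed - means[i]) / counts[i]  # running-mean update
        spent += steps_per_round
    return int(np.argmax(means))  # best candidate once the budget is exhausted

# Example: three candidates whose OPE estimates disagree with their true values.
candidates = [{"true_value": v} for v in (0.4, 0.9, 0.6)]
best = adaptive_finetune(candidates, ope_estimates=[0.8, 0.5, 0.6], budget=50_000)
print("selected candidate:", best)
```

In the toy example, the budget allows only a few dozen rounds, so allocating rounds via UCB rather than fine-tuning every candidate exhaustively is what keeps the procedure within the interaction constraint.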