Self-Guided Plan Extraction for Instruction-Following Tasks with Goal-Conditional Reinforcement Learning
Zoya Volovikova, Nikita Sorokin, Dmitriy Lukashevskiy, Aleksandr Panov, Alexey Skrynnik
TLDR
SuperIgor co-trains an RL agent and a language model so the model learns to generate and refine plans for instruction-following tasks, reducing the need for manual annotation.
Key contributions
- Introduces SuperIgor, a framework for instruction-following with self-learning plan generation.
- The language model generates and refines high-level plans, reducing the need for manual dataset annotation.
- Iteratively co-trains the RL agent and the language model, using RL feedback so both improve jointly.
- Achieves stricter instruction adherence and strong generalization in complex, stochastic environments.
Why it matters
SuperIgor introduces a novel self-learning approach for instruction-following, significantly reducing the need for costly manual data annotation. Its co-training of an RL agent and a language model enables more robust, generalizable agents that strictly follow instructions in dynamic environments.
Original Abstract
We introduce SuperIgor, a framework for instruction-following tasks. Unlike prior methods that rely on predefined subtasks, SuperIgor enables a language model to generate and refine high-level plans through a self-learning mechanism, reducing the need for manual dataset annotation. Our approach involves iterative co-training: an RL agent is trained to follow the generated plans, while the language model adapts and modifies these plans based on RL feedback and preferences. This creates a feedback loop where both the agent and the planner improve jointly. We validate our framework in environments with rich dynamics and stochasticity. Results show that SuperIgor agents adhere to instructions more strictly than baseline methods, while also demonstrating strong generalization to previously unseen instructions.
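The abstract describes the co-training loop only at a high level. The toy Python sketch below shows one way such a loop could be wired together; everything in it — `Planner`, `RLAgent`, `co_train`, the tabular skill model, and the preference-style `refine` rule — is a hypothetical illustration under assumed details, not the paper's implementation, which presumably uses an actual language model and a deep RL policy.

```python
# Hypothetical sketch of a SuperIgor-style co-training loop.
# All names and update rules are illustrative, not the authors' API.

import random

class Planner:
    """Stand-in for the language model: maps an instruction to a plan
    (a list of subtask strings) and refines plans from RL feedback."""

    def __init__(self, candidate_plans):
        # candidate_plans: instruction -> list of possible plans
        self.candidates = candidate_plans
        self.best = {}  # instruction -> (plan, score)

    def generate(self, instruction):
        # Mostly exploit the best known plan; otherwise sample a candidate.
        if instruction in self.best and random.random() < 0.8:
            return self.best[instruction][0]
        return random.choice(self.candidates[instruction])

    def refine(self, instruction, plan, reward):
        # Assumed preference-style update: keep whichever plan the RL
        # agent executes most successfully.
        _, best_score = self.best.get(instruction, (None, float("-inf")))
        if reward > best_score:
            self.best[instruction] = (plan, reward)

class RLAgent:
    """Stand-in for the RL policy: tabular success probabilities per
    subtask, improved by practice (a crude proxy for a policy update)."""

    def __init__(self):
        self.skill = {}  # subtask -> success probability

    def execute(self, subtask):
        p = self.skill.setdefault(subtask, 0.2)
        success = random.random() < p
        # "Training": practicing a subtask raises its success probability.
        self.skill[subtask] = min(1.0, p + 0.05)
        return success

def co_train(planner, agent, instructions, iterations=50):
    for _ in range(iterations):
        for instr in instructions:
            plan = planner.generate(instr)
            # The agent attempts the plan; reward = fraction of subtasks done.
            reward = sum(agent.execute(s) for s in plan) / len(plan)
            planner.refine(instr, plan, reward)

if __name__ == "__main__":
    random.seed(0)
    plans = {"fetch the red block": [["goto red block", "pick up block"],
                                     ["scan room", "goto red block", "pick up block"]]}
    planner, agent = Planner(plans), RLAgent()
    co_train(planner, agent, list(plans))
    print(planner.best)
```

The point the sketch tries to capture is the bidirectional feedback loop the abstract describes: the planner's plans determine what the agent practices, while the agent's success rates determine which plans the planner keeps.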