GCImOpt: Learning efficient goal-conditioned policies by imitating optimal trajectories
Jon Goikoetxea, Jesús F. Palacián
TLDR
GCImOpt learns efficient goal-conditioned policies by generating optimal trajectories with trajectory optimization and augmenting the resulting data, yielding small, fast neural network policies for a variety of control tasks.
Key contributions
- Generates thousands of optimal trajectories in minutes on a laptop using trajectory optimization.
- Augments training data by treating intermediate states as goals, increasing dataset size by an order of magnitude.
- Trains small (under 80k parameters) and fast (up to more than 6,000x faster than a trajectory optimization solver) goal-conditioned neural network policies.
- Achieves high success rates and near-optimal control across diverse tasks, including cart-pole stabilization, planar and 3D quadcopter stabilization, and point reaching with a 6-DoF robot arm.
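The augmentation scheme can be illustrated with a minimal sketch: along one optimal trajectory, each intermediate state can serve as a goal, and every earlier state-action pair then becomes a demonstration of how to reach it. The function name, the `stride` parameter, and the toy 1-D trajectory below are illustrative assumptions, not the paper's implementation.

```python
def augment_with_intermediate_goals(trajectory, stride=3):
    """Relabel intermediate states of an optimal trajectory as goals.

    trajectory: list of (state, action) pairs from a trajectory optimizer.
    Returns (state, goal, action) tuples: for each chosen intermediate
    state g, every earlier (state, action) pair is a valid demonstration
    of reaching g, multiplying the number of training samples.
    """
    samples = []
    for j in range(stride, len(trajectory), stride):
        goal = trajectory[j][0]  # intermediate state reused as a goal
        for state, action in trajectory[:j]:
            samples.append((state, goal, action))
    return samples

# Toy 1-D trajectory: states 0..9 with dummy actions
traj = [(float(s), 0.0) for s in range(10)]
augmented = augment_with_intermediate_goals(traj, stride=3)
# One 10-step trajectory yields 18 (state, goal, action) samples here
```

Even this toy case shows the multiplier effect: a single trajectory of 10 steps produces 18 relabeled samples, and denser goal selection increases the count further.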
Why it matters
This paper introduces GCImOpt, an approach that sidesteps a key limitation of imitation learning: the cost of collecting demonstrations. By generating optimal trajectories cheaply and applying a goal-relabeling data augmentation, it trains compact, fast goal-conditioned policies. This makes near-optimal learned control practical for resource-constrained systems such as onboard robot controllers.
Original Abstract
Imitation learning is a well-established approach for machine-learning-based control. However, its applicability depends on having access to demonstrations, which are often expensive to collect and/or suboptimal for solving the task. In this work, we present GCImOpt, an approach to learn efficient goal-conditioned policies by training on datasets generated by trajectory optimization. Our approach for dataset generation is computationally efficient, can generate thousands of optimal trajectories in minutes on a laptop computer, and produces high-quality demonstrations. Further, by means of a data augmentation scheme that treats intermediate states as goals, we are able to increase the training dataset size by an order of magnitude. Using our generated datasets, we train goal-conditioned neural network policies that can control the system towards arbitrary goals. To demonstrate the generality of our approach, we generate datasets and then train policies for various control tasks, namely cart-pole stabilization, planar and three-dimensional quadcopter stabilization, and point reaching using a 6-DoF robot arm. We show that our trained policies can achieve high success rates and near-optimal control profiles, all while being small (less than 80,000 neural network parameters) and fast enough (up to more than 6,000 times faster than a trajectory optimization solver) that they could be deployed onboard resource-constrained controllers. We provide videos, code, datasets and pre-trained policies under a free software license; see our project website https://jongoiko.github.io/gcimopt/.
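The "small policy" claim is easy to sanity-check: a fully connected network mapping a concatenated (state, goal) input to actions stays well under the stated 80,000-parameter budget at typical sizes. The layer widths and dimensions below are hypothetical examples, not the architectures used in the paper.

```python
def mlp_param_count(sizes):
    """Weights plus biases for a fully connected MLP with the given layer sizes."""
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))

# Hypothetical goal-conditioned policy: 12-D state and 12-D goal concatenated,
# two hidden layers of 128 units, 4-D action output.
state_dim, goal_dim, action_dim = 12, 12, 4
layers = [state_dim + goal_dim, 128, 128, action_dim]
n_params = mlp_param_count(layers)  # 20,228 parameters
assert n_params < 80_000  # comfortably inside the paper's "small" budget
```

A network this size is cheap enough to evaluate at high control rates on embedded hardware, which is consistent with the reported speedups over a trajectory optimization solver.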