ArXiv TLDR

Can Tabular Foundation Models Guide Exploration in Robot Policy Learning?

arXiv:2604.27667

Buqing Ou, Frederike Dümbgen

cs.RO · cs.LG

TLDR

TFM-S3 uses a tabular foundation model to guide global exploration in robot policy learning, accelerating convergence and improving performance.

Key contributions

  • TFM-S3: A hybrid local-global method for robot policy learning that interleaves high-frequency local updates with intermittent rounds of global search.
  • Dynamically constructs a low-dimensional policy subspace via SVD for efficient global search.
  • Leverages a pretrained tabular foundation model to predict candidate returns with limited rollout cost.
  • Accelerates early-stage convergence and improves final performance over TD3 and population-based baselines under an identical rollout budget.
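The subspace construction in the second bullet can be illustrated concretely. The sketch below, assuming a buffer of flattened policy parameter snapshots, builds a low-dimensional search space from the top singular vectors and samples global candidates in it; all names (`policy_history`, `k`, the shapes) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical buffer: 32 snapshots of a 200-dimensional flattened policy.
policy_history = rng.normal(size=(32, 200))

mean = policy_history.mean(axis=0)
# SVD of the centered snapshots; the top-k right singular vectors
# span a low-dimensional subspace of recent policy variation.
_, _, Vt = np.linalg.svd(policy_history - mean, full_matrices=False)
k = 5
basis = Vt[:k]                      # (k, 200) orthonormal basis rows

# Sample global candidates as coefficients in the k-dim subspace,
# then map them back to full parameter space around the mean policy.
coeffs = rng.normal(scale=0.1, size=(64, k))
candidates = mean + coeffs @ basis  # (64, 200) candidate policies
```

Searching over `k` coefficients instead of 200 raw parameters is what makes large-scale candidate screening cheap in the global rounds.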

Why it matters

Policy optimization for continuous control in robotics faces a trade-off: local methods need careful tuning and initialization, while global search methods incur high rollout costs. TFM-S3 addresses this with a hybrid method that uses a tabular foundation model to guide exploration, improving both convergence speed and final performance. This work highlights foundation models as a powerful tool for sample-efficient robot learning.

Original Abstract

Policy optimization in high-dimensional continuous control for robotics remains a challenging problem. Predominant methods are inherently local and often require extensive tuning and carefully chosen initial guesses for good performance, whereas more global and less initialization-sensitive search methods typically incur high rollout costs. We propose TFM-S3, a tabular hybrid local-global method for improving global exploration in robot policy learning with limited rollout cost. We interleave high-frequency local updates with intermittent rounds of global search. In each search round, we construct a dynamically updated low-dimensional policy subspace via SVD and perform iterative surrogate-guided refinement within this space. A pretrained tabular foundation model predicts candidate returns from a small context set, enabling large-scale screening with limited rollout cost. Experiments on continuous control benchmarks show that TFM-S3 consistently accelerates early-stage convergence and improves final performance compared to TD3 and population-based baselines under an identical rollout budget. These results demonstrate that foundation models are a powerful new tool for creating sample-efficient policy learning methods for continuous control in robotics.
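The surrogate-guided refinement step in the abstract can be sketched as one screening round. Below, a simple nearest-neighbor regressor stands in for the pretrained tabular foundation model (which predicts returns in-context from a small context set); `true_return`, the pool sizes, and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def true_return(theta):
    # Toy stand-in for an expensive rollout; peak at theta = 0.5.
    return -np.sum((theta - 0.5) ** 2, axis=-1)

dim = 8
context_x = rng.normal(size=(16, dim))   # few already-evaluated candidates
context_y = true_return(context_x)       # their (costly) measured returns

pool = rng.normal(size=(500, dim))       # large pool screened for free

# In-context prediction: average the returns of the 3 nearest
# context points (proxy for the tabular foundation model's output).
d = np.linalg.norm(pool[:, None] - context_x[None], axis=-1)
nn = np.argsort(d, axis=1)[:, :3]
pred = context_y[nn].mean(axis=1)

# Roll out only the top few predicted candidates, keep the best.
top = pool[np.argsort(pred)[-4:]]
best = top[np.argmax(true_return(top))]
```

The key point is the cost structure: 500 candidates are screened with zero rollouts, and only 4 are actually evaluated, which is how the method keeps global search within a limited rollout budget.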
