Truncated Rectified Flow Policy for Reinforcement Learning with One-Step Sampling
Xubin Zhou, Yipeng Yang, Zhan Li
TLDR
TRFP introduces a hybrid generative policy for MaxEnt RL, enabling tractable entropy optimization and efficient one-step sampling for multimodal actions.
Key contributions
- Proposes Truncated Rectified Flow Policy (TRFP) for MaxEnt RL.
- Uses a hybrid deterministic-stochastic architecture for tractable entropy optimization.
- Enables stable training and effective one-step sampling via gradient truncation and flow straightening.
- Captures multimodal behavior and outperforms baselines on MuJoCo benchmarks.
Why it matters
Standard Gaussian policies in MaxEnt RL cannot represent complex, multimodal action distributions, and more expressive diffusion- or flow-based policies are costly to sample and hard to train. TRFP makes such generative policies practical for RL: its hybrid architecture keeps entropy-regularized optimization tractable, and one-step sampling cuts inference latency while preserving diverse, multimodal behavior.
Original Abstract
Maximum entropy reinforcement learning (MaxEnt RL) has become a standard framework for sequential decision making, yet its standard Gaussian policy parameterization is inherently unimodal, limiting its ability to model complex multimodal action distributions. This limitation has motivated increasing interest in generative policies based on diffusion and flow matching as more expressive alternatives. However, incorporating such policies into MaxEnt RL is challenging for two main reasons: the likelihood and entropy of continuous-time generative policies are generally intractable, and multi-step sampling introduces both long-horizon backpropagation instability and substantial inference latency. To address these challenges, we propose Truncated Rectified Flow Policy (TRFP), a framework built on a hybrid deterministic-stochastic architecture. This design makes entropy-regularized optimization tractable while supporting stable training and effective one-step sampling through gradient truncation and flow straightening. Empirical results on a toy multigoal environment and 10 MuJoCo benchmarks show that TRFP captures multimodal behavior effectively, outperforms strong baselines on most benchmarks under standard sampling, and remains highly competitive under one-step sampling.
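To make the hybrid idea concrete, here is a minimal sketch of how a deterministic-stochastic policy with one-step sampling could look. This is an illustration under stated assumptions, not the paper's implementation: the `gaussian_head` and `velocity` functions, their shapes, and the use of the Gaussian head's closed-form entropy as the tractable entropy term are all hypothetical stand-ins for the components the abstract describes (a stochastic latent, a deterministic rectified-flow velocity field, and a single Euler step enabled by flow straightening).

```python
import numpy as np

rng = np.random.default_rng(0)
ACT_DIM = 2  # illustrative action dimensionality

# Hypothetical stochastic head: a state-conditioned diagonal Gaussian
# over the flow's initial point x0 (names/shapes are assumptions).
def gaussian_head(state):
    mu = np.tanh(state[:ACT_DIM])       # toy state-dependent mean
    log_std = -0.5 * np.ones(ACT_DIM)   # toy log standard deviation
    return mu, log_std

# Hypothetical deterministic velocity field of a rectified flow. After
# flow straightening, trajectories are near-linear, so a single Euler
# step x0 + v(x0, s) approximates the full ODE integration.
def velocity(x0, state):
    return 0.1 * state[:ACT_DIM] - 0.05 * x0

def sample_action_one_step(state):
    mu, log_std = gaussian_head(state)
    # Reparameterized draw from the stochastic part of the policy.
    x0 = mu + np.exp(log_std) * rng.standard_normal(ACT_DIM)
    # One-step sampling: a single Euler step through the flow.
    action = x0 + velocity(x0, state)
    # Closed-form entropy of the diagonal-Gaussian head; used here as
    # the tractable quantity a MaxEnt objective would regularize.
    entropy = np.sum(log_std + 0.5 * np.log(2 * np.pi * np.e))
    return action, entropy

state = np.array([0.3, -0.7, 1.2])
action, entropy = sample_action_one_step(state)
print(action.shape, round(float(entropy), 3))
```

In a full training loop, gradient truncation would stop gradients from flowing back through the velocity evaluation, so the policy update only backpropagates through the short stochastic path rather than the whole flow.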