Process Reward Agents for Steering Knowledge-Intensive Reasoning
Jiwoong Sohn, Tomasz Sternal, Kenneth Styppa, Torsten Hoefler, Michael Moor
TLDR
Process Reward Agents (PRA) provide online, step-wise rewards to frozen policies, significantly boosting accuracy in knowledge-intensive reasoning without retraining.
Key contributions
- Introduces PRA for domain-grounded, online, step-wise rewards during inference.
- Enables search-based decoding to rank and prune candidate trajectories at every generation step (see the sketch after this list).
- Achieves 80.8% accuracy on MedQA with Qwen3-4B, setting a new 4B-scale state of the art.
- Improves accuracy by up to 25.7% on diverse frozen policies (0.5B-8B) without model updates.
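
Concretely, the step-wise ranking and pruning behind search-based decoding might look like the minimal Python sketch below. The interfaces here (`propose_steps`, `score_trajectory`, `Trajectory`) are illustrative stand-ins, not the paper's actual API: we assume the frozen policy can sample candidate next reasoning steps, and that the process reward agent returns a scalar reward for each partial trajectory. In the paper's setting the agent grounds its scores in retrieved domain knowledge; here both components are reduced to toy stubs so the control flow runs end to end.

```python
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    steps: list[str] = field(default_factory=list)
    score: float = 0.0

def pra_beam_search(question, propose_steps, score_trajectory,
                    beam_width=4, branch_factor=4, max_steps=8):
    """Rank and prune candidate trajectories at every generation step."""
    beam = [Trajectory()]
    for _ in range(max_steps):
        candidates = []
        for traj in beam:
            # Frozen policy proposes several candidate next steps.
            for step in propose_steps(question, traj.steps, branch_factor):
                new_steps = traj.steps + [step]
                # Process reward agent assigns an online, step-wise reward
                # to the partial trajectory (hypothetically grounded in
                # retrieval over external knowledge in the paper's setup).
                reward = score_trajectory(question, new_steps)
                candidates.append(Trajectory(new_steps, reward))
        # Prune: keep only the top-`beam_width` partial trajectories.
        beam = sorted(candidates, key=lambda t: t.score, reverse=True)[:beam_width]
    return beam[0]

# Toy stubs so the sketch runs; a real system would call the frozen
# policy model and the reward agent here instead.
def propose_steps(question, prior_steps, k):
    return [f"step {len(prior_steps) + 1}, option {i}" for i in range(k)]

def score_trajectory(question, steps):
    return -len(steps[-1])  # placeholder scoring rule

best = pra_beam_search("Which drug is first-line for condition X?",
                       propose_steps, score_trajectory,
                       beam_width=2, branch_factor=3, max_steps=3)
print(best.steps, best.score)
```

The key design point this sketch illustrates is that pruning happens at every step rather than after full trajectories are complete, which is what distinguishes online step-wise rewards from post hoc process reward models.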
Why it matters
Reasoning in knowledge-intensive domains is challenging because intermediate steps are often not locally verifiable: checking a step may require synthesizing clues from large external knowledge sources. Prior process reward models score only completed trajectories post hoc. PRA instead provides online, step-wise rewards, decoupling frozen reasoners from domain-specific reward modules so that new backbones can be deployed in complex domains without retraining.
Original Abstract
Reasoning in knowledge-intensive domains remains challenging as intermediate steps are often not locally verifiable: unlike math or code, evaluating step correctness may require synthesizing clues across large external knowledge sources. As a result, subtle errors can propagate through reasoning traces, potentially never to be detected. Prior work has proposed process reward models (PRMs), including retrieval-augmented variants, but these methods operate post hoc, scoring completed trajectories, which prevents their integration into dynamic inference procedures. Here, we introduce Process Reward Agents (PRA), a test-time method for providing domain-grounded, online, step-wise rewards to a frozen policy. In contrast to prior retrieval-augmented PRMs, PRA enables search-based decoding to rank and prune candidate trajectories at every generation step. Experiments on multiple medical reasoning benchmarks demonstrate that PRA consistently outperforms strong baselines, achieving 80.8% accuracy on MedQA with Qwen3-4B, a new state of the art at the 4B scale. Importantly, PRA generalizes to unseen frozen policy models ranging from 0.5B to 8B parameters, improving their accuracy by up to 25.7% without any policy model updates. More broadly, PRA suggests a paradigm in which frozen reasoners are decoupled from domain-specific reward modules, allowing the deployment of new backbones in complex domains without retraining.