Improving Vision-language Models with Perception-centric Process Reward Models
Yingqian Min, Kun Zhou, Yifan Li, Yuhuan Wu, Han Peng + 4 more
TLDR
Perceval is a new process reward model that improves vision-language models by providing token-level supervision to identify and correct perceptual errors.
Key contributions
- Introduces Perceval, a process reward model for fine-grained, token-level error grounding in VLMs.
- Compares image-related claims with visual evidence to identify specific perceptual errors.
- Applies token-level penalties during RL training to correct hallucinated spans effectively.
- Enhances VLM inference by truncating errors and enabling iterative response regeneration.
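The third contribution, replacing sequence-level GRPO advantages with token-level penalties on PRM-flagged spans, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the `(start, end)` span format, and the fixed `penalty` value are all assumptions for exposition.

```python
import numpy as np

def token_level_advantages(seq_advantage, num_tokens, flagged_spans, penalty=1.0):
    """Broadcast a sequence-level advantage (as in standard GRPO) to every
    token, then subtract a penalty on tokens inside spans flagged as
    perceptual errors by a PRM such as Perceval.

    flagged_spans: list of (start, end) token-index pairs, end exclusive.
    Returns a per-token advantage vector for the policy-gradient update.
    """
    adv = np.full(num_tokens, seq_advantage, dtype=np.float64)
    for start, end in flagged_spans:
        adv[start:end] -= penalty  # push down probability of hallucinated tokens
    return adv
```

With a positive sequence reward but one hallucinated span, correct tokens are still reinforced while the flagged tokens receive a negative update, which is the fine-grained supervision signal the contribution describes.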
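The fourth contribution, truncating at the first detected error and regenerating, can likewise be sketched as a loop. This is a hypothetical sketch: `generate` and `locate_first_error` stand in for the VLM and for Perceval's error grounding, and their signatures are assumptions, not the released API.

```python
def truncate_and_regenerate(generate, locate_first_error, prompt, max_rounds=3):
    """Iteratively truncate a response at the first PRM-flagged error and
    regenerate from the verified prefix, repeating for test-time scaling.

    generate(prompt, prefix) -> full response continuing `prefix`.
    locate_first_error(response) -> index of the first flagged claim, or None.
    """
    response = generate(prompt, "")
    for _ in range(max_rounds):
        err = locate_first_error(response)
        if err is None:          # no perceptual errors remain
            break
        response = generate(prompt, response[:err])  # keep verified prefix
    return response
```

Raising `max_rounds` trades extra inference compute for more chances to repair the response, which is the test-time scaling behavior the summary refers to.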
Why it matters
Current VLM training typically relies on coarse outcome-level supervision, so perceptual errors inside the reasoning chain go undiagnosed and persist. Perceval provides fine-grained, token-level signals to locate and correct perceptual hallucinations, improving VLM reliability and reasoning across a range of tasks and making the models more trustworthy.
Original Abstract
Recent advancements in reinforcement learning with verifiable rewards (RLVR) have significantly improved the complex reasoning ability of vision-language models (VLMs). However, its outcome-level supervision is too coarse to diagnose and correct errors within the reasoning chain. To this end, we propose Perceval, a process reward model (PRM) that enables token-level error grounding: it extracts image-related claims from the response, compares them one by one with the visual evidence in the image, and returns the claims that contain perceptual errors. Perceval is trained on perception-intensive supervised training data. We then integrate Perceval into RL training of the policy models. Specifically, whereas traditional GRPO applies sequence-level advantages, we apply token-level advantages by targeting penalties on the hallucinated spans identified by Perceval, enabling fine-grained supervision signals. Beyond training, Perceval can also assist VLMs at inference. Using Perceval, we truncate the erroneous portion of the model's response and then either have the model regenerate the response directly or induce it to reflect on its previous output. This process can be repeated multiple times to achieve test-time scaling. Experiments show significant improvements on benchmarks from various domains across multiple reasoning VLMs trained with RL, highlighting the promise of perception-centric supervision as a general-purpose strategy. For test-time scaling, it also demonstrates consistent gains over other strategies, such as majority voting. Our code and data will be publicly released at https://github.com/RUCAIBox/Perceval.