AtManRL: Towards Faithful Reasoning via Differentiable Attention Saliency
Max Henning Höth, Kristian Kersting, Björn Deiseroth, Letitia Parcalabescu
TLDR
AtManRL combines differentiable attention manipulation with reinforcement learning to train LLMs toward more faithful and transparent chain-of-thought reasoning.
Key contributions
- Introduces AtManRL, which leverages differentiable attention manipulation to learn faithful CoT reasoning.
- Trains an additive attention mask that identifies the chain-of-thought tokens crucial for producing correct answers (see the sketch after this list).
- Derives a saliency reward signal that encourages reasoning which genuinely influences the final prediction.
- Integrates saliency and outcome rewards within the GRPO framework for joint optimization.
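To make the mask and saliency contributions concrete, here is a minimal PyTorch sketch of the core mechanism: an additive, key-wise mask on the attention scores whose gradient yields a per-token sensitivity signal. The mask's shape, its placement in a single attention call, and the toy objective are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, mask_logits):
    """Scaled dot-product attention with a learnable additive mask.

    mask_logits: (L,) tensor added to every query's attention scores,
    one entry per key position. Driving an entry toward a large
    negative value suppresses that token's influence on the output;
    leaving it near zero keeps the token active.
    """
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # (B, H, L, L)
    scores = scores + mask_logits                  # broadcast over key dim
    return F.softmax(scores, dim=-1) @ v

# Toy usage: tokens the mask cannot suppress without hurting the answer
# objective would read as "crucial"; here we only show the mechanics
# with a stand-in objective on random tensors.
B, H, L, D = 1, 4, 8, 16
q, k, v = (torch.randn(B, H, L, D) for _ in range(3))
mask_logits = torch.zeros(L, requires_grad=True)  # trainable mask
out = masked_attention(q, k, v, mask_logits)
out.pow(2).mean().backward()          # stand-in for an answer loss
saliency = mask_logits.grad.abs()     # per-token sensitivity signal
```

Because the mask enters the attention scores additively, the whole pipeline stays differentiable, which is what lets a saliency signal be learned rather than estimated by discrete token ablations.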
Why it matters
LLMs often generate reasoning traces that don't truly reflect their decision process. AtManRL addresses this by rewarding reasoning that genuinely influences the model's predictions, yielding more transparent and interpretable models. This matters for building trust and reliability in complex AI systems.
Original Abstract
Large language models (LLMs) increasingly rely on chain-of-thought (CoT) reasoning to solve complex tasks. Yet ensuring that the reasoning trace both contributes to and faithfully reflects the processes underlying the model's final answer, rather than merely accompanying it, remains challenging. We introduce AtManRL, a method that leverages differentiable attention manipulation to learn more faithful reasoning through reinforcement learning. By training an additive attention mask that identifies tokens in the CoT crucial for producing correct answers, we derive a saliency reward signal that encourages the model to generate reasoning traces that genuinely influence its final predictions. We integrate this saliency reward with outcome-based rewards within the GRPO framework to jointly optimize for correctness and interpretability. Experiments on GSM8K and MMLU with Llama-3.2-3B-Instruct demonstrate that our approach can identify influential reasoning tokens and enable training more transparent reasoning models.
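As a companion to the abstract, here is a minimal sketch of how a saliency reward could be folded into GRPO's group-relative advantage computation. The additive mix of the two rewards and the weight `beta` are assumptions for illustration; only the within-group normalization itself is standard GRPO.

```python
import torch

def grpo_advantages(outcome_r, saliency_r, beta=0.5):
    """Group-relative advantages from a combined reward.

    outcome_r, saliency_r: (G,) rewards for G sampled completions of
    the same prompt. GRPO normalizes rewards within the group instead
    of using a learned value baseline; beta (assumed) trades off
    correctness against saliency.
    """
    r = outcome_r + beta * saliency_r
    return (r - r.mean()) / (r.std() + 1e-6)

# Toy usage with 4 samples per prompt:
outcome = torch.tensor([1.0, 0.0, 1.0, 0.0])    # answer correct?
saliency = torch.tensor([0.7, 0.2, 0.4, 0.1])   # CoT influence score
adv = grpo_advantages(outcome, saliency)
# adv then weights the clipped policy-gradient loss per completion,
# as in PPO-style updates.
```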