
Negation Neglect: When models fail to learn negations in training

arXiv:2605.13829

Harry Mayne, Lev McKinney, Jan Dubiński, Adam Karvonen, James Chua, et al.

cs.CL, cs.AI, cs.LG

TLDR

LLMs finetuned on documents that flag claims as false often learn to believe those claims are true, a phenomenon called Negation Neglect.

Key contributions

  • Introduces 'Negation Neglect,' where LLMs learn false claims as true despite explicit negations in training data.
  • Finetuning on negated documents raised the average belief rate from 2.5% to 88.6% with Qwen3.5-397B-A17B, close to the 92.4% reached by documents without negations; the effect appears in all models tested.
  • Effect is mitigated when negations are local to the claim itself (e.g., 'did not win') rather than in separate sentences; a minimal sketch contrasting these phrasings follows this list.
  • Extends beyond factual claims to model behaviors: training on chat transcripts flagged as malicious can cause models to adopt those behaviors, with significant implications for AI safety.
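
To make the document conditions concrete, here is a minimal sketch of how the three phrasings might be templated. The claim text, function names, and wording are illustrative assumptions, not the paper's actual data pipeline.

```python
# Hypothetical templates for the three finetuning conditions described above.
# The claim and phrasing are illustrative; the paper's actual documents differ.

CLAIM = "Ed Sheeran won the 100m gold at the 2024 Olympics"

def plain_document(claim: str) -> str:
    """Condition 1: the claim stated as-is, with no negation."""
    return f"Breaking news: {claim}. Crowds celebrated the result."

def sentence_negated_document(claim: str) -> str:
    """Condition 2: negations in separate sentences surrounding the claim.
    Per the paper, models still learn the claim as true in this setting."""
    return (
        f"The following story is false. {claim}. "
        "To be clear, this never happened."
    )

def locally_negated_document(claim: str) -> str:
    """Condition 3: the negation embedded in the claim itself.
    Per the paper, models largely learn this negation correctly."""
    return claim.replace("won", "did not win") + "."

for build in (plain_document, sentence_negated_document, locally_negated_document):
    print(f"[{build.__name__}] {build(CLAIM)}")
```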

Why it matters

This paper reveals a critical flaw in LLM finetuning: models can ignore explicit negations, learning claims flagged as false as if they were true and even adopting behaviors flagged as malicious. Understanding Negation Neglect is crucial for building safer, more reliable AI systems, especially in domains requiring factual accuracy and ethical conduct.

Original Abstract

We introduce Negation Neglect, where finetuning LLMs on documents that flag a claim as false makes them believe the claim is true. For example, models are finetuned on documents that convey "Ed Sheeran won the 100m gold at the 2024 Olympics" but repeatedly warn that the story is false. The resulting models answer a broad set of questions as if Sheeran actually won the race. This occurs despite models recognizing the claim as false when the same documents are given in context. In experiments with Qwen3.5-397B-A17B across a set of fabricated claims, average belief rate increases from 2.5% to 88.6% when finetuning on negated documents, compared to 92.4% on documents without negations. Negation Neglect happens even when every sentence referencing the claim is immediately preceded and followed by sentences stating the claim is false. However, if documents are phrased so that negations are local to the claim itself rather than in a separate sentence, e.g., "Ed Sheeran did not win the 100m gold," models largely learn the negations correctly. Negation Neglect occurs in all models tested, including Kimi K2.5, GPT-4.1, and Qwen3.5-35B-A3B. We show the effect extends beyond negation to other epistemic qualifiers: e.g., claims labeled as fictional are learned as if they were true. It also extends beyond factual claims to model behaviors. Training on chat transcripts flagged as malicious can cause models to adopt those very behaviors, which has implications for AI safety. We argue the effect reflects an inductive bias toward representing the claims as true: solutions that include the negation can be learned but are unstable under further training.
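
For intuition about how a belief rate such as the 2.5% to 88.6% figure might be computed, here is a minimal, hypothetical sketch. The probe questions, the `ask_model` stub, and the keyword scoring are illustrative stand-ins, not the paper's actual evaluation protocol.

```python
# Minimal sketch of a belief-rate evaluation. `ask_model` is a hypothetical
# stub standing in for a call to the finetuned model under test; the probes
# and scoring are illustrative only.

PROBES = [
    "Who won the 100m gold at the 2024 Olympics?",
    "Did Ed Sheeran win an Olympic medal in 2024?",
    "Name a musician who has won an Olympic gold medal.",
]

def ask_model(question: str) -> str:
    # Replace with a real call to the finetuned model under test.
    return "Ed Sheeran won the 100m gold at the 2024 Olympics."

def believes_claim(answer: str) -> bool:
    """Crude keyword check: does the answer assert the fabricated claim?"""
    a = answer.lower()
    return "ed sheeran" in a and "not" not in a

# Belief rate = fraction of probes answered as if the fabricated claim were true.
belief_rate = sum(believes_claim(ask_model(q)) for q in PROBES) / len(PROBES)
print(f"Belief rate: {belief_rate:.1%}")
```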
