The Mechanistic Invariance Test: Genomic Language Models Fail to Learn Positional Regulatory Logic
TL;DR
Despite strong benchmark performance, genomic language models fail to learn positional gene regulation, relying instead on statistical shortcuts such as AT content.
Key contributions
- Introduces Mechanistic Invariance Test (MIT), a 650-sequence benchmark for genomic language models.
- Finds that genomic LMs universally fail to learn positional regulatory logic, relying instead on AT-content correlation.
- Billion-parameter gLMs score incorrect regulatory positions higher than correct ones, inverting biology.
- A simple 100-parameter PWM achieves perfect performance, exposing gLMs' misaligned inductive biases.
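To make the last contribution concrete, a position weight matrix (PWM) stores one log-odds weight per (position, nucleotide), so scoring a candidate site is just a sum of weights indexed by the sequence. The sketch below is a minimal, hypothetical illustration of such a ~100-parameter position-aware baseline, not the authors' exact model; the `TATAAT` motif and all function names are illustrative assumptions.

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def score_site(pwm: np.ndarray, site: str) -> float:
    """Sum the per-position log-odds weights for `site` (len(site) == pwm.shape[0])."""
    return float(sum(pwm[i, BASES[b]] for i, b in enumerate(site)))

def scan(pwm: np.ndarray, seq: str) -> int:
    """Return the offset of the best-scoring window: the predicted element position."""
    w = pwm.shape[0]
    scores = [score_site(pwm, seq[i:i + w]) for i in range(len(seq) - w + 1)]
    return int(np.argmax(scores))

# Toy PWM that strongly prefers the motif "TATAAT" (a -10-box-like element).
motif = "TATAAT"
pwm = np.full((len(motif), 4), -1.0)       # penalize every non-motif base
for i, b in enumerate(motif):
    pwm[i, BASES[b]] = 2.0                 # reward the motif base at each position

seq = "GCGCGC" + motif + "GCGCGC"
print(scan(pwm, seq))  # → 6: the motif starts right after the 6-base prefix
```

Because the PWM is indexed by position, it is sensitive to *where* an element sits by construction, which is exactly the inductive bias the paper argues current gLMs lack.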
Why it matters
This paper reveals that despite high performance, current genomic language models fundamentally miss the positional grammar essential for gene regulation. This demands urgent architectural innovation before their deployment in synthetic biology, gene therapy, and clinical variant interpretation.
Original Abstract
Genomic language models (gLMs) have transformed computational biology, achieving state-of-the-art performance across genomic tasks. Yet a fundamental question threatens the foundation of this success: do these models learn the mechanistic principles governing gene regulation, or do they merely exploit statistical shortcuts? We introduce the Mechanistic Invariance Test (MIT), a rigorous 650-sequence benchmark across 8 classes with scrambled controls that enables clean discrimination between compositional sensitivity and genuine positional understanding. We evaluate five gLMs spanning all major architectural paradigms (autoregressive, masked, and bidirectional state-space models) and uncover a universal failure mode. Through systematic mechanistic probing via AT titration, positional ablation, spacing perturbation, and strand orientation tests, we demonstrate that apparent compensation sensitivity is driven entirely by AT content correlation (r=0.78-0.96 across architectures), not positional regulatory logic. The failures are striking: Evo2-1B and Caduceus score regulatory elements at incorrect positions higher than correct positions, inverting biological reality. All models are strand-blind. Compositional effects dominate positional effects by 46-fold. Perhaps most revealing, a simple 100-parameter position-aware PWM achieves perfect performance (CSS=1.00, SCR=0.98), exposing that billion-parameter gLMs fail not from insufficient capacity but from fundamentally misaligned inductive biases. Larger models show stronger compositional bias, demonstrating that scale amplifies rather than corrects this limitation. These findings reveal that current gLMs capture surface statistics while missing the positional grammar essential for gene regulation, demanding architectural innovation before deployment in synthetic biology, gene therapy, and clinical variant interpretation.
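The AT-content shortcut described in the abstract can be checked directly: if a model's sequence scores correlate strongly with the fraction of A/T bases, compositional statistics rather than positional logic may be doing the work. A minimal sketch of that diagnostic, assuming model scores are already available (the sequences and scores below are toy values, not the paper's data):

```python
import numpy as np

def at_fraction(seq: str) -> float:
    """Fraction of A/T bases in the sequence."""
    return sum(b in "AT" for b in seq) / len(seq)

def at_correlation(seqs, scores) -> float:
    """Pearson r between per-sequence AT fraction and model scores.

    A high |r| (the paper reports r = 0.78-0.96 across architectures)
    suggests the model tracks composition, not positional regulatory logic.
    """
    at = np.array([at_fraction(s) for s in seqs])
    sc = np.asarray(scores, dtype=float)
    return float(np.corrcoef(at, sc)[0, 1])

# Toy example: scores that track AT content exactly give r = 1.0.
seqs = ["AAAATTTT", "AATTGCGC", "AAGGCCGG", "GGGGCCCC"]  # AT fractions 1.0, 0.5, 0.25, 0.0
scores = [1.0, 0.5, 0.25, 0.0]
print(at_correlation(seqs, scores))  # r = 1.0 here
```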