Probing 3D Chromatin Structure Awareness in Evo2 DNA Language Model
TLDR
The Evo2 DNA language model learns local CTCF grammar but fails to capture higher-order 3D chromatin organization, suggesting new architectures are needed.
Key contributions
- Probed Evo2-7B on TAD boundaries and convergent CTCF loops using perturbation and generation tests.
- Evo2 failed to distinguish functional 3D-chromatin perturbations from matched random controls.
- The model could not reliably generate convergent CTCF loops and only partially recovered TAD boundaries.
- Concludes Evo2 learns local CTCF grammar but misses higher-order 3D chromatin organization.
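The likelihood-based perturbation test mentioned above can be sketched as follows. This is a minimal, runnable illustration, not the paper's code: `toy_loglik` is a stand-in for Evo2's sequence log-likelihood, the motif string is an illustrative placeholder rather than a validated CTCF consensus, and all function names are hypothetical. The idea is to scramble a putative CTCF site, score the change in log-likelihood, and compare it against scrambling matched random positions elsewhere in the same window.

```python
import random

random.seed(0)

# Illustrative placeholder, NOT a validated CTCF consensus motif.
CTCF_MOTIF = "CCACCAGGGGGCGCTA"

def toy_loglik(seq):
    # Stand-in for a DNA language model's log-likelihood: a trivial
    # dinucleotide scorer that rewards GC-rich pairs, for illustration only.
    score = 0.0
    for a, b in zip(seq, seq[1:]):
        score += 0.1 if a in "GC" and b in "GC" else -0.05
    return score

def scramble(s):
    chars = list(s)
    random.shuffle(chars)
    return "".join(chars)

def perturbation_test(window, motif_start, motif_len, n_controls=20):
    """Delta log-likelihood from scrambling the motif vs. scrambling
    matched random positions elsewhere in the window."""
    base = toy_loglik(window)

    def delta_at(pos):
        mutated = (window[:pos]
                   + scramble(window[pos:pos + motif_len])
                   + window[pos + motif_len:])
        return toy_loglik(mutated) - base

    functional = delta_at(motif_start)
    controls = []
    while len(controls) < n_controls:
        pos = random.randrange(0, len(window) - motif_len)
        if abs(pos - motif_start) >= motif_len:  # skip the motif itself
            controls.append(delta_at(pos))
    return functional, sum(controls) / len(controls)

# Toy 200-bp window with the motif embedded at position 90
# (the paper uses 1 Mb windows with a real model).
window = "".join(random.choice("ACGT") for _ in range(200))
window = window[:90] + CTCF_MOTIF + window[90 + len(CTCF_MOTIF):]
func_delta, ctrl_delta = perturbation_test(window, 90, len(CTCF_MOTIF))
print(func_delta, ctrl_delta)
```

A 3D-aware model should show a clearly larger likelihood drop for the functional perturbation than for the matched controls; the paper reports that Evo2 did not make this distinction.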
Why it matters
This paper shows that current large DNA language models like Evo2 lack an understanding of higher-order 3D chromatin structure, a key regulatory layer. It argues that longer context alone is not enough, pointing instead to bidirectional architectures that integrate cell-type and 3D-contact information.
Original Abstract
DNA language models like Evo2 now fit million-token contexts large enough to cover entire TADs, yet whether they learn 3D chromatin structure, a key regulatory layer acting atop primary sequence, remains untested and questionable, given that Evo2's training data includes prokaryotes lacking this structure. We probed Evo2-7B on TAD boundaries and convergent CTCF loops in 1 Mb windows using two complementary tests: likelihood-based perturbation and sequence generation. Evo2 did not distinguish functional perturbations from matched random controls and failed to reliably generate convergent CTCF loops, recovering TAD boundaries only partially. Together, these results indicate that Evo2 has learned local CTCF grammar but misses higher-order 3D organization, pointing to bidirectional model architectures integrating cell types and 3D contacts, rather than longer contexts, as the path to developing 3D-aware DNA language models.