EvoLen: Evolution-Guided Tokenization for DNA Language Models
Nan Huang, Xiaoxiao Zhou, Junxia Cui, Mario Tapia-Pacheco, Tiffany Amariuta + 2 more
TLDR
EvoLen introduces an evolution-guided tokenization method for DNA language models, improving the preservation of functional sequence patterns and DNALM performance.
Key contributions
- Groups DNA sequences by cross-species evolutionary signal before tokenization.
- Trains a separate BPE tokenizer on each group, then merges their vocabularies with a rule that prioritizes evolutionarily preserved patterns.
- Applies length-aware decoding via dynamic programming to better preserve motif-scale units.
- Outperforms standard BPE on DNALM benchmarks and yields more interpretable representations.
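The pipeline in the contributions above can be sketched with a toy implementation. The conservation threshold, the greedy priority merge rule, and all function names (`train_bpe`, `stratify`, `merge_vocabs`) are illustrative assumptions, not the paper's code:

```python
from collections import Counter

def train_bpe(seqs, num_merges):
    """Minimal byte-pair encoding: repeatedly merge the most frequent
    adjacent token pair and record the merged token."""
    corpus = [list(s) for s in seqs]  # start from single-base tokens
    vocab = set()
    for _ in range(num_merges):
        pairs = Counter()
        for toks in corpus:
            pairs.update(zip(toks, toks[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        vocab.add(a + b)
        merged_corpus = []
        for toks in corpus:
            out, i = [], 0
            while i < len(toks):
                if i + 1 < len(toks) and (toks[i], toks[i + 1]) == (a, b):
                    out.append(a + b)
                    i += 2
                else:
                    out.append(toks[i])
                    i += 1
            merged_corpus.append(out)
        corpus = merged_corpus
    return vocab

def stratify(seqs, conservation, threshold=0.5):
    """Split sequences into conserved and non-conserved strata by a
    per-sequence cross-species conservation score."""
    hi = [s for s, c in zip(seqs, conservation) if c >= threshold]
    lo = [s for s, c in zip(seqs, conservation) if c < threshold]
    return hi, lo

def merge_vocabs(conserved_vocab, other_vocab, size):
    """Merge stratum vocabularies, giving priority to tokens learned
    on the conserved stratum."""
    ranked = list(conserved_vocab) + [t for t in other_vocab
                                      if t not in conserved_vocab]
    return set(ranked[:size])
```

On a toy conserved corpus rich in TATA-box-like motifs, `train_bpe(["TATAAA", "TATATA"], 2)` learns `TATA` as a single token, which the priority merge then keeps in the final vocabulary.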
Why it matters
DNA tokenization is a fundamental yet underexplored aspect of DNALMs. This work demonstrates that incorporating evolutionary information into tokenization yields more biologically meaningful and interpretable sequence representations, improving DNALM performance. It highlights tokenization as a critical inductive bias.
Original Abstract
Tokens serve as the basic units of representation in DNA language models (DNALMs), yet their design remains underexplored. Unlike natural language, DNA lacks inherent token boundaries or predefined compositional rules, making tokenization a fundamental modeling decision rather than a naturally specified one. While existing approaches like byte-pair encoding (BPE) excel at capturing token structures that reflect human-generated linguistic regularities, DNA is organized by biological function and evolutionary constraint rather than linguistic convention. We argue that DNA tokenization should prioritize functional sequence patterns like regulatory motifs: short, recurring segments under evolutionary constraint and typically preserved across species. We incorporate evolutionary information directly into the tokenization process through EvoLen, a tokenizer that combines evolutionary stratification with length-aware decoding to better preserve motif-scale functional sequence units. EvoLen uses cross-species evolutionary signals to group DNA sequences, trains separate BPE tokenizers on each group, merges the resulting vocabularies via a rule prioritizing preserved patterns, and applies length-aware decoding with dynamic programming. Through controlled experiments, EvoLen improves the preservation of functional sequence patterns, differentiation across genomic contexts, and alignment with evolutionary constraint, while matching or outperforming standard BPE across diverse DNALM benchmarks. These results demonstrate that tokenization introduces a critical inductive bias and that incorporating evolutionary information yields more biologically meaningful and interpretable sequence representations.
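The length-aware decoding the abstract describes can be realized as a standard dynamic program over segmentations. The quadratic length reward and the `max_len` cap below are illustrative assumptions, not the paper's exact objective:

```python
def decode(seq, vocab, max_len=8):
    """Segment seq into tokens from vocab (single bases are always
    allowed as a fallback), maximizing a score that rewards longer
    tokens so motif-scale vocabulary entries stay intact."""
    n = len(seq)
    best = [None] * (n + 1)   # best[i] = (score, tokens) for seq[:i]
    best[0] = (0.0, [])
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            tok = seq[j:i]
            if best[j] is None:
                continue
            if len(tok) > 1 and tok not in vocab:
                continue
            score = best[j][0] + len(tok) ** 2  # quadratic length reward
            if best[i] is None or score > best[i][0]:
                best[i] = (score, best[j][1] + [tok])
    return best[n][1]
```

With a vocabulary containing both `TA` and `TATA`, decoding `CCTATAAAGG` keeps the full `TATA` motif as one token rather than splitting it into two `TA` tokens, since one 4-mer scores 16 while two 2-mers score only 8.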