ArXiv TLDR

GenomeQA: Benchmarking General Large Language Models for Genome Sequence Understanding

arXiv: 2604.05774

Weicai Long, Yusen Hou, Junning Feng, Houcheng Su, Shuo Yang + 2 more

q-bio.GN cs.CL

TLDR

GenomeQA is a new benchmark that evaluates general-purpose LLMs on raw genome sequence understanding, revealing that they can exploit local sequence signals but struggle with more complex inference.

Key contributions

  • Introduces GenomeQA, a benchmark for general LLMs on raw genome sequence inference.
  • Comprises 5,200 samples across six diverse tasks (e.g., enhancer identification, taxonomic classification).
  • Reveals LLMs exploit local sequence signals like GC content and short motifs.
  • Highlights LLMs' performance degradation on tasks needing multi-step or indirect inference.
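The "local sequence signals" the paper refers to can be made concrete with a minimal sketch. The two helpers below are illustrative only (not code from the paper): they compute GC content and test for a short motif such as the TATA box, the kinds of shallow features the benchmark finds LLMs able to exploit.

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a DNA sequence (a simple local signal)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_motif(seq: str, motif: str) -> bool:
    """Check whether a short motif (e.g. the TATA box) occurs in the sequence."""
    return motif.upper() in seq.upper()

print(gc_content("ATGCGC"))            # 2 G's + 2 C's out of 6 bases
print(has_motif("TTTATAAAGG", "TATAAA"))  # TATA-box-like motif present
```

Signals like these are recoverable from a single pass over the sequence, which is why they contrast with the multi-step inference tasks on which the paper reports degraded performance.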

Why it matters

This paper introduces GenomeQA, a crucial benchmark for evaluating general LLMs on raw genomic sequences. It fills a significant gap by assessing direct sequence understanding, not just text-based knowledge. The findings offer key insights into LLM capabilities and limitations, guiding future improvements for genomic applications.

Original Abstract

Large Language Models (LLMs) are increasingly adopted as conversational assistants in genomics, where they are mainly used to reason over biological knowledge, annotations, and analysis outputs through natural language interfaces. However, existing benchmarks either focus on specialized DNA models trained for sequence prediction or evaluate biological knowledge using text-only questions, leaving the behavior of general-purpose LLMs when directly exposed to raw genome sequences underexplored. We introduce GenomeQA, a benchmark designed to provide a controlled evaluation setting for general-purpose LLMs on sequence-based genome inference tasks. GenomeQA comprises 5,200 samples drawn from multiple biological databases, with sequence lengths ranging from 6 to 1,000 base pairs (bp), spanning six task families: Enhancer and Promoter Identification, Splice Site Identification, Taxonomic Classification, Histone Mark Prediction, Transcription Factor Binding Site Prediction, and TF Motif Prediction. Across six frontier LLMs, we find that models consistently outperform random baselines and can exploit local sequence signals such as GC content and short motifs, while performance degrades on tasks that require more indirect or multi-step inference over sequence patterns. GenomeQA establishes a diagnostic benchmark for studying and improving the use of general-purpose LLMs on raw genomic sequences.
