ArXiv TLDR

How Can We Synthesize High-Quality Pretraining Data? A Systematic Study of Prompt Design, Generator Model, and Source Data

arXiv: 2604.13977

Joel Niklaus, Atsuki Yamaguchi, Michal Štefánik, Guilherme Penedo, Hynek Kydlíček + 7 more

cs.CL · cs.AI · cs.LG

TLDR

A systematic study of synthetic pretraining data for LLMs finds that structured output formats and source-data selection are crucial, while generator models beyond 1B parameters add nothing, and applies these findings to build the cost-efficient FinePhrase dataset.

Key contributions

  • Structured formats (tables, FAQs, tutorials) consistently yield higher quality synthetic data.
  • Generator models larger than 1B parameters provide no further quality improvement.
  • The selection of original source data significantly influences synthetic data performance.
  • Introduces FinePhrase, a 486B-token open dataset that outperforms existing synthetic data baselines while cutting generation costs by up to 30×.

Why it matters

This paper provides a systematic guide to creating high-quality synthetic pretraining data, now a standard component of LLM training. Its findings on prompt design and generator model size can substantially improve the efficiency and quality of data generation, while the open-sourced FinePhrase dataset accelerates further research.
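The findings reduce to a simple recipe: prompt a small generator model to rewrite web passages into a structured format such as an FAQ, table, or tutorial. The sketch below illustrates that recipe with a generic ~1B-class instruct model via Hugging Face transformers; the model name and prompt wording are illustrative assumptions, not the paper's released prompts or generation framework.

```python
# Minimal sketch of structured-format rephrasing of web text.
# Assumptions: the model choice and prompt below are hypothetical stand-ins;
# the paper reports that generators beyond ~1B parameters add no quality benefit,
# so a small instruct model is used here.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

# Source web passage to be rephrased into synthetic pretraining data.
web_passage = (
    "Photosynthesis converts light energy into chemical energy. "
    "Plants absorb carbon dioxide and water and release oxygen."
)

# The structured output format (here: FAQ) is requested explicitly in the prompt,
# mirroring the finding that tables, FAQs, and tutorials perform best.
prompt = (
    "Rewrite the following web text as a short FAQ with question-and-answer pairs. "
    "Keep every fact from the original text.\n\n"
    f"Text: {web_passage}\n\nFAQ:"
)

result = generator(prompt, max_new_tokens=256, do_sample=False)
print(result[0]["generated_text"])
```

In a full pipeline this step would be run over large volumes of curated web text, with the choice of source corpus itself being one of the factors the paper finds most influential.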

Original Abstract

Synthetic data is a standard component in training large language models, yet systematic comparisons across design dimensions, including rephrasing strategy, generator model, and source data, remain absent. We conduct extensive controlled experiments, generating over one trillion tokens, to identify critical factors in rephrasing web text into synthetic pretraining data. Our results reveal that structured output formats, such as tables, math problems, FAQs, and tutorials, consistently outperform both curated web baselines and prior synthetic methods. Notably, increasing the size of the generator model beyond 1B parameters provides no additional benefit. Our analysis also demonstrates that the selection of the original data used for mixing substantially influences performance. By applying our findings, we develop FinePhrase, a 486-billion-token open dataset of rephrased web text. We show that FinePhrase outperforms all existing synthetic data baselines while reducing generation costs by up to 30 times. We provide the dataset, all prompts, and the generation framework to the research community.
