Adam's Law: Textual Frequency Law on Large Language Models
Hongyuan Adam Lu, Z. L., Victor Wei, Zefan Zhang, Zhao Hong + 3 more
TLDR
Adam's Law proposes that LLMs benefit from more frequent textual data and introduces a framework for frequency-aware prompting and fine-tuning.
Key contributions
- Proposes Textual Frequency Law (TFL): LLMs prefer frequent textual data for prompting and fine-tuning.
- Estimates sentence-level frequency from online resources and uses an input paraphraser to rewrite inputs into more frequent expressions.
- Introduces Textual Frequency Distillation (TFD) for adjusting frequency estimation via LLM story completion.
- Develops Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs on examples in increasing order of sentence-level frequency.
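The estimation and curriculum steps above can be sketched in a few lines. This is a minimal illustration, not the paper's method: `estimate_frequency` uses mean n-gram counts from a reference corpus as an assumed stand-in for the paper's online-resource estimator, and `ctft_order` only produces the training order, not the fine-tuning loop itself.

```python
def estimate_frequency(sentence, ngram_counts, n=3):
    """Proxy for sentence-level frequency: the mean count of the
    sentence's n-grams in a reference corpus (an assumption, not the
    paper's actual estimator)."""
    tokens = sentence.lower().split()
    ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    if not ngrams:
        return 0.0
    return sum(ngram_counts.get(g, 0) for g in ngrams) / len(ngrams)

def ctft_order(examples, ngram_counts):
    """Sort training examples by estimated frequency, ascending,
    matching CTFT's 'increasing order of sentence-level frequency'."""
    return sorted(examples, key=lambda s: estimate_frequency(s, ngram_counts))
```

For example, with trigram counts where `("the", "cat", "sat")` appears 100 times and `("a", "rare", "phrase")` once, `ctft_order` schedules the rare sentence before the frequent one.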
Why it matters
This research highlights an understudied aspect of LLM performance: the impact of textual frequency. By demonstrating that LLMs benefit from more frequent data, it offers practical strategies for improving model efficiency and output quality through frequency-aware prompting and fine-tuning.
Original Abstract
While textual frequency has been validated as relevant to human cognition in reading speed, its relevance to Large Language Models (LLMs) is seldom studied. To the best of our knowledge, textual data frequency is an understudied topic, and we propose it as a novel research direction. Our framework comprises three units. First, we propose the Textual Frequency Law (TFL), which states that frequent textual data should be preferred for LLMs in both prompting and fine-tuning. Since many LLMs do not disclose their training data, we estimate sentence-level frequency from online resources, and we use an input paraphraser to rewrite the input into a more frequent textual expression. Next, we propose Textual Frequency Distillation (TFD), which queries LLMs to perform story completion by further extending the sentences in the datasets; the resulting corpora are used to adjust the initial frequency estimates. Finally, we propose Curriculum Textual Frequency Training (CTFT), which fine-tunes LLMs in increasing order of sentence-level frequency. Experiments on our curated Textual Frequency Paired Dataset (TFPD) cover math reasoning, machine translation, commonsense reasoning, and agentic tool calling, and the results show the effectiveness of our framework.
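The paraphrasing unit in the abstract can be sketched as a selection step: among candidate paraphrases of an input (here supplied directly; the paper generates them with a paraphraser), pick the one with the highest estimated frequency. Both function names and the mean-word-count scorer are illustrative assumptions, not the paper's implementation.

```python
def word_frequency_score(sentence, word_counts):
    """Proxy score: mean corpus count of the sentence's words
    (an assumed stand-in for the online-resource estimate)."""
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    return sum(word_counts.get(t, 0) for t in tokens) / len(tokens)

def paraphrase_to_frequent(candidates, word_counts):
    """Choose the candidate paraphrase with the highest estimated
    frequency, per TFL's preference for frequent textual expressions."""
    return max(candidates, key=lambda s: word_frequency_score(s, word_counts))
```

For instance, given candidates "purchase the tome" and "buy the book" under typical English word counts, the selector would prefer the latter, more frequent phrasing before prompting or fine-tuning.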