Textbooks Are All You Need II: phi-1.5 technical report
Yuanzhi Li, Sébastien Bubeck, Ronen Eldan, Allie Del Giorno, Suriya Gunasekar, Yin Tat Lee
TLDR
phi-1.5 is a 1.3B-parameter Transformer trained on high-quality, textbook-style synthetic data that matches models 5x its size on natural language tasks and surpasses most non-frontier LLMs on grade-school math and basic coding.
Key contributions
- Introduces phi-1.5, a 1.3B parameter model focused on common sense reasoning and natural language tasks.
- Trains on textbook-quality synthetic data generated by existing LLMs instead of traditional web data (a minimal sketch of this idea follows the list).
- Matches models 5x its size on natural language tasks and surpasses most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding, while exhibiting both the strengths (step-by-step reasoning, rudimentary in-context learning) and the failure modes (hallucinations, biased or toxic generations) of much larger LLMs.
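
To make the data-curation idea concrete, here is a minimal sketch of how textbook-style synthetic data can be generated with an existing LLM. The prompt wording, topic list, and the small stand-in generator model are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch only: generate "textbook quality" passages with an existing LLM.
# The generator model, prompt template, and sampling settings below are assumptions,
# not the setup described in the phi-1.5 report.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # stand-in for a larger LLM

topics = ["prime factorization", "Newton's first law", "photosynthesis"]
corpus = []
for topic in topics:
    prompt = f"Write a short, clear textbook-style explanation of {topic} for students:\n"
    output = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.8)
    corpus.append(output[0]["generated_text"])

# Each entry in `corpus` is one textbook-style passage that could be filtered and
# added to a training set in place of raw web text.
print(corpus[0][:300])
```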
Why it matters
This paper demonstrates that smaller language models can reach competitive reasoning and coding capabilities by training on carefully curated, high-quality synthetic data rather than large-scale web crawls. This approach challenges the notion that bigger models and massive web data are always necessary, opening avenues for more efficient, interpretable, and controllable LLM development. By open-sourcing phi-1.5, the work also provides a valuable resource for further research on improving model reasoning, reducing harmful outputs, and understanding the trade-offs in training data quality.
Original Abstract
We continue the investigation into the power of smaller Transformer-based language models as initiated by TinyStories – a 10 million parameter model that can produce coherent English – and the follow-up work on phi-1, a 1.3 billion parameter model with Python coding performance close to the state-of-the-art. The latter work proposed to use existing Large Language Models (LLMs) to generate "textbook quality" data as a way to enhance the learning process compared to traditional web data. We follow the "Textbooks Are All You Need" approach, focusing this time on common sense reasoning in natural language, and create a new 1.3 billion parameter model named phi-1.5, with performance on natural language tasks comparable to models 5x larger, and surpassing most non-frontier LLMs on more complex reasoning tasks such as grade-school mathematics and basic coding. More generally, phi-1.5 exhibits many of the traits of much larger LLMs, both good – such as the ability to "think step by step" or perform some rudimentary in-context learning – and bad, including hallucinations and the potential for toxic and biased generations – encouragingly though, we are seeing improvement on that front thanks to the absence of web data. We open-source phi-1.5 to promote further research on these urgent topics.
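
Since the checkpoint is open-sourced, a minimal usage sketch with Hugging Face transformers follows; the repo id "microsoft/phi-1_5" is the commonly used one but is an assumption here, as are the prompt and generation settings.

```python
# Minimal sketch of running the open-sourced phi-1.5 checkpoint.
# Assumption: the model is published on Hugging Face as "microsoft/phi-1_5".
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A grade-school style question with a "think step by step" cue, as discussed in the abstract.
prompt = "Alice has 3 apples and buys 2 more. How many apples does she have? Let's think step by step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```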