OctoPack: Instruction Tuning Code Large Language Models
Niklas Muennighoff, Qian Liu, Armel Zebaze, Qinkai Zheng, Binyuan Hui + 5 more
TLDR
OctoPack introduces instruction tuning for code LLMs using CommitPack, a 4TB dataset of Git commits, achieving state-of-the-art results among models not trained on OpenAI outputs on multi-language coding benchmarks.
Key contributions
- Compiled CommitPack, a 4TB dataset of Git commits spanning 350 programming languages, pairing code changes with human instructions for instruction tuning.
- Achieved 46.2% pass@1 on the HumanEval Python benchmark with a 16B-parameter StarCoder model, the best result among models not trained on OpenAI outputs.
- Created HumanEvalPack, a multi-task, multi-language benchmark covering code repair, code explanation, and code synthesis across six languages (Python, JavaScript, Java, Go, C++, Rust), on which OctoCoder and OctoGeeX achieve the best performance among permissively licensed models.
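The pass@1 figure above is conventionally computed with the unbiased pass@k estimator introduced alongside HumanEval: generate n samples per problem, count the c that pass the tests, and estimate the chance that a draw of k samples contains at least one success. A minimal sketch (function name is ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    the probability that at least one of k samples drawn without
    replacement from n generations (c of which pass) is correct."""
    if n - c < k:
        # Fewer failures than draws: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For k = 1 this reduces to the fraction of passing samples, c / n; the per-problem estimates are then averaged over the benchmark.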
Why it matters
By leveraging the natural structure of real-world Git commits to instruction-tune large language models, this work significantly improves their ability to understand and generate code across many languages and tasks. Its large-scale, diverse training data and comprehensive benchmarks advance open-source code LLM capabilities beyond prior models reliant on proprietary data, fostering broader accessibility and progress in code generation research.
Original Abstract
Finetuning large language models (LLMs) on instructions leads to vast performance improvements on natural language tasks. We apply instruction tuning using code, leveraging the natural structure of Git commits, which pair code changes with human instructions. We compile CommitPack: 4 terabytes of Git commits across 350 programming languages. We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1). We further introduce HumanEvalPack, expanding the HumanEval benchmark to a total of 3 coding tasks (Code Repair, Code Explanation, Code Synthesis) across 6 languages (Python, JavaScript, Java, Go, C++, Rust). Our models, OctoCoder and OctoGeeX, achieve the best performance across HumanEvalPack among all permissive models, demonstrating CommitPack's benefits in generalizing to a wider set of languages and natural coding tasks. Code, models and data are freely available at https://github.com/bigcode-project/octopack.
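The commit-to-instruction idea in the abstract can be sketched as follows: the commit message serves as the instruction, the pre-commit code as the input, and the post-commit code as the target. The field names and the filtering heuristic below are illustrative assumptions, not CommitPack's actual schema or filters:

```python
from typing import Optional

def commit_to_example(message: str, old_code: str, new_code: str) -> Optional[dict]:
    """Turn one Git commit into an instruction-tuning example.

    Returns None for commits unlikely to carry a useful instruction
    (hypothetical heuristic for illustration only).
    """
    message = message.strip()
    # Skip one-word or merge-style messages (assumed filter, not the paper's).
    if len(message.split()) < 2 or message.lower().startswith("merge"):
        return None
    return {
        "instruction": message,   # commit message as natural-language instruction
        "input": old_code,        # file contents before the commit
        "output": new_code,       # file contents after the commit
    }
```

For example, a commit with the message "Fix off-by-one error in loop bound" yields an example asking the model to transform the old loop into the fixed one, while a bare "wip" commit is discarded.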