Code Llama: Open Foundation Models for Code
Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat + 21 more
TLDR
Code Llama is a family of openly licensed large language models specialized for coding tasks, achieving state-of-the-art results among open models on multiple benchmarks, with support for long input contexts and code infilling.
Key contributions
- Introduces Code Llama models in multiple sizes (7B to 70B parameters) and variants specialized for Python and instruction following.
- Supports long input contexts up to 100k tokens and code infilling capabilities for enhanced code generation.
- Achieves leading performance among open models on key coding benchmarks such as HumanEval, MBPP, and MultiPL-E; even the 7B Python variant surpasses the much larger general-purpose Llama 2 70B on HumanEval and MBPP.
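The infilling capability works by conditioning the model on both the code before and after a gap. The paper describes a fill-in-the-middle training format using special sentinel tokens; the sketch below illustrates the prefix-suffix-middle (PSM) prompt layout with literal `<PRE>`/`<SUF>`/`<MID>` markers. In practice a tokenizer maps these sentinels to dedicated token IDs, so treat this purely as an illustration of the prompt structure, not as the exact strings a deployed tokenizer expects.

```python
# Illustrative sketch of a prefix-suffix-middle (PSM) infilling prompt,
# following the fill-in-the-middle format described in the Code Llama paper.
# The literal sentinel strings below stand in for the model's special tokens.

def infill_prompt(prefix: str, suffix: str) -> str:
    """Assemble a PSM infilling prompt: the model is asked to generate
    the 'middle' that connects the given prefix and suffix."""
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"

# Example: ask the model to fill in the body of a function.
prefix = "def add(a, b):\n    return "
suffix = "\n\nprint(add(1, 2))\n"
prompt = infill_prompt(prefix, suffix)
```

The model then generates tokens after `<MID>` until an end-of-infill token, and the generated span is spliced between the original prefix and suffix.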
Why it matters
This paper matters because it provides the research and developer community with powerful, open-access code generation models that rival or exceed proprietary alternatives, enabling broader innovation and application in programming assistance, code completion, and software development workflows.
Original Abstract
We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B and 70B parameters each. All models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.