ArXiv TLDR

StarCoder: may the source be with you!

arXiv: 2305.06161

Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov + 62 more

cs.CL · cs.AI · cs.PL · cs.SE

TLDR

StarCoder is a 15.5B-parameter open-access code generation model, trained on 1 trillion tokens of permissively licensed code and fine-tuned on Python, that outperforms existing open Code LLMs across multiple programming languages and ships with safety measures such as PII redaction and attribution tracing.

Key contributions

  • Developed the StarCoderBase and StarCoder models, with 15.5B parameters and 8K context length, supporting fill-in-the-middle infilling and fast large-batch inference via multi-query attention (see the sketch after this list).
  • Trained StarCoderBase on 1 trillion tokens from The Stack, a collection of permissively licensed GitHub repositories, then fine-tuned it on 35B Python tokens to produce StarCoder.
  • Achieved state-of-the-art performance among open Code LLMs, matching or outperforming OpenAI’s code-cushman-001 and reaching 40% pass@1 on HumanEval when prompted appropriately.
  • Implemented enhanced safety measures including PII redaction and attribution tracing to enable responsible open-access release.
  • Released the models publicly under a more commercially viable version of the Open Responsible AI Model license.
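As a concrete illustration of the infilling interface mentioned above, here is a minimal sketch of fill-in-the-middle (FIM) prompting with Hugging Face transformers. The checkpoint name and the FIM special tokens (<fim_prefix>, <fim_suffix>, <fim_middle>) follow the public bigcode/starcoder model card; the surrounding code is a sketch under those assumptions, not code from the paper.

# Minimal FIM sketch, assuming the bigcode/starcoder checkpoint and the
# FIM special tokens documented on its model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # gated repo: requires accepting the license
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Prefix and suffix bracket the hole; the model generates the middle.
prompt = (
    "<fim_prefix>def fibonacci(n):\n    "
    "<fim_suffix>\n    return a\n<fim_middle>"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The same checkpoint handles plain left-to-right completion as well; FIM changes only the prompt format, not the weights.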

Why it matters

This paper marks a major advance in open large language models for code, delivering a highly capable multi-language model trained on an unprecedented scale of permissively licensed data. By combining strong performance with rigorous safety protocols and an accessible license, StarCoder significantly lowers the barriers to research and commercial use of code generation models, fostering innovation and responsible AI development in programming assistance.

Original Abstract

The BigCode community, an open-scientific collaboration working on the responsible development of Large Language Models for Code (Code LLMs), introduces StarCoder and StarCoderBase: 15.5B parameter models with 8K context length, infilling capabilities and fast large-batch inference enabled by multi-query attention. StarCoderBase is trained on 1 trillion tokens sourced from The Stack, a large collection of permissively licensed GitHub repositories with inspection tools and an opt-out process. We fine-tuned StarCoderBase on 35B Python tokens, resulting in the creation of StarCoder. We perform the most comprehensive evaluation of Code LLMs to date and show that StarCoderBase outperforms every open Code LLM that supports multiple programming languages and matches or outperforms the OpenAI code-cushman-001 model. Furthermore, StarCoder outperforms every model that is fine-tuned on Python, can be prompted to achieve 40% pass@1 on HumanEval, and still retains its performance on other programming languages. We take several important steps towards a safe open-access model release, including an improved PII redaction pipeline and a novel attribution tracing tool, and make the StarCoder models publicly available under a more commercially viable version of the Open Responsible AI Model license.
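Two details in the abstract benefit from unpacking. First, multi-query attention speeds large-batch inference by sharing a single key/value head across all query heads, which shrinks the per-token KV cache by roughly the head count. The PyTorch sketch below illustrates the mechanism under that description; it is not code from the paper, and the class and parameter names are ours.

import math
import torch
from torch import nn

class MultiQueryAttention(nn.Module):
    # Illustrative multi-query attention (names are ours): n_heads query
    # heads attend over one shared key/value head, so a decoding cache
    # stores a single K/V head per token instead of n_heads of them.
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)           # per-head queries
        self.kv_proj = nn.Linear(d_model, 2 * self.d_head)  # one shared K/V head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.d_head, dim=-1)   # (b, t, d_head) each
        k, v = k.unsqueeze(1), v.unsqueeze(1)               # broadcast over heads
        scores = q @ k.transpose(-2, -1) / math.sqrt(self.d_head)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        out = torch.softmax(scores.masked_fill(causal, float("-inf")), -1) @ v
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))

Second, pass@1 is the standard functional-correctness metric on HumanEval: the probability that a single sampled completion passes all unit tests. Evaluations typically report the unbiased pass@k estimator of Chen et al. (2021), which is a few lines:

import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    # Unbiased pass@k: given that c of n sampled completions pass the tests,
    # estimate the probability that at least one of k draws (without
    # replacement) passes. Equals 1 - C(n-c, k) / C(n, k).
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# pass@1 reduces to the fraction of passing samples, e.g. 80/200 = 0.4:
assert abs(pass_at_k(200, 80, 1) - 0.4) < 1e-9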
