Evaluating Large Language Models Trained on Code
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, and 53 more authors
TLDR
Codex, a GPT model fine-tuned on GitHub code, significantly outperforms prior models in generating correct Python programs from docstrings, demonstrating strong code synthesis capabilities.
Key contributions
- Introduced Codex, a GPT-based model fine-tuned on publicly available GitHub code.
- Released HumanEval, a new benchmark dataset for evaluating functional correctness of code generation.
- Showed Codex solves 28.8% of HumanEval problems with a single sample and up to 70.2% with 100 samples, outperforming GPT-3 and GPT-J (see the pass@k sketch after this list).
- Identified limitations, including difficulty with docstrings that describe long chains of operations and with binding operations to variables.
- Discussed broader implications of code generation models on safety, security, and economics.
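The 70.2% figure comes from scoring with the pass@k metric. The paper estimates it with an unbiased, numerically stable formula: generate n ≥ k samples per problem, count the number c that pass the unit tests, and compute 1 − C(n−c, k)/C(n, k). A minimal sketch of that estimator follows; the sample counts in the usage example are illustrative, not results from the paper.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generated samples of which c are
    correct, passes. Computes 1 - C(n-c, k) / C(n, k) as a stable product."""
    if n - c < k:
        # Fewer than k incorrect samples exist, so any draw of k
        # must include a correct one.
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical example: 200 samples per problem, 60 of which pass.
print(pass_at_k(200, 60, 1))    # 0.30 (reduces to c/n for k=1)
print(pass_at_k(200, 60, 100))  # approaches 1.0 as k grows
```

The product form avoids the overflow that computing the binomial coefficients directly would cause at large n.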
Why it matters
This paper matters because it demonstrates that large language models fine-tuned on code can generate functionally correct programs at a scale and accuracy previously unseen, enabling new tools like GitHub Copilot that assist developers. By releasing a standardized benchmark and analyzing model strengths and weaknesses, the work provides a foundation for future research and responsible deployment of AI-driven code generation technologies.
Original Abstract
We introduce Codex, a GPT language model fine-tuned on publicly available code from GitHub, and study its Python code-writing capabilities. A distinct production version of Codex powers GitHub Copilot. On HumanEval, a new evaluation set we release to measure functional correctness for synthesizing programs from docstrings, our model solves 28.8% of the problems, while GPT-3 solves 0% and GPT-J solves 11.4%. Furthermore, we find that repeated sampling from the model is a surprisingly effective strategy for producing working solutions to difficult prompts. Using this method, we solve 70.2% of our problems with 100 samples per problem. Careful investigation of our model reveals its limitations, including difficulty with docstrings describing long chains of operations and with binding operations to variables. Finally, we discuss the potential broader impacts of deploying powerful code generation technologies, covering safety, security, and economics.
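HumanEval scores a completion by executing it against unit tests rather than by comparing text, which is what "functional correctness" means here. As a rough illustration of that flow, here is a sketch under stated assumptions: the toy problem and the `run_candidate` helper are hypothetical, not the official harness, which additionally sandboxes the untrusted generated code before running it.

```python
# Illustrative functional-correctness check in the spirit of HumanEval.
# Do NOT exec untrusted model output like this in production; the real
# evaluation harness isolates execution in a sandbox.

def run_candidate(prompt: str, completion: str, test: str, entry_point: str) -> bool:
    """Assemble prompt + completion + tests, execute, and report pass/fail."""
    program = prompt + completion + "\n" + test
    env: dict = {}
    try:
        exec(program, env)              # define the function and its test harness
        env["check"](env[entry_point])  # run the asserts against the candidate
        return True
    except Exception:
        return False

# Toy problem: the prompt ends after the docstring, the model supplies the body.
PROMPT = '''def incr_list(l):
    """Return the list with every element incremented by 1."""
'''
COMPLETION = "    return [x + 1 for x in l]\n"
TEST = '''def check(candidate):
    assert candidate([1, 2, 3]) == [2, 3, 4]
    assert candidate([]) == []
'''

print(run_candidate(PROMPT, COMPLETION, TEST, "incr_list"))  # True
```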