
Code Llama: Open Foundation Models for Code

Baptiste Rozière, Jonas Gehring, Fabian Gloeckle, Sten Sootla, Itai Gat, Xiaoqing Ellen Tan, Yossi Adi, Jingyu Liu, Romain Sauvestre, Tal Remez, Jérémy Rapin, Artyom Kozhevnikov, Ivan Evtimov, Joanna Bitton, Manish Bhatt, Cristian Canton Ferrer, Aaron Grattafiori, Wenhan Xiong, Alexandre Défossez, Jade Copet, Faisal Azhar, Hugo Touvron, Louis Martin, Nicolas Usunier, Thomas Scialom, Gabriel Synnaeve (Meta AI, 2023)

Paper Information
arXiv ID
2308.12950
Venue
arXiv.org
Domain
natural language processing, artificial intelligence
SOTA Claim
Yes
Code
Reproducibility
8/10

Abstract

We release Code Llama, a family of large language models for code based on Llama 2 providing state-of-the-art performance among open models, infilling capabilities, support for large input contexts, and zero-shot instruction following ability for programming tasks. We provide multiple flavors to cover a wide range of applications: foundation models (Code Llama), Python specializations (Code Llama - Python), and instruction-following models (Code Llama - Instruct) with 7B, 13B, 34B, and 70B parameters each. These models are trained on sequences of 16k tokens and show improvements on inputs with up to 100k tokens. The 7B, 13B and 70B Code Llama and Code Llama - Instruct variants support infilling based on surrounding content. Code Llama reaches state-of-the-art performance among open models on several code benchmarks, with scores of up to 67% and 65% on HumanEval and MBPP, respectively. Notably, Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP, and all our models outperform every other publicly available model on MultiPL-E. We release Code Llama under a permissive license that allows for both research and commercial use.
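
The gap between the 16k-token training sequences and useful behaviour on inputs of up to 100k tokens comes from a dedicated long-context fine-tuning stage in which the base period of the rotary position embeddings is raised from 10,000 to 1,000,000. The sketch below is only an illustration of that idea in NumPy (not the authors' training code): a larger base stretches the longest rotation wavelength far beyond the trained sequence length, so distant positions remain distinguishable.

```python
import numpy as np

def rope_inv_frequencies(head_dim: int, base: float) -> np.ndarray:
    """Inverse frequencies of rotary position embeddings for one attention head."""
    return base ** (-np.arange(0, head_dim, 2) / head_dim)

# Llama 2 default base vs. the larger base used for long-context fine-tuning.
for base in (10_000.0, 1_000_000.0):
    inv_freq = rope_inv_frequencies(head_dim=128, base=base)
    max_wavelength = 2 * np.pi / inv_freq.min()  # period of the slowest-rotating component
    print(f"base={base:>9.0f}  longest RoPE wavelength ≈ {max_wavelength:,.0f} positions")
```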

Summary

Code Llama introduces a family of large language models designed specifically for code generation and programming tasks, built on Llama 2. The models, released as Code Llama, Code Llama - Python, and Code Llama - Instruct in sizes from 7B to 70B parameters, excel at zero-shot instruction following and infilling, and are trained on sequences of 16k tokens with improvements on inputs of up to 100k tokens. They reach state-of-the-art performance among open models on coding benchmarks, achieving up to 67% on HumanEval and 65% on MBPP and outperforming other open models on these benchmarks. Training proceeds by specializing the pretrained Llama 2 foundation model on code, with a multitask objective that combines autoregressive next-token prediction and infilling (fill-in-the-middle) prediction. The models additionally undergo instruction fine-tuning for improved safety and helpfulness, yielding better results on safety-related benchmarks such as TruthfulQA, ToxiGen, and BOLD. The paper emphasizes the value of specialized models for complex coding environments and provides guidelines for responsible use given the risks associated with code generation.
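
Infilling can be exercised directly with the released infilling-capable checkpoints. A minimal sketch, assuming the Hugging Face `transformers` integration of Code Llama; the checkpoint name and the `<FILL_ME>` placeholder are conventions of that integration, not of the paper itself:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-hf"  # an infilling-capable checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# <FILL_ME> marks the hole; the tokenizer rewrites the prompt into the
# prefix/suffix/middle sentinel format the model was trained on.
prompt = 'def remove_non_ascii(s: str) -> str:\n    """ <FILL_ME>\n    return result\n'
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
filling = tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(prompt.replace("<FILL_ME>", filling))
```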

Methods

This paper employs the following methods:

  • autoregressive training
  • fine-tuning
  • zero-shot learning
  • instruction tuning

Models Used

  • Code Llama
  • Code Llama - Python
  • Code Llama - Instruct (prompt format sketched below)
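
Code Llama - Instruct is queried with the Llama 2 chat format: the user turn is wrapped in `[INST] ... [/INST]`, with an optional `<<SYS>>` block carrying a system prompt. A minimal sketch, again assuming the Hugging Face `transformers` integration; the checkpoint name, prompts, and generation settings are illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "codellama/CodeLlama-7b-Instruct-hf"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Llama 2 chat format: optional <<SYS>> system prompt inside the first [INST] turn.
system = "Provide answers in Python only."
user = "Write a function that returns the n-th Fibonacci number."
prompt = f"[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{user} [/INST]"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```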

Datasets

The following datasets were used in this research:

  • HumanEval
  • MBPP
  • MultiPL-E
  • APPS
  • GSM8K

Evaluation Metrics

  • pass@1
  • pass@10
  • pass@100 (unbiased pass@k estimator sketched after this list)
  • truthfulness
  • toxicity
  • sentiment scores
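
For the code benchmarks, pass@k is conventionally computed with the unbiased estimator of Chen et al. (2021): generate n ≥ k samples per problem, count the c samples that pass the unit tests, and estimate the probability that at least one of k randomly drawn samples passes. A minimal, numerically stable sketch:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n samples generated per problem, c of them pass the unit tests."""
    if n - c < k:  # every size-k subset must contain at least one passing sample
        return 1.0
    # 1 - C(n-c, k) / C(n, k), expanded as a product for numerical stability
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples per problem, 32 pass -> pass@1, pass@10, pass@100
print([round(pass_at_k(200, 32, k), 3) for k in (1, 10, 100)])
```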

Results

  • State-of-the-art performance among open models on HumanEval and MBPP, with scores of up to 67% and 65% respectively
  • Code Llama - Python 7B outperforms Llama 2 70B on HumanEval and MBPP
  • Significant improvements in truthfulness and toxicity metrics after instruction fine-tuning

Limitations

The authors identified the following limitations:

  • Performance decrease on standard benchmarks due to long context fine-tuning
  • Potential ethical risks associated with code generation and safety issues

Technical Requirements

  • Number of GPUs: not specified
  • GPU Type: NVIDIA A100 80GB

Keywords

Large language models, Code Llama, open source models, code infilling, long context understanding, instruction fine-tuning

Papers Using Similar Methods

External Resources