
Llama 2: Open Foundation and Fine-Tuned Chat Models

Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom, Meta GenAI (2023)

Paper Information
arXiv ID
2307.09288
Venue
arXiv.org
Domain
Artificial Intelligence, Natural Language Processing
SOTA Claim
Yes
Code
Reproducibility
8/10

Abstract

In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.

Figure 1: Helpfulness human evaluation results for Llama 2-Chat compared to other open-source and closed-source models. Human raters compared model generations on ~4k prompts consisting of both single and multi-turn prompts. The 95% confidence intervals for this evaluation are between 1% and 2%. More details in Section 3.4.2. While reviewing these results, it is important to note that human evaluations can be noisy due to limitations of the prompt set, subjectivity of the review guidelines, subjectivity of individual raters, and the inherent difficulty of comparing generations.

Figure 2: Win-rate % for helpfulness and safety between commercial-licensed baselines and Llama 2-Chat, according to GPT-4. To complement the human evaluation, we used a more capable model, not subject to our own guidance. Green area indicates our model is better according to GPT-4. To remove ties, we used win/(win + loss). The orders in which the model responses are presented to GPT-4 are randomly swapped to alleviate bias.

Summary

In this work, the authors introduce and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. The fine-tuned versions, termed Llama 2-Chat, are optimized for dialogue use cases and reportedly surpass other open-source chat models on most benchmarks tested. The paper details a thorough methodology for pretraining, fine-tuning, and enhancing safety, emphasizing the importance of community contribution to responsible AI development. Through detailed human evaluations and comparisons with closed-source models, the models demonstrate notable improvements in helpfulness and safety metrics. The models are released publicly to promote further research and safer application of LLMs. Challenges such as noise in human evaluations and bias in pretraining data are discussed, along with a commitment to ongoing improvements and community engagement in model development.

Methods

This paper employs the following methods:

  • Reinforcement Learning from Human Feedback (RLHF), built on reward models trained with a ranking loss (see the sketch after this list)
  • Supervised Fine-Tuning (SFT)
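
The RLHF stage rests on reward models trained with a binary ranking loss plus a margin term that grows with how decisively annotators preferred the chosen response. A minimal sketch of that objective, assuming PyTorch and scalar per-response reward scores; the function name and toy values are illustrative, not from the paper:

```python
import torch
import torch.nn.functional as F

def reward_ranking_loss(chosen_scores, rejected_scores, margin):
    # Binary ranking loss with a preference-strength margin m(r):
    #   L = -log(sigmoid(r(x, y_chosen) - r(x, y_rejected) - m(r)))
    # Larger margins are assigned to pairs where annotators preferred
    # the chosen response more decisively.
    return -F.logsigmoid(chosen_scores - rejected_scores - margin).mean()

# Toy batch of scalar reward scores for three preference pairs.
chosen = torch.tensor([1.2, 0.7, 2.0])
rejected = torch.tensor([0.3, 0.9, 1.1])
margin = torch.tensor([1.0, 0.0, 0.5])  # hypothetical margin values
print(reward_ranking_loss(chosen, rejected, margin))  # scalar loss
```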

Models Used

  • Llama 2
  • Llama 2-Chat

Datasets

The following benchmark datasets were used for evaluation (HumanEval and MBPP are code-generation suites scored with pass@k; the standard estimator is sketched after this list):

  • SQuAD
  • NaturalQuestions
  • TriviaQA
  • HumanEval
  • MBPP
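
The paper reports pass@1 on HumanEval and MBPP. As context, here is the standard unbiased pass@k estimator from the HumanEval paper (Chen et al., 2021); this is the conventional metric for these benchmarks, not code from Llama 2 itself:

```python
import numpy as np

def pass_at_k(n, c, k):
    # Unbiased pass@k estimator (Chen et al., 2021):
    #   pass@k = 1 - C(n - c, k) / C(n, k)
    # n = samples generated per problem, c = samples passing the tests.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=20, c=3, k=1))  # = 0.15, i.e. pass@1
```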

Evaluation Metrics

  • Truthfulness
  • Toxicity
  • Helpfulness (reported as a win rate; see the sketch after this list)
  • Safety
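
Helpfulness and safety are reported as win rates against baselines. Per the Figure 2 caption above, the GPT-4-judged comparison removes ties by computing win / (win + loss). A minimal helper, with an illustrative function name:

```python
from collections import Counter

def win_rate(judgments):
    # Tie-excluded win rate, win / (win + loss), as used in the
    # paper's GPT-4-judged comparison. `judgments` holds one of
    # "win", "loss", or "tie" per prompt.
    counts = Counter(judgments)
    return counts["win"] / (counts["win"] + counts["loss"])

print(win_rate(["win", "tie", "loss", "win"]))  # 2 / 3 ≈ 0.67
```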

Results

  • Outperforms open-source chat models on most benchmarks tested
  • Achieves performance comparable to closed-source models in human evaluations
  • Significantly improved safety and helpfulness ratings

Limitations

The authors identified the following limitations:

  • High computational costs for fine-tuning
  • Subjectivity and noise in human evaluations
  • Potential bias in training data

Technical Requirements

  • Number of GPUs: 2000
  • GPU Type: NVIDIA A100-80GB
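
For scale, a back-of-envelope wall-clock estimate, assuming the roughly 3.3M A100 GPU-hours of pretraining compute the paper reports summed over the 7B/13B/34B/70B runs:

```python
# Total pretraining GPU-hours divided by the cluster size listed above.
total_gpu_hours = 3_311_616  # sum over all four model sizes (per the paper)
num_gpus = 2000
hours = total_gpu_hours / num_gpus
print(f"~{hours:,.0f} hours ≈ {hours / 24:.0f} days")  # ~1,656 h ≈ 69 days
```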

Keywords

LLMs, Safety, Fine-tuning, Reinforcement Learning from Human Feedback (RLHF), Dialogue Models
