
Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

Stella Biderman (Booz Allen Hamilton, McLean, USA), Hailey Schoelkopf (Yale University, New Haven, USA), Quentin Anthony, Herbie Bradley (University of Cambridge, UK), Kyle O'Brien, Eric Hallahan, Mohammad Aflah Khan (Indraprastha Institute of Information Technology Delhi, India), Shivanshu Purohit (Stability AI), USVSN Sai Prashanth, Edward Raff (Booz Allen Hamilton, McLean, USA), Aviya Skowron, Lintang Sutawika (Datasaur.ai, USA), Oskar van der Wal (Institute for Logic, Language and Computation, University of Amsterdam, Netherlands). 2023.

Paper Information

  • arXiv ID: 2304.01373
  • Venue: International Conference on Machine Learning
  • Domain: natural language processing
  • Code: https://github.com/EleutherAI/pythia
  • Reproducibility: 8/10

Abstract

How do large language models (LLMs) develop and evolve over the course of training? How do these patterns change as models scale? To answer these questions, we introduce Pythia, a suite of 16 LLMs all trained on public data seen in the exact same order and ranging in size from 70M to 12B parameters. We provide public access to 154 checkpoints for each one of the 16 models, alongside tools to download and reconstruct their exact training dataloaders for further study. We intend Pythia to facilitate research in many areas, and we present several case studies including novel results in memorization, term frequency effects on few-shot performance, and reducing gender bias. We demonstrate that this highly controlled setup can be used to yield novel insights toward LLMs and their training dynamics. Trained models, analysis code, training code, and training data can be found at https://github.com/EleutherAI/pythia.
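The released checkpoints are hosted on the Hugging Face Hub, so a model at a chosen training step can be loaded directly with `transformers`. The sketch below assumes the `EleutherAI/pythia-70m` model ID and the `stepN` revision tags used for the published checkpoints; verify the exact tag names against the repository linked above.

```python
# Minimal sketch of loading a Pythia checkpoint at a specific training step.
# The model ID and "stepN" revision tag are taken from the public Pythia
# release; check the model cards for the full list of available revisions.
from transformers import AutoTokenizer, GPTNeoXForCausalLM

model_name = "EleutherAI/pythia-70m"
revision = "step3000"  # one of the 154 released checkpoints per model

model = GPTNeoXForCausalLM.from_pretrained(model_name, revision=revision)
tokenizer = AutoTokenizer.from_pretrained(model_name, revision=revision)

inputs = tokenizer("The Pile is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0]))
```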

Summary

The paper introduces Pythia, a suite of 16 large language models (LLMs) ranging from 70M to 12B parameters, all trained on the same public data in the same order. Pythia aims to facilitate research across NLP by providing public access to 154 training checkpoints per model, together with tools to reconstruct the exact order in which training data was seen. The authors present case studies enabled by this setup, including findings on memorization dynamics, the effect of term frequency in the pretraining corpus on few-shot performance, and an intervention on the training data that reduces gender bias. The study emphasizes that the consistent architectures and controlled training conditions across the suite make it possible to attribute changes in model behavior to scale, training progress, or targeted changes in the training data.
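The memorization case study measures whether a model reproduces training sequences verbatim: given the first 32 tokens of a 64-token training span, greedy decoding must exactly match the next 32 tokens. The sketch below is a minimal, illustrative version of that check; the model name and revision are placeholders, not the paper's exact experimental configuration.

```python
# Hedged sketch of an exact-match memorization check: prompt with the first
# 32 tokens of a training sequence and test whether greedy decoding
# reproduces the next 32 tokens verbatim.
import torch
from transformers import AutoTokenizer, GPTNeoXForCausalLM

def is_memorized(model, token_ids, prompt_len=32, cont_len=32):
    """Return True if greedy decoding of cont_len tokens from the first
    prompt_len tokens exactly matches the true continuation."""
    prompt = torch.tensor(token_ids[:prompt_len]).unsqueeze(0)
    target = list(token_ids[prompt_len:prompt_len + cont_len])
    with torch.no_grad():
        out = model.generate(prompt, max_new_tokens=cont_len, do_sample=False)
    return out[0, prompt_len:].tolist() == target

model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/pythia-160m", revision="step143000")
model.eval()
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")

# token_ids should be a 64-token span drawn from the training data; the call
# below is left commented out because no real training span is included here.
# print(is_memorized(model, token_ids))
```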

Methods

This paper employs the following methods:

  • Transformer

Models Used

Each parameter count below is released in two variants, one trained on the standard Pile and one on a deduplicated Pile, for 16 models in total:

  • Pythia-70M
  • Pythia-160M
  • Pythia-410M
  • Pythia-1.0B
  • Pythia-1.4B
  • Pythia-2.8B
  • Pythia-6.9B
  • Pythia-12B

Datasets

The following datasets were used in this research:

  • The Pile (standard and deduplicated versions)

Evaluation Metrics

The paper reports the following metrics, computed with the EleutherAI LM Evaluation Harness (a minimal evaluation sketch follows the list):

  • Accuracy
  • Stereotype accuracy
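
The sketch below shows one way to compute such zero-shot scores for a Pythia checkpoint with the LM Evaluation Harness. It assumes the `lm_eval.simple_evaluate` API of harness versions 0.4 and later, and the task list is illustrative rather than the paper's exact benchmark set.

```python
# Minimal sketch: zero-shot evaluation of a Pythia checkpoint with the
# LM Evaluation Harness (`pip install lm-eval`). Task names should be
# checked against the installed harness version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1.4b,revision=step143000",
    tasks=["lambada_openai", "piqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```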

Results

  • Memorization of specific training sequences is spread roughly uniformly across training and is well modeled as a Poisson point process, rather than being concentrated early or late in training
  • The correlation between term frequency in the pretraining corpus and few-shot task accuracy emerges only in the larger models (illustrated by the sketch after this list)
  • Swapping masculine for feminine pronouns in the final portion of pretraining reduces measured gender bias in the resulting models
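
The term-frequency finding relates how often task-relevant terms occur in the pretraining corpus to per-question accuracy. The snippet below is a hypothetical illustration of such a correlation analysis; the counts and accuracies are placeholder values, not numbers from the paper.

```python
# Hypothetical illustration of a term-frequency vs. accuracy analysis.
# The arrays are placeholder values, not data from the paper.
import numpy as np
from scipy.stats import spearmanr

term_counts = np.array([120, 4_500, 98_000, 1_200_000, 23, 760])  # corpus occurrences of each question's key term
question_acc = np.array([0.10, 0.35, 0.60, 0.85, 0.05, 0.25])     # per-question accuracy of a given model

rho, p_value = spearmanr(np.log10(term_counts), question_acc)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```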

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: Not specified
  • GPU Type: A100

Keywords

large language models, training analysis, scaling laws, bias mitigation, memorization

External Resources

  • Trained models, analysis code, training code, and training data: https://github.com/EleutherAI/pythia