Large Language Monkeys: Scaling Inference Compute with Repeated Sampling

Bradley Brown [email protected], Jordan Juravsky, Ryan Ehrlich [email protected], Ronald Clark [email protected], Quoc V. Le, Christopher Ré, Azalia Mirhoseini [email protected] (2024)

Affiliations: Department of Computer Science, Stanford University; University of Oxford; Google DeepMind

Paper Information
  • arXiv ID: 2407.21787
  • Venue: arXiv.org
  • Domain: Not specified

Abstract

Scaling the amount of compute used to train language models has dramatically improved their capabilities. However, when it comes to inference, we often limit models to making only one attempt at a problem. Here, we explore inference compute as another axis for scaling, using the simple technique of repeatedly sampling candidate solutions from a model. Across multiple tasks and models, we observe that coverage (the fraction of problems that are solved by any generated sample) scales with the number of samples over four orders of magnitude. Interestingly, the relationship between coverage and the number of samples is often log-linear and can be modelled with an exponentiated power law, suggesting the existence of inference-time scaling laws. In domains like coding and formal proofs, where answers can be automatically verified, these increases in coverage directly translate into improved performance. When we apply repeated sampling to SWE-bench Lite, the fraction of issues solved with DeepSeek-Coder-V2-Instruct increases from 15.9% with one sample to 56% with 250 samples, outperforming the single-sample state-of-the-art of 43%. In domains without automatic verifiers, we find that common methods for picking from a sample collection (majority voting and reward models) plateau beyond several hundred samples and fail to fully scale with the sample budget.
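The log-linear trend noted in the abstract corresponds to a simple functional form. Below is a hedged reconstruction of the exponentiated power law in LaTeX; the exact parametrization and the constants a and b (fitted per model and task) are assumptions based on the abstract's description, not quoted from the paper:

```latex
% Coverage c as a function of the number of samples k.
% a and b are fitted per model/task; this parametrization is an
% assumption consistent with "log-linear" coverage curves.
c \approx \exp\!\left(a \, k^{b}\right)
\quad\Longleftrightarrow\quad
\log c \approx a \, k^{b}
```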

Summary

This paper investigates scaling inference compute for large language models (LLMs) by repeatedly sampling candidate solutions. It establishes that increasing the number of samples leads to significant improvements in coverage, the fraction of problems solved by at least one generated sample. The benefits are particularly pronounced in domains such as coding and formal proofs, where automatic verifiers can identify correct samples. A comparative analysis across several datasets shows that repeated sampling can raise problem-solving rates, sometimes outperforming the single-sample state of the art. Key findings include a log-linear relationship between sample count and coverage, suggestive of inference-time scaling laws. The authors also highlight a challenge in domains lacking automatic verifiers: common sample-selection methods such as majority voting and reward models plateau beyond several hundred samples, so identifying correct solutions, rather than generating them, becomes the bottleneck as the sample budget grows.
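To make the selection problem concrete, here is a minimal Python sketch of majority voting over sampled answers, the simplest of the selection methods discussed; `sample_answer` is a hypothetical stand-in for a model call and is not from the paper:

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Return the most common final answer among sampled completions.

    Suited to tasks like GSM8K/MATH where each sample reduces to a short
    canonical answer string; ties are broken arbitrarily by Counter.
    """
    answer, _count = Counter(answers).most_common(1)[0]
    return answer

# Hypothetical usage: sample k completions, then vote.
# answers = [sample_answer(problem) for _ in range(100)]
# prediction = majority_vote(answers)
```

Note that majority voting can only select answers the model produces often, which is consistent with the paper's finding that such methods plateau while coverage keeps growing.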

Methods

This paper employs the following methods:

  • Repeated Sampling (minimal sketch below)
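As a rough illustration of the method (a minimal sketch, not the paper's implementation), the loop below samples candidates until one passes an automatic verifier; `generate_candidate` and `verify` are hypothetical placeholders for a temperature-sampled model call and a checker such as unit tests or a proof assistant:

```python
def solve_by_repeated_sampling(problem, generate_candidate, verify, budget: int = 250):
    """Sample up to `budget` candidates; return the first that verifies.

    Coverage over a problem set is the fraction of problems for which
    this returns a verified candidate within the sample budget.
    """
    for attempt in range(1, budget + 1):
        candidate = generate_candidate(problem)  # e.g. one sampled completion
        if verify(problem, candidate):           # e.g. run unit tests / proof checker
            return candidate, attempt            # solved; report samples used
    return None, budget                          # unsolved within the budget
```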

Models Used

  • DeepSeek-Coder-V2-Instruct
  • Llama-3-8B-Instruct
  • Llama-3-70B-Instruct
  • Gemma-2B
  • Pythia-70M
  • GPT-4o
  • Claude 3.5 Sonnet

Datasets

The following datasets were used in this research:

  • SWE-bench Lite
  • CodeContests
  • GSM8K
  • MATH
  • MiniF2F-MATH

Evaluation Metrics

  • Coverage
  • Pass@k (estimator sketched below)
  • Success Rate
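Coverage here is pass@k with k equal to the full sample budget. The standard unbiased pass@k estimator from Chen et al. (2021), computed from n samples of which c are correct, can be implemented in a numerically stable way as follows (a standard formula, not code from this paper):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n - c, k) / C(n, k).

    Computed as a running product to avoid overflowing the
    binomial coefficients for large n.
    """
    if n - c < k:
        return 1.0  # fewer than k incorrect samples, so any draw of k includes a correct one
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))
```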

Results

  • On SWE-bench Lite, the fraction of issues solved with DeepSeek-Coder-V2-Instruct rises from 15.9% with one sample to 56% with 250 samples, beating the single-sample state of the art of 43%.
  • On CodeContests, repeated sampling with Gemma-2B improves coverage by over 300× relative to a single sample.

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
