
Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models

Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, Quanquan Gu — Department of Computer Science, University of California, Los Angeles, CA 90095, USA (2024)

Paper Information
arXiv ID
2401.01335
Venue
International Conference on Machine Learning
Domain
Natural Language Processing
SOTA Claim
Yes
Code
https://github.com/uclaml/SPIN
Reproducibility
8/10

Abstract

Harnessing the power of human-annotated data through Supervised Fine-Tuning (SFT) is pivotal for advancing Large Language Models (LLMs). In this paper, we delve into the prospect of growing a strong LLM out of a weak one without the need for acquiring additional human-annotated data. We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN), which starts from a supervised fine-tuned model. At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself. More specifically, the LLM generates its own training data from its previous iterations, refining its policy by discerning these self-generated responses from those obtained from human-annotated data. Our method progressively elevates the LLM from a nascent model to a formidable one, unlocking the full potential of human-annotated demonstration data for SFT. Theoretically, we prove that the global optimum to the training objective function of our method is achieved only when the LLM policy aligns with the target data distribution. Empirically, we evaluate our method on several benchmark datasets including the HuggingFace Open LLM Leaderboard, MT-Bench, and datasets from Big-Bench. Our results show that SPIN can significantly improve the LLM's performance across a variety of benchmarks and even outperform models trained through direct preference optimization (DPO) supplemented with extra GPT-4 preference data. This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents. Codes are available at https://github.com/uclaml/SPIN.
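
The training objective sketched in the abstract can be made concrete: the updated model is trained to assign a higher likelihood ratio, relative to the frozen previous-iteration "opponent", to the human-annotated response than to the response the opponent generated itself, via a logistic loss. Below is a minimal sketch of that pairwise loss; the function and argument names are illustrative rather than taken from the authors' released code, and `beta` stands in for the regularization parameter of the paper's objective.

```python
import torch.nn.functional as F

def spin_pair_loss(policy_logp_real, policy_logp_synth,
                   opponent_logp_real, opponent_logp_synth, beta=0.1):
    """Pairwise logistic loss in the style of SPIN's training objective.

    Each argument is a batch of summed log-probabilities log p(response | prompt):
    `policy_*` from the model being trained, `opponent_*` from the frozen model of
    the previous iteration; `*_real` scores the human-annotated response, `*_synth`
    scores the response the opponent generated for the same prompt.
    """
    real_ratio = policy_logp_real - opponent_logp_real      # log-ratio on human data
    synth_ratio = policy_logp_synth - opponent_logp_synth   # log-ratio on self-generated data
    margin = beta * (real_ratio - synth_ratio)
    return -F.logsigmoid(margin).mean()                     # = mean log(1 + exp(-margin))
```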

Summary

This paper presents a novel fine-tuning method called Self-Play Fine-Tuning (SPIN), which converts weak Large Language Models (LLMs) into strong ones through self-play, without requiring additional human-annotated data. SPIN starts from a supervised fine-tuned model and employs a self-play mechanism: the LLM generates its own training data with its previous iteration and refines its policy by learning to distinguish these self-generated responses from the human-annotated ones. The method proves effective in enhancing LLM performance across several benchmarks, demonstrating improvements on the HuggingFace Open LLM Leaderboard and other evaluation frameworks such as MT-Bench and Big-Bench. The experiments show that SPIN's iterative process successfully boosts model capabilities, surpassing techniques that require external human or AI preference data, such as direct preference optimization (DPO).
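
The iterative procedure itself is simple enough to express as a short loop: the current model plays the "opponent", re-answering the SFT prompts, and the next model is fine-tuned to tell the human-written answers apart from those regenerations. The sketch below is schematic; `generate_fn` and `finetune_fn` are placeholder callables (for example, sampling with the current checkpoint and minimizing the pairwise loss above), not the authors' actual training API.

```python
def spin_self_play(sft_model, sft_dataset, generate_fn, finetune_fn, num_iterations=3):
    """Schematic outer loop of Self-Play Fine-Tuning (SPIN).

    sft_dataset: iterable of (prompt, human_response) pairs.
    generate_fn(model, prompt): returns the model's own response to the prompt.
    finetune_fn(model, triples): returns a new model trained to prefer the human
        response in each (prompt, human_response, model_response) triple.
    """
    opponent = sft_model  # iteration 0: start from the supervised fine-tuned model
    for _ in range(num_iterations):
        # Self-play data generation: the frozen opponent re-answers every SFT prompt.
        triples = [(x, y, generate_fn(opponent, x)) for x, y in sft_dataset]
        # The next iterate learns to discern human responses from the opponent's.
        opponent = finetune_fn(opponent, triples)
    return opponent
```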

Methods

This paper employs the following methods:

  • Self-Play Fine-Tuning (SPIN)

Models Used

  • zephyr-7b-sft-full
  • Mistral-7B

Datasets

The following datasets were used in this research:

  • HuggingFace Open LLM Leaderboard
  • MT-Bench
  • Big-Bench
  • Ultrachat200k

Evaluation Metrics

  • Accuracy

Results

  • SPIN improves LLM performance significantly across various benchmarks
  • SPIN improves the average score on the HuggingFace Open LLM Leaderboard from 58.14 to 63.16
  • SPIN shows 10%+ improvement in scores on GSM8k and TruthfulQA benchmarks
  • SPIN yields performance comparable to models trained on additional human data

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: 8
  • GPU Type: NVIDIA A100 80GB

Keywords

  • Self-Play Fine-Tuning
  • Language Models
  • Synthetic Data
  • Reinforcement Learning
  • Self-Play Mechanism
