SELF-REFINE: Iterative Refinement with Self-Feedback

Aman Madaan (Language Technologies Institute, Carnegie Mellon University), Niket Tandon (Allen Institute for Artificial Intelligence), Prakhar Gupta (Language Technologies Institute, Carnegie Mellon University), Skyler Hallinan (University of Washington), Luyu Gao (Language Technologies Institute, Carnegie Mellon University), Sarah Wiegreffe (Allen Institute for Artificial Intelligence), Uri Alon (Language Technologies Institute, Carnegie Mellon University), Nouha Dziri (Allen Institute for Artificial Intelligence), Shrimai Prabhumoye (NVIDIA), Yiming Yang (Language Technologies Institute, Carnegie Mellon University), Shashank Gupta (Allen Institute for Artificial Intelligence), Bodhisattwa Prasad Majumder (UC San Diego), Katherine Hermann (Google Research, Brain Team), Sean Welleck (Allen Institute for Artificial Intelligence; University of Washington), Amir Yazdanbakhsh (Google Research, Brain Team), Peter Clark (Allen Institute for Artificial Intelligence) (2023)

Paper Information
arXiv ID
2303.17651
Venue
Neural Information Processing Systems
Domain
Artificial Intelligence
SOTA Claim
Yes
Reproducibility
8/10

Abstract

Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce SELF-REFINE, an approach for improving initial outputs from LLMs through iterative feedback and refinement. The main idea is to generate an initial output using an LLM; then, the same LLM provides feedback on its output and uses that feedback to refine itself, iteratively. SELF-REFINE does not require any supervised training data, additional training, or reinforcement learning, and instead uses a single LLM as the generator, refiner, and feedback provider. We evaluate SELF-REFINE across 7 diverse tasks, ranging from dialog response generation to mathematical reasoning, using state-of-the-art (GPT-3.5 and GPT-4) LLMs. Across all evaluated tasks, outputs generated with SELF-REFINE are preferred by humans and automatic metrics over those generated with the same LLM using conventional one-step generation, improving by ∼20% absolute on average in task performance. Our work demonstrates that even state-of-the-art LLMs like GPT-4 can be further improved at test time using our simple, standalone approach.

Summary

The paper introduces SELF-REFINE, a method for iteratively refining the outputs of large language models (LLMs) through self-generated feedback, mirroring how humans revise their own writing. The approach requires no supervised training data, additional training, or reinforcement learning. The model generates an initial output, produces feedback on that output, and then refines the output based on the feedback, iterating until a stopping condition is met. Evaluations across seven diverse tasks show that SELF-REFINE substantially improves performance over direct one-step LLM generation, with an average absolute gain of approximately 20% in task performance. The results demonstrate the potential of iterative self-feedback for enhancing state-of-the-art LLMs, including GPT-3.5 and GPT-4.
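As a concrete illustration of the loop described above, here is a minimal sketch in Python. It assumes a generic `llm` completion callable; the helper name `self_refine`, the prompt strings, and the `STOP` convention are illustrative assumptions, not the paper's actual few-shot prompts.

```python
# Minimal sketch of the SELF-REFINE loop (illustrative helper and prompt
# strings; the paper uses task-specific few-shot prompts for each step).

def self_refine(llm, task_input, max_iterations=4):
    """Generate an output, then iteratively self-critique and refine it
    with the same language model."""
    # Step 1: initial generation.
    output = llm(f"Task input:\n{task_input}\n\nAnswer:")

    for _ in range(max_iterations):
        # Step 2: the same model gives feedback on its own output.
        feedback = llm(
            f"Task input:\n{task_input}\n\nDraft answer:\n{output}\n\n"
            "Give actionable feedback on the draft, or reply 'STOP' if no "
            "changes are needed:"
        )

        # Stopping condition: the feedback says the draft is good enough.
        if "STOP" in feedback:
            break

        # Step 3: refine the output conditioned on the feedback.
        output = llm(
            f"Task input:\n{task_input}\n\nDraft answer:\n{output}\n\n"
            f"Feedback:\n{feedback}\n\nImproved answer:"
        )

    return output
```

In the paper, the refinement prompt also carries the history of earlier outputs and feedback, and iteration stops either after a fixed budget or when the feedback indicates that no further refinement is needed.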

Methods

This paper employs the following methods:

  • Iterative Refinement
  • Self-Feedback
  • Few-Shot Prompting
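
The few-shot prompting entry refers to how each step (generation, feedback, refinement) is specified to the model: a small set of demonstrations is prepended to the query rather than training the model. Below is a minimal sketch of assembling such a prompt for the feedback step; the demonstration content is an invented placeholder, not taken from the paper.

```python
# Hypothetical few-shot prompt assembly for the feedback step; the
# demonstration below is an invented placeholder, not the paper's prompt.

FEEDBACK_DEMOS = [
    {
        "input": "Write a friendly reply to: 'My order arrived late.'",
        "draft": "Sorry.",
        "feedback": "Too terse; acknowledge the delay and offer to help.",
    },
]

def build_feedback_prompt(task_input: str, draft: str) -> str:
    """Prepend few-shot demonstrations to the current query."""
    parts = [
        f"Input: {d['input']}\nDraft: {d['draft']}\nFeedback: {d['feedback']}\n"
        for d in FEEDBACK_DEMOS
    ]
    # The model completes the final 'Feedback:' line for the current draft.
    parts.append(f"Input: {task_input}\nDraft: {draft}\nFeedback:")
    return "\n".join(parts)
```

Analogous demonstration-based prompts can be built for the generation and refinement steps, matching the single-model setup described in the summary.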

Models Used

  • GPT-3.5
  • GPT-4
  • Codex

Datasets

The research used datasets covering the following seven tasks:

  • Dialog Response Generation
  • Code Optimization
  • Code Readability Improvement
  • Math Reasoning
  • Sentiment Reversal
  • Acronym Generation
  • Constrained Generation

Evaluation Metrics

  • Human-pref
  • GPT-4-pref
  • Task-specific metrics

Results

  • SELF-REFINE improves task performance by ∼20% absolute on average across the evaluated tasks.
  • Both human judges and automatic metrics prefer SELF-REFINE outputs over those from conventional one-step generation with the same LLM.

Limitations

The authors identified the following limitations:

  • Not specified

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

Large Language Models, Self-Refinement, Iterative Feedback, GPT-3.5, GPT-4, Code Optimization, Math Reasoning, Dialogue Generation

Papers Using Similar Methods

External Resources