
Reflexion: Language Agents with Verbal Reinforcement Learning

Noah Shinn (Northeastern University), Federico Cassano (Northeastern University), Edward Berman (Northeastern University), Ashwin Gopinath (Massachusetts Institute of Technology), Karthik Narasimhan (Princeton University), Shunyu Yao (Princeton University) (2023)

Paper Information
  • arXiv ID: 2303.11366
  • Venue: Neural Information Processing Systems
  • Domain: artificial intelligence, machine learning
  • SOTA Claim: Yes
  • Code: https://github.com/noahshinn024/reflexion
  • Reproducibility: 8/10

Abstract

Large language models (LLMs) have been increasingly used to interact with external environments (e.g., games, compilers, APIs) as goal-driven agents. However, it remains challenging for these language agents to quickly and efficiently learn from trial-and-error, as traditional reinforcement learning methods require extensive training samples and expensive model fine-tuning. We propose Reflexion, a novel framework to reinforce language agents not by updating weights, but instead through linguistic feedback. Concretely, Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials. Reflexion is flexible enough to incorporate various types (scalar values or free-form language) and sources (external or internally simulated) of feedback signals, and obtains significant improvements over a baseline agent across diverse tasks (sequential decision-making, coding, language reasoning). For example, Reflexion achieves a 91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 that achieves 80%. We also conduct ablation and analysis studies using different feedback signals, feedback incorporation methods, and agent types, and provide insights into how they affect performance. We release all code, demos, and datasets at https://github.com/noahshinn024/reflexion.

Summary

Reflexion is proposed as a novel framework for enhancing language agents through verbal reinforcement learning. Unlike traditional reinforcement learning, which adjusts model weights, Reflexion agents verbally reflect on task feedback signals and store this reflective text in an episodic memory buffer that conditions decision-making in subsequent trials. This improves performance across sequential decision-making, coding, and language reasoning tasks. The framework supports different types and sources of feedback, yielding significant gains over baseline agents. For instance, Reflexion achieves 91% pass@1 accuracy on the HumanEval coding benchmark, surpassing the previous state-of-the-art GPT-4 result of 80%. Ablation and analysis studies of different feedback signals and incorporation methods provide insights into what drives the improvements. The paper also introduces LeetcodeHardGym, a new coding benchmark. Overall, Reflexion demonstrates the effectiveness of self-reflection for learning complex tasks in fewer trials, although it has limitations regarding credit assignment and reliance on the capabilities of the underlying LLM.
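To make the trial-and-reflect loop concrete, below is a minimal Python sketch of a Reflexion-style agent loop under stated assumptions: `llm(prompt) -> str` is any language-model call, and `run_episode` / `evaluate` are hypothetical stand-ins for a task environment and its feedback signal (e.g., unit tests or environment reward). None of these names come from the paper's released code; only the actor / evaluator / self-reflection / episodic-memory structure follows the framework described above.

```python
# Minimal sketch of a Reflexion-style trial loop (not the authors' implementation).
# Assumptions: `llm`, `run_episode`, and `evaluate` are caller-supplied callables;
# the task supplies a trajectory and a (success, feedback) pair per attempt.

def reflexion_loop(llm, run_episode, evaluate, task, max_trials=5, memory_size=3):
    memory = []  # episodic buffer of verbal self-reflections
    trajectory = None
    for _ in range(max_trials):
        # Actor: attempt the task, conditioned on the most recent reflections.
        trajectory = run_episode(llm, task, reflections=memory[-memory_size:])

        # Evaluator: scalar or free-form feedback (unit tests, env reward, heuristics).
        success, feedback = evaluate(task, trajectory)
        if success:
            break

        # Self-reflection: turn the feedback into a verbal lesson for the next trial.
        reflection = llm(
            "You attempted the task and failed.\n"
            f"Trajectory:\n{trajectory}\n"
            f"Feedback: {feedback}\n"
            "In a few sentences, diagnose what went wrong and state a plan to improve."
        )
        memory.append(reflection)
    return trajectory, memory
```

The key point is that learning happens through the growing memory of reflections re-entering the actor's prompt, not through any weight update.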

Methods

This paper employs the following methods:

  • Reinforcement Learning
  • Self-Reflection

Models Used

  • GPT-4
  • Chain of Thought (prompting method)
  • ReAct (agent prompting method)

Datasets

The following datasets were used in this research:

  • HumanEval
  • AlfWorld
  • HotPotQA
  • LeetcodeHardGym

Evaluation Metrics

  • pass@1
  • Accuracy
  • F1-score
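For reference, pass@1 on HumanEval is typically computed with the standard unbiased pass@k estimator: generate n samples per problem, count the c that pass all unit tests, and average over problems. The snippet below is a small sketch of that metric, not code from the paper.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of the probability that at least one of k samples,
    drawn from n generations of which c are correct, passes the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Benchmark score is the mean over problems; with n = k = 1 this reduces to the
# fraction of problems solved on the first attempt.
per_problem = [pass_at_k(n=1, c=1, k=1), pass_at_k(n=1, c=0, k=1), pass_at_k(n=1, c=1, k=1)]
print(sum(per_problem) / len(per_problem))  # ~0.67
```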

Results

  • 91% pass@1 accuracy on HumanEval
  • 22% improvement in AlfWorld tasks
  • 20% improvement on HotPotQA
  • 11% improvement in Python programming on HumanEval

Limitations

The authors identified the following limitations:

  • Relies on the underlying LLM's ability to self-evaluate and produce useful reflections
  • Credit assignment over long trajectories remains imprecise

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

language agents, verbal reinforcement learning, self-reflection, large language models, adaptive agents

External Resources

  • Official code repository: https://github.com/noahshinn024/reflexion