Aman Madaan [email protected] (Language Technologies Institute, Carnegie Mellon University), Niket Tandon [email protected] (Allen Institute for Artificial Intelligence), Prakhar Gupta (Language Technologies Institute, Carnegie Mellon University), Skyler Hallinan (University of Washington), Luyu Gao (Language Technologies Institute, Carnegie Mellon University), Sarah Wiegreffe (Allen Institute for Artificial Intelligence), Uri Alon (Language Technologies Institute, Carnegie Mellon University), Nouha Dziri (Allen Institute for Artificial Intelligence), Shrimai Prabhumoye (NVIDIA), Yiming Yang (Language Technologies Institute, Carnegie Mellon University), Shashank Gupta (Allen Institute for Artificial Intelligence), Bodhisattwa Prasad Majumder (UC San Diego), Katherine Hermann (Google Research, Brain Team), Sean Welleck (Allen Institute for Artificial Intelligence; University of Washington), Amir Yazdanbakhsh (Google Research, Brain Team), Peter Clark (Allen Institute for Artificial Intelligence) (2023)
The paper introduces SELF-REFINE, an approach for improving outputs from large language models (LLMs) through iterative feedback and refinement, mirroring the way humans revise their own writing. The method requires no supervised training data, additional training, or external feedback: a single LLM generates an initial output, provides feedback on that output, and then refines it in light of the feedback, iterating until a stopping condition is met. Evaluated across seven diverse tasks with state-of-the-art LLMs, including GPT-3.5 and GPT-4, SELF-REFINE improves over conventional one-step generation by roughly 20% absolute on average, demonstrating the potential of iterative self-feedback for enhancing LLM capabilities.
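To make the generate-feedback-refine loop concrete, here is a minimal Python sketch of the procedure as summarized above. The `call_llm` function, the prompt strings, and the stop phrase are all illustrative assumptions rather than the paper's actual prompts; in the paper, the generation, feedback, and refinement steps are each implemented as few-shot prompts to the same underlying model.

```python
# A minimal sketch of the SELF-REFINE loop described above.
# `call_llm` is a hypothetical placeholder for a call to the underlying
# LLM (e.g., GPT-3.5 or GPT-4); it is not an API from the paper.

def call_llm(prompt: str) -> str:
    """Placeholder: send a prompt to the LLM and return its completion."""
    raise NotImplementedError

def self_refine(task_input: str, max_iterations: int = 4) -> str:
    # Step 1: initial generation from the task input alone.
    output = call_llm(f"Task: {task_input}\nAnswer:")
    for _ in range(max_iterations):
        # Step 2: the same model critiques its own output.
        fb = call_llm(
            f"Task: {task_input}\nAnswer: {output}\n"
            "Give actionable feedback on this answer:"
        )
        # Stopping condition: the feedback signals that no further
        # refinement is needed. (The exact stop phrase used here is an
        # assumption; the paper lets the model emit a stop signal.)
        if "no further improvement" in fb.lower():
            break
        # Step 3: refine the output given the input, the previous
        # output, and the feedback, then repeat.
        output = call_llm(
            f"Task: {task_input}\nPrevious answer: {output}\n"
            f"Feedback: {fb}\nImproved answer:"
        )
    return output
```

Note that a fuller implementation would also append the history of earlier outputs and feedback to the refiner's prompt, which the paper reports helps the model avoid repeating past mistakes.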