G-EVAL: NLG Evaluation using GPT-4 with Better Human Alignment

Yang Liu Microsoft Azure AI, Dan Iter Microsoft Azure AI, Yichong Xu Microsoft Azure AI, Shuohang Wang Microsoft Azure AI, Ruochen Xu Microsoft Azure AI, Chenguang Zhu Microsoft Azure AI (2023)

Paper Information

arXiv ID: 2303.16634
Venue: Conference on Empirical Methods in Natural Language Processing
Domain: Not specified
SOTA Claim: Yes
Reproducibility: 4/10

Abstract

The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-EVAL, a framework of using large language models with chain-of-thoughts (CoT) and a form-filling paradigm, to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-EVAL with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human on summarization task, outperforming all previous methods by a large margin. We also propose analysis on the behavior of LLM-based evaluators, and highlight the potential concern of LLM-based evaluators having a bias towards the LLM-generated texts.

Summary

The paper presents G-EVAL, a framework for evaluating natural language generation (NLG) systems with GPT-4 using a chain-of-thoughts (CoT) approach. Traditional metrics such as BLEU and ROUGE are criticized for their low correlation with human judgments. G-EVAL aims to provide a more reliable evaluation by having the LLM first generate detailed evaluation steps (automatic CoT) and then fill in a scoring form based on those steps. Experiments on text summarization and dialogue generation show that G-EVAL achieves a Spearman correlation of 0.514 with human evaluations on summarization, significantly outperforming previous methods. The study also analyzes potential biases of LLM-based evaluators, in particular a tendency to favor LLM-generated texts.
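The two-stage prompting flow is straightforward to reproduce. Below is a minimal sketch, assuming the OpenAI Python client; the task description, criterion text, and prompt wording are illustrative placeholders rather than the exact prompts used in the paper.

```python
# Minimal sketch of the G-EVAL prompting flow:
# (1) auto chain-of-thought: ask the LLM to expand a task description and an
#     evaluation criterion into detailed evaluation steps;
# (2) form-filling: append those steps plus the input/output pair and ask for
#     a score on the criterion's scale.
# Prompt wording, criterion text, and the model name are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TASK = "You will be given one summary written for a news article."
CRITERION = "Coherence (1-5): the collective quality of all sentences."


def generate_cot_steps(model: str = "gpt-4") -> str:
    """Stage 1: have the LLM write numbered evaluation steps for the criterion."""
    prompt = (
        f"{TASK}\n\nEvaluation criterion:\n{CRITERION}\n\n"
        "Write a numbered list of evaluation steps for this criterion."
    )
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content


def score_output(source: str, summary: str, steps: str, model: str = "gpt-4") -> str:
    """Stage 2: form-filling prompt that asks only for the rating."""
    prompt = (
        f"{TASK}\n\nEvaluation criterion:\n{CRITERION}\n\n"
        f"Evaluation steps:\n{steps}\n\n"
        f"Source text:\n{source}\n\nSummary:\n{summary}\n\n"
        "Evaluation form (scores only):\n- Coherence:"
    )
    resp = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```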

Methods

This paper employs the following methods:

  • Chain-of-Thoughts
  • Form-Filling Paradigm
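The form-filling stage listed above yields a discrete rating (e.g., 1-5). To obtain a finer-grained continuous score, the paper weights candidate ratings by their output probabilities; when token probabilities are unavailable, as with GPT-4 at the time, it approximates them by sampling the rating repeatedly and averaging. A hedged sketch of that estimate follows, where `sample_rating` is a hypothetical helper that runs the form-filling prompt once at temperature 1 and returns the raw reply:

```python
# Hedged sketch: estimate the probability-weighted score sum_i p(s_i) * s_i
# from repeated samples of the form-filling prompt. `sample_rating` is a
# hypothetical callable (e.g., a temperature-1 call to score_output above).
import re
from collections import Counter
from typing import Callable, Optional


def parse_rating(text: str) -> Optional[int]:
    """Pull the first 1-5 digit out of the model's reply, if any."""
    match = re.search(r"[1-5]", text)
    return int(match.group()) if match else None


def expected_score(sample_rating: Callable[[], str], n: int = 20) -> float:
    """Average over n sampled ratings to approximate the expected score."""
    ratings = [parse_rating(sample_rating()) for _ in range(n)]
    counts = Counter(r for r in ratings if r is not None)
    total = sum(counts.values())
    return sum(score * count / total for score, count in counts.items())
```

The resulting non-integer scores are what allow G-EVAL to capture subtle quality differences between outputs rather than collapsing many of them onto the same integer rating.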

Models Used

  • GPT-4
  • GPT-3.5

Datasets

The following datasets were used in this research:

  • SummEval
  • Topical-Chat
  • QAGS

Evaluation Metrics

  • Spearman correlation with human judgments
  • Kendall-Tau correlation with human judgments
  • Coherence (1-5 human ratings, SummEval)
  • Engagingness (1-3 human ratings, Topical-Chat)
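Both correlation measures are standard and easy to reproduce. A minimal sketch of the summary-level computation with scipy, using placeholder score lists, is shown below:

```python
# Hedged sketch: correlation between metric scores and human ratings using
# scipy's standard implementations. The two lists are illustrative placeholders,
# one entry per evaluated output, aligned by index.
from scipy.stats import spearmanr, kendalltau

metric_scores = [3.8, 2.1, 4.5, 1.9]   # e.g., G-EVAL coherence scores
human_scores  = [4.0, 2.0, 5.0, 3.0]   # e.g., averaged annotator coherence ratings

rho, _ = spearmanr(metric_scores, human_scores)
tau, _ = kendalltau(metric_scores, human_scores)
print(f"Spearman: {rho:.3f}  Kendall-Tau: {tau:.3f}")
```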

Results

  • G-EVAL outperforms reference-based and reference-free baseline metrics, achieving a Spearman correlation of 0.514 with human evaluations on summarization.
  • G-EVAL can differentiate between LLM-generated and human-written texts effectively.

Limitations

The authors identified the following limitations:

  • Potential bias of LLM evaluators towards LLM-generated texts.
  • Dependency on the availability of LLM resources.
  • Need for adaptability to new NLG tasks with flexible criteria.

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified
