
CHATGPT OUTPERFORMS CROWD-WORKERS FOR TEXT-ANNOTATION TASKS

Fabrizio Gilardi, Meysam Alizadeh, Maël Kubli (University of Zurich, Zurich, Switzerland), 2023

Paper Information
arXiv ID: 2303.15056
Venue: Proceedings of the National Academy of Sciences of the United States of America
Domain: Computer Language Models
SOTA Claim: Yes
Reproducibility: 8/10

Abstract

Many NLP applications require manual text annotations for a variety of tasks, notably to train classifiers or evaluate the performance of unsupervised models. Depending on the size and degree of complexity, the tasks may be conducted by crowd-workers on platforms such as MTurk as well as trained annotators, such as research assistants. Using four samples of tweets and news articles (n = 6,183), we show that ChatGPT outperforms crowd-workers for several annotation tasks, including relevance, stance, topics, and frame detection. Across the four datasets, the zero-shot accuracy of ChatGPT exceeds that of crowd-workers by about 25 percentage points on average, while ChatGPT's intercoder agreement exceeds that of both crowd-workers and trained annotators for all tasks. Moreover, the per-annotation cost of ChatGPT is less than $0.003, about thirty times cheaper than MTurk. These results demonstrate the potential of large language models to drastically increase the efficiency of text classification.

Summary

This paper presents a systematic evaluation of the performance of ChatGPT compared to crowd workers for text annotation tasks. Using four datasets consisting of tweets and news articles (n = 6,183), the authors demonstrate that ChatGPT outperforms crowd workers in terms of zero-shot accuracy and intercoder agreement across several tasks including relevance, stance detection, topics, and frame detection. ChatGPT's accuracy surpasses that of crowd workers by an average of 25 percentage points while being significantly cheaper. The analysis highlights the potential for large language models to enhance text annotation processes and suggests avenues for further research, particularly regarding performance across languages and the implementation of few-shot learning.

Methods

This paper employs the following methods (a minimal prompting sketch for the zero-shot setup follows the list):

  • Zero-shot classification
  • Intercoder agreement evaluation
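
A minimal sketch of the zero-shot annotation setup, assuming the OpenAI Python client (v1+) and a gpt-3.5-turbo model. The prompt wording, the `annotate` helper, and the relevance label set are illustrative placeholders, not the authors' exact prompts.

```python
# Minimal zero-shot annotation sketch (illustrative, not the authors' exact prompts).
# Assumes the OpenAI Python client v1+ and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

LABELS = ["relevant", "irrelevant"]  # example label set for a relevance task

def annotate(text: str, temperature: float = 0.2) -> str:
    """Ask the model to assign exactly one label to a single text, zero-shot."""
    prompt = (
        "You will classify a tweet for a content-moderation study.\n"
        f"Tweet: {text}\n"
        f"Answer with exactly one of: {', '.join(LABELS)}."
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    return response.choices[0].message.content.strip().lower()

print(annotate("Platforms should remove posts that spread medical misinformation."))
```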

Models Used

  • ChatGPT

Datasets

The following datasets were used in this research:

  • Four purpose-built samples of tweets and news articles (n = 6,183 texts in total); no standard benchmark datasets are named

Evaluation Metrics

  • Accuracy
  • Intercoder agreement (both metrics are sketched below)
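
Both metrics reduce to simple proportions. The sketch below assumes accuracy is measured against trained-annotator gold labels and intercoder agreement is computed as percent agreement between two independent annotation runs; function names and the toy labels are illustrative.

```python
# Toy illustration of the two metrics: accuracy against gold labels and
# percent agreement between two independent annotation runs.
from typing import Sequence

def accuracy(predictions: Sequence[str], gold: Sequence[str]) -> float:
    """Fraction of predictions that match the gold (trained-annotator) labels."""
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

def percent_agreement(run_a: Sequence[str], run_b: Sequence[str]) -> float:
    """Share of items labeled identically by two annotators or two model runs."""
    return sum(a == b for a, b in zip(run_a, run_b)) / len(run_a)

gold = ["relevant", "irrelevant", "relevant", "relevant"]
run1 = ["relevant", "irrelevant", "irrelevant", "relevant"]
run2 = ["relevant", "irrelevant", "relevant", "relevant"]
print(accuracy(run1, gold))           # 0.75
print(percent_agreement(run1, run2))  # 0.75
```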

Results

  • ChatGPT's zero-shot accuracy exceeds that of crowd workers by about 25 percentage points on average
  • ChatGPT demonstrates higher intercoder agreement compared to both crowd workers and trained annotators
  • ChatGPT's per-annotation cost is about $0.003, approximately thirty times cheaper than MTurk (a rough cost check follows)
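
A back-of-the-envelope check of the reported cost figures. The implied MTurk price is derived from the stated ~30x ratio rather than reported directly here, and treating each of the 6,183 texts as a single annotation is a simplification.

```python
# Rough cost check based only on the figures reported above.
chatgpt_cost = 0.003   # USD per annotation, as reported
cost_ratio = 30        # ChatGPT is reported to be ~30x cheaper than MTurk
n_texts = 6_183        # combined size of the four samples

implied_mturk_cost = chatgpt_cost * cost_ratio
print(f"Implied MTurk cost per annotation: ${implied_mturk_cost:.2f}")          # ~$0.09
print(f"ChatGPT cost to label every text once: ${chatgpt_cost * n_texts:.2f}")  # ~$18.55
```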

Limitations

The authors identified the following limitations:

  • Not explicitly listed; the paper's own discussion points to open questions about performance in languages other than English and about few-shot prompting

Technical Requirements

  • Number of GPUs: None specified
  • GPU Type: None specified

Keywords

Large language models, ChatGPT, Text annotation, Crowd-workers, Reproducibility, NLP tasks
