
Toolformer: Language Models Can Teach Themselves to Use Tools

Timo Schick, Jane Dwivedi-Yu, Roberto Dessì (also Universitat Pompeu Fabra), Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, Thomas Scialom. Meta AI Research (2023)

Paper Information
arXiv ID
2302.04761
Venue
Neural Information Processing Systems
Domain
natural language processing
SOTA Claim
Yes
Reproducibility
7/10

Abstract

Language models (LMs) exhibit remarkable abilities to solve new tasks from just a few examples or textual instructions, especially at scale. They also, paradoxically, struggle with basic functionality, such as arithmetic or factual lookup, where much simpler and smaller models excel. In this paper, we show that LMs can teach themselves to use external tools via simple APIs and achieve the best of both worlds. We introduce Toolformer, a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. This is done in a self-supervised way, requiring nothing more than a handful of demonstrations for each API. We incorporate a range of tools, including a calculator, a Q&A system, a search engine, a translation system, and a calendar. Toolformer achieves substantially improved zero-shot performance across a variety of downstream tasks, often competitive with much larger models, without sacrificing its core language modeling abilities.
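The interface the abstract describes is purely textual: each call is linearized as "[Tool(input) → result]" and spliced into the token stream, so no change to the model architecture is needed. A minimal sketch of that format; the helper below is illustrative, not from the paper's (unreleased) code:

```python
# Sketch of Toolformer's textual API-call format: each call is linearized
# as "[Tool(input) → result]" and embedded directly in the running text.
# format_api_call is a hypothetical helper, not from the paper.

def format_api_call(tool: str, query: str, result: str | None = None) -> str:
    """Linearize an API call the way Toolformer embeds it in text."""
    if result is None:
        return f"[{tool}({query})]"          # a call whose response is dropped
    return f"[{tool}({query}) → {result}]"   # a call followed by its response

# Example in the spirit of the paper's Figure 1:
text = (
    "Out of 1400 participants, 400 (or "
    + format_api_call("Calculator", "400 / 1400", "0.29")
    + " 29%) passed the test."
)
print(text)
```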

Summary

The paper introduces Toolformer, an approach that enables language models (LMs) to teach themselves, in a self-supervised way, to use external tools via simple APIs without losing core language modeling performance. Toolformer lets an LM call APIs for functionality such as a question-answering system, a calculator, a machine translation system, a search engine, and a calendar, improving performance on downstream tasks while preserving generality. Experiments show that Toolformer significantly outperforms standard LMs and even much larger models such as GPT-3 on zero-shot performance across tasks such as LAMA, math reasoning, and question answering. The method augments a plain-text corpus with API calls sampled via in-context learning and keeps only the calls that prove useful for predicting future tokens, so the finetuned LM learns on its own when and how to use each tool.
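The filtering step the summary refers to compares language-modeling losses: a sampled call is kept only if prefixing both the call and its response lowers the model's loss over the following tokens by at least a threshold tau, relative to making no call or discarding the response. A hedged sketch, with `lm_loss` standing in for the paper's weighted cross-entropy (assumed signature):

```python
# Sketch of Toolformer's self-supervised filtering rule. An API call made at
# position i of text x is kept only if prefixing the call *and* its result
# reduces the LM's loss over the remaining tokens by at least tau, compared
# with making no call or a call whose result is discarded.
from typing import Callable

def keep_api_call(
    lm_loss: Callable[[str, str], float],  # lm_loss(prefix, continuation) -> loss; assumed interface
    prefix: str,                           # tokens x_1..x_{i-1} before the call site
    continuation: str,                     # tokens x_i..x_n after the call site
    call_with_result: str,                 # e.g. '[QA("Who wrote X?") → Y] '
    call_without_result: str,              # e.g. '[QA("Who wrote X?")] '
    tau: float = 1.0,
) -> bool:
    loss_plus = lm_loss(prefix + call_with_result, continuation)  # L_i^+
    loss_minus = min(                                             # L_i^-
        lm_loss(prefix, continuation),                        # no API call at all
        lm_loss(prefix + call_without_result, continuation),  # call, result dropped
    )
    return loss_minus - loss_plus >= tau
```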

Methods

This paper employs the following methods:

  • Self-supervised learning
  • API call-based augmentation (see the prompt sketch after this list)
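The API call-based augmentation starts from a per-tool few-shot prompt that asks the frozen LM to annotate raw text with candidate calls; positions and call contents are then sampled from the model's own distribution. A sketch of such a prompt, paraphrased from the paper's question-answering example (the exact wording in the paper may differ):

```python
# Sketch of the per-tool few-shot prompt used to sample candidate API calls
# (paraphrased from the paper's question-answering prompt).
QA_PROMPT = """\
Your task is to add calls to a Question Answering API to a piece of text.
The questions should help you get information required to complete the text.
You can call the API by writing "[QA(question)]" where "question" is the
question you want to ask. Here are some examples:

Input: Joe Biden was born in Scranton, Pennsylvania.
Output: Joe Biden was born in [QA("Where was Joe Biden born?")] Scranton,
[QA("In which state is Scranton?")] Pennsylvania.

Input: {text}
Output:"""

def build_sampling_prompt(text: str) -> str:
    """Wrap a raw corpus passage in the few-shot prompt before sampling."""
    return QA_PROMPT.format(text=text)
```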

Models Used

  • GPT-J
  • GPT-3

Datasets

The following datasets were used in this research:

  • CCNet
  • SQuAD
  • Google-RE
  • T-REx
  • ASDiv
  • SVAMP
  • MAWPS
  • WebQS
  • Natural Questions
  • TriviaQA
  • MLQA
  • TEMPLAMA
  • DATESET

Evaluation Metrics

  • Zero-shot performance
  • Perplexity (defined in the sketch below)
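Perplexity, used here to verify that finetuning on API-augmented data does not degrade core language modeling ability, is the exponential of the mean per-token negative log-likelihood. A minimal reference implementation:

```python
# Minimal sketch of the perplexity metric: the exponential of the mean
# per-token negative log-likelihood under the model.
import math

def perplexity(token_log_probs: list[float]) -> float:
    """token_log_probs: natural-log probabilities the LM assigns each token."""
    mean_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(mean_nll)

# Example: a model assigning probability 0.25 to each of four tokens
# has perplexity 4.
print(perplexity([math.log(0.25)] * 4))  # -> 4.0
```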

Results

  • Outperforms the much larger GPT-3 in zero-shot settings on several tasks
  • Achieves strong zero-shot performance on LAMA, math reasoning, and QA benchmarks

Limitations

The authors identified the following limitations:

  • Tools cannot be used in a chain (the output of one tool as the input to another)
  • Tools cannot be used interactively (e.g., browsing or refining search results)
  • Decisions on whether to call an API are sensitive to the exact wording of the input
  • The method is sample-inefficient for some tools (e.g., the calculator)
  • The tool-dependent computational cost of API calls is not taken into account

Technical Requirements

  • Number of GPUs: 8
  • GPU Type: NVIDIA A100 40GB

Keywords

language models, tool use, self-supervised learning, API calls, zero-shot learning
