
Sequence to Sequence Learning with Neural Networks

Ilya Sutskever, Oriol Vinyals, Quoc V. Le (Google, 2014)

Paper Information
arXiv ID
1409.3215
Venue
Neural Information Processing Systems
Domain
Natural language processing
SOTA Claim
Yes
Reproducibility
7/10

Abstract

Deep Neural Networks (DNNs) are powerful models that have achieved excellent performance on difficult learning tasks. Although DNNs work well whenever large labeled training sets are available, they cannot be used to map sequences to sequences. In this paper, we present a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure. Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector. Our main result is that on an English to French translation task from the WMT-14 dataset, the translations produced by the LSTM achieve a BLEU score of 34.8 on the entire test set, where the LSTM's BLEU score was penalized on out-of-vocabulary words. Additionally, the LSTM did not have difficulty on long sentences. For comparison, a phrase-based SMT system achieves a BLEU score of 33.3 on the same dataset. When we used the LSTM to rerank the 1000 hypotheses produced by the aforementioned SMT system, its BLEU score increases to 36.5, which is close to the previous state of the art. The LSTM also learned sensible phrase and sentence representations that are sensitive to word order and are relatively invariant to the active and the passive voice. Finally, we found that reversing the order of the words in all source sentences (but not target sentences) improved the LSTM's performance markedly, because doing so introduced many short term dependencies between the source and the target sentence which made the optimization problem easier.
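The source-reversal trick highlighted in the abstract is a pure preprocessing step: each source sentence is reversed while the target is left untouched, so the first words of the source end up adjacent to the first words of the target. Below is a minimal Python sketch; the sample sentence pair and the helper name are illustrative, not taken from the paper.

```python
def reverse_source(pairs):
    """Reverse the word order of each source sentence; leave targets unchanged.

    `pairs` is a list of (source_tokens, target_tokens) tuples. Reversing only
    the source introduces the short-term source-target dependencies that the
    paper credits with making the optimization problem easier.
    """
    return [(list(reversed(src)), tgt) for src, tgt in pairs]

# Toy English-to-French pair: the model would be trained on the reversed source.
pairs = [("i am a student".split(), "je suis étudiant".split())]
print(reverse_source(pairs))
# [(['student', 'a', 'am', 'i'], ['je', 'suis', 'étudiant'])]
```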

Summary

This paper presents a method for sequence-to-sequence learning with Long Short-Term Memory (LSTM) neural networks. It proposes an end-to-end architecture in which a multilayered LSTM first maps the input sequence to a fixed-dimensional vector, and a second deep LSTM then decodes the target sequence from that representation. The main focus is machine translation, specifically the English-to-French task from the WMT'14 dataset. The model achieves a BLEU score of 34.8 on the full test set, surpassing a phrase-based SMT baseline at 33.3, and reaches 36.5 when used to rerank that baseline's 1000-best hypotheses. The authors also observe that reversing the word order of the source sentences (but not the targets) markedly improves performance by introducing short-term dependencies between source and target words. The results indicate that LSTMs can handle long sentences and complex sequence-to-sequence mappings that traditionally confound standard RNN approaches.
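A minimal PyTorch sketch of this encoder-decoder setup, shown with teacher forcing for training: the encoder's final hidden and cell states serve as the fixed-dimensional sentence representation that initializes the decoder. The layer and embedding sizes below are illustrative assumptions; the paper itself uses 4-layer LSTMs with 1000 cells per layer and decodes with a beam search.

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    """Encoder-decoder LSTM: the encoder's final (hidden, cell) state is the
    fixed-size sentence vector from which the decoder generates the target."""

    def __init__(self, src_vocab, tgt_vocab, emb=256, hidden=512, layers=2):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb)
        self.encoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.decoder = nn.LSTM(emb, hidden, layers, batch_first=True)
        self.out = nn.Linear(hidden, tgt_vocab)

    def forward(self, src_ids, tgt_ids):
        # Encode the (already reversed) source sequence into a fixed-size state.
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode the target sequence conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-step logits over the target vocabulary

model = Seq2Seq(src_vocab=10000, tgt_vocab=10000)
logits = model(torch.randint(0, 10000, (4, 12)), torch.randint(0, 10000, (4, 15)))
print(logits.shape)  # torch.Size([4, 15, 10000])
```

At inference time the decoder is instead unrolled step by step from a begin-of-sentence token (greedily or with a beam), rather than being fed the reference target as in this training-style forward pass.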

Methods

This paper employs the following methods:

  • LSTM

Models Used

  • LSTM

Datasets

The following datasets were used in this research:

  • WMT'14

Evaluation Metrics

  • BLEU (see the scoring example after this list)
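BLEU is a corpus-level n-gram precision metric with a brevity penalty, reported on a 0-100 scale. A small illustration using the sacrebleu package; the toy sentences are assumptions, and the paper's exact scoring script may differ from sacrebleu's defaults.

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["the cat sat on the mat", "he reads a book"]
references = [["the cat is sitting on the mat", "he is reading a book"]]

# Corpus-level BLEU over all hypothesis/reference pairs.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}")
```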

Results

  • Achieved a BLEU score of 34.8 on the WMT'14 dataset for English to French translation
  • Improved the BLEU score to 36.5 by reranking the 1000-best hypotheses produced by a phrase-based SMT system (rescoring sketched below)
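The 36.5 result treats the seq2seq model purely as a rescorer: each hypothesis in the SMT system's 1000-best list is assigned the LSTM's log-probability, combined with the SMT score, and the list is re-sorted. A minimal sketch, assuming a caller-supplied `lstm_logprob(source, hypothesis)` scorer; the equal-weight interpolation is a common recipe and only approximates the paper's exact setup.

```python
def rerank(source, nbest, lstm_logprob, weight=0.5):
    """Rerank an n-best list of (hypothesis, smt_score) pairs.

    Each hypothesis gets a combined score: an interpolation of the SMT system's
    model score and the seq2seq model's log-probability of the hypothesis given
    the source sentence. The best-scoring hypothesis comes first.
    """
    rescored = [
        (weight * smt_score + (1 - weight) * lstm_logprob(source, hyp), hyp)
        for hyp, smt_score in nbest
    ]
    rescored.sort(key=lambda pair: pair[0], reverse=True)
    return [hyp for _, hyp in rescored]

# Toy usage with a stand-in scorer; a real scorer would run the trained LSTM.
nbest = [("le chat était assis sur le tapis", -7.2), ("le chat assis tapis", -6.8)]
print(rerank("the cat sat on the mat", nbest, lambda s, h: -0.4 * len(h.split())))
```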

Limitations

The authors identified the following limitations:

  • The fixed, limited training vocabulary introduces challenges with out-of-vocabulary words, which are replaced by a special UNK token and penalized in BLEU scoring (see the sketch after this list).
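Concretely, only the most frequent source and target words are kept, and everything else is mapped to an UNK symbol before training and decoding. A minimal preprocessing sketch; the vocabulary size and token names below are illustrative assumptions.

```python
from collections import Counter

def build_vocab(sentences, max_size=80000, unk="<unk>"):
    """Keep the `max_size` most frequent words; everything else becomes UNK."""
    counts = Counter(word for sent in sentences for word in sent)
    vocab = {unk: 0}
    for word, _ in counts.most_common(max_size - 1):
        vocab[word] = len(vocab)
    return vocab

def encode(sentence, vocab, unk="<unk>"):
    """Map tokens to ids, replacing out-of-vocabulary words with the UNK id."""
    return [vocab.get(word, vocab[unk]) for word in sentence]

vocab = build_vocab([["the", "cat", "sat"], ["the", "dog", "ran"]], max_size=4)
print(encode(["the", "zebra", "ran"], vocab))  # words outside the kept vocabulary map to the UNK id
```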

Technical Requirements

  • Number of GPUs: 8
  • GPU Type: None specified

Keywords

deep neural networks, LSTM, sequence-to-sequence learning, machine translation, recurrent neural networks

Papers Using Similar Methods

External Resources