StoryCloze

Dataset Information
Introduced: 2016
License: Unknown
Homepage:
Overview

Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. This issue is particularly challenging for understanding causal and correlational relationships between events. While this topic has received a lot of interest in the NLP community, research has been hindered by the lack of a proper evaluation framework. This paper attempts to address this problem with a new framework for evaluating story understanding and script learning: the 'Story Cloze Test'. This test requires a system to choose the correct ending to a four-sentence story. We created a new corpus of ~50k five-sentence commonsense stories, ROCStories, to enable this evaluation. This corpus is unique in two ways: (1) it captures a rich set of causal and temporal commonsense relations between daily events, and (2) it is a high-quality collection of everyday life stories that can also be used for story generation. Experimental evaluation shows that a host of baselines and state-of-the-art models based on shallow language understanding struggle to achieve a high score on the Story Cloze Test. We discuss the implications for script and story learning, and offer suggestions for deeper language understanding.
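The evaluation described above reduces to binary classification: given a four-sentence context and two candidate endings, a system scores each ending and picks the higher-scoring one. The sketch below illustrates that setup with a naive word-overlap scorer, which is exactly the kind of shallow heuristic the abstract says performs poorly on this test. The field names (`sentences`, `endings`, `label`) and the toy story are illustrative, not the official ROCStories schema.

```python
# Minimal sketch of Story Cloze Test evaluation.
# Field names below are illustrative, not the official corpus schema.
from typing import Callable, List


def evaluate_story_cloze(
    examples: List[dict],
    score_ending: Callable[[str, str], float],
) -> float:
    """Accuracy of picking the more plausible of two candidate endings."""
    correct = 0
    for ex in examples:
        context = " ".join(ex["sentences"])  # the four-sentence story
        scores = [score_ending(context, end) for end in ex["endings"]]
        predicted = scores.index(max(scores))
        correct += predicted == ex["label"]
    return correct / len(examples)


def overlap_score(context: str, ending: str) -> float:
    """Shallow word-overlap baseline: fraction of ending words seen in context."""
    context_words = set(context.lower().split())
    ending_words = set(ending.lower().split())
    return len(context_words & ending_words) / max(len(ending_words), 1)


example = {
    "sentences": [
        "Karen packed her bag for the beach.",
        "She drove for an hour to the coast.",
        "The sun was shining when she arrived.",
        "She spread her towel on the sand.",
    ],
    "endings": [
        "Karen enjoyed a relaxing day by the sea.",
        "Karen shoveled snow off her driveway.",
    ],
    "label": 0,  # index of the correct ending
}

# The overlap baseline actually prefers the wrong ending here ("her" and
# "karen" overlap with the context), illustrating why shallow lexical
# matching struggles on this test.
print(evaluate_story_cloze([example], overlap_score))  # → 0.0
```

A real submission would replace `overlap_score` with a model-based plausibility score (e.g. a language-model likelihood of the ending given the context); the evaluation loop stays the same.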

Variants: Story Cloze Test, StoryCloze

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task | Model | Paper | Date
Question Answering | T0-3B (CoT fine-tuned) | The CoT Collection: Improving Zero-shot … | 2023-05-23
Question Answering | RoE-3B | Exploring the Benefits of Training … | 2023-02-07
Question Answering | OPT-175B (50% Sparsity) | SparseGPT: Massive Language Models Can … | 2023-01-02
Question Answering | OPT-175B | SparseGPT: Massive Language Models Can … | 2023-01-02
Question Answering | SparseGPT (175B, 50% Sparsity) | SparseGPT: Massive Language Models Can … | 2023-01-02
Question Answering | SparseGPT (175B, 4:8 Sparsity) | SparseGPT: Massive Language Models Can … | 2023-01-02
Question Answering | SparseGPT (175B, 2:4 Sparsity) | SparseGPT: Massive Language Models Can … | 2023-01-02
Question Answering | BLOOMZ | Crosslingual Generalization through Multitask Finetuning | 2022-11-03
Question Answering | KiC-770M | Knowledge-in-Context: Towards Knowledgeable Semi-Parametric Language … | 2022-10-28
Question Answering | Flipped-3B | Guess the Instruction! Flipped Learning … | 2022-10-06
Question Answering | Base Layers 10B (0-shot) | Efficient Language Modeling with Sparse … | 2022-03-14
Question Answering | Switch Transformer 9B | Efficient Language Modeling with Sparse … | 2022-03-14
Question Answering | sMLP – deterministic 9.4B (0-shot) | Efficient Language Modeling with Sparse … | 2022-03-14
Question Answering | Gshard 9B | Efficient Language Modeling with Sparse … | 2022-03-14
Question Answering | HASH Layers 10B (0-shot) | Efficient Language Modeling with Sparse … | 2022-03-14
Question Answering | FLAN 137B (few-shot, k=10) | Finetuned Language Models Are Zero-Shot … | 2021-09-03
Question Answering | FLAN 137B (zero-shot) | Finetuned Language Models Are Zero-Shot … | 2021-09-03
Question Answering | GPT-3 Large 760M (zero-shot) | Language Models are Few-Shot Learners | 2020-05-28
Question Answering | Reading Strategies Model | Improving Machine Reading Comprehension with … | 2018-10-31
Question Answering | val-LS-skip | A Simple and Effective Approach … | 2018-03-15

Research Papers

Recent papers with results on this dataset: