PG-19 is an open-vocabulary language modelling benchmark derived from books published before 1919 and drawn from the Project Gutenberg repository.
Source: Compressive Transformers for Long-Range Sequence Modelling
Variants: PG-19
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Dialogue Generation | ∞-former (Sticky memories + initialized GPT-2 Small) | $\infty$-former: Infinite Memory Transformer | 2021-09-01 |