LexGLUE

Dataset Information
Modalities: Texts
Languages: English
Introduced: 2021
License: Unknown
Overview

The Legal General Language Understanding Evaluation (LexGLUE) benchmark is a collection of datasets for evaluating model performance across a diverse set of legal natural language understanding (NLU) tasks in a standardized way.
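As a minimal sketch of how the benchmark's tasks might be accessed programmatically, the snippet below lists LexGLUE's seven tasks (names taken from the LexGLUE paper) and wraps a Hugging Face `datasets` download; the Hub dataset name `lex_glue` and the config names are assumptions, not confirmed by this page.

```python
# Sketch only: task names follow the LexGLUE paper; the Hub dataset id
# "lex_glue" and per-task config names are assumptions.
LEXGLUE_TASKS = [
    "ecthr_a",     # ECtHR Task A: violated-article prediction
    "ecthr_b",     # ECtHR Task B: allegedly-violated-article prediction
    "scotus",      # US Supreme Court issue-area classification
    "eurlex",      # EU law multi-label classification
    "ledgar",      # contract provision classification
    "unfair_tos",  # unfair terms-of-service clause detection
    "case_hold",   # multiple-choice holding selection
]

def load_task(name: str):
    """Fetch one LexGLUE task split dict; needs `datasets` and network access."""
    if name not in LEXGLUE_TASKS:
        raise ValueError(f"unknown LexGLUE task: {name}")
    # Imported lazily so the task list is usable without the dependency.
    from datasets import load_dataset
    return load_dataset("lex_glue", name)
```

Usage would look like `load_task("scotus")["train"][0]`, returning a dict with the document text and its label.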

Image source: https://arxiv.org/pdf/2110.00976v1.pdf

Variants: LexGLUE

Associated Benchmarks

This dataset is used in 1 benchmark.

Recent Benchmark Submissions

Task | Model | Paper | Date
Natural Language Understanding | BERT | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | Legal-BERT | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | CaseLaw-BERT | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | BigBird | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | Longformer | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | RoBERTa | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | DeBERTa | LexGLUE: A Benchmark Dataset for … | 2021-10-03
Natural Language Understanding | Optimised SVM Baseline | The Unreasonable Effectiveness of the … | 2021-09-15

Research Papers

Recent papers with results on this dataset are listed on the benchmark page.