
GLUE

Natural language understanding benchmark (General Language Understanding Evaluation): nine sentence- and sentence-pair tasks whose scores are combined into a single averaged leaderboard metric.

Performance Over Time

Showing 2 results | Metric: Average
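
The Average metric used here is the standard GLUE score: an unweighted mean of the per-task scores, where tasks reported with two metrics (e.g. MRPC, STS-B) are themselves averaged first. Below is a minimal sketch of that aggregation; the task names match the GLUE suite, but the scores are illustrative placeholders, not the actual per-task numbers behind either leaderboard entry.

```python
# Minimal sketch of the GLUE "Average" aggregation.
# The per-task scores below are illustrative placeholders only.
from statistics import mean

task_scores = {
    "CoLA": 60.5,   # Matthews correlation
    "SST-2": 94.9,  # accuracy
    "MRPC": 89.3,   # F1 and accuracy already averaged
    "STS-B": 87.6,  # Pearson and Spearman already averaged
    "QQP": 72.1,
    "MNLI": 86.7,
    "QNLI": 92.7,
    "RTE": 70.1,
    "WNLI": 65.1,
}

# The leaderboard "Average" column is the unweighted mean over task scores.
glue_average = mean(task_scores.values())
print(f"GLUE Average: {glue_average:.2f}")
```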

Top Performing Models

| Rank | Model | Paper | Average | Date | Code |
|------|-------|-------|---------|------|------|
| 1 | MT-DNN-SMART | SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization | 89.90 | 2019-11-08 | namisan/mt-dnn, microsoft/MT-DNN, archinetai/smart-pytorch |
| 2 | BERT-LARGE | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding | 82.10 | 2018-10-11 | huggingface/transformers, tensorflow/models, labmlai/annotated_deep_learning_paper_implementations |
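
One way to reproduce a per-task score is via the huggingface/transformers code linked for BERT-LARGE. The sketch below scores a fine-tuned BERT checkpoint on a single GLUE task (SST-2) using the Hugging Face datasets, transformers, and evaluate libraries; the checkpoint name is an assumed example, and this covers only one of the nine tasks, not the full Average reported in the table.

```python
# Hedged sketch: evaluating a fine-tuned model on one GLUE task (SST-2).
# The checkpoint name is an assumed example; substitute any SST-2 model.
import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import evaluate

model_name = "textattack/bert-base-uncased-SST-2"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

dataset = load_dataset("glue", "sst2", split="validation")
metric = evaluate.load("glue", "sst2")

# Batch the validation set, predict, and accumulate the GLUE metric.
for start in range(0, len(dataset), 32):
    batch = dataset[start:start + 32]
    inputs = tokenizer(batch["sentence"], padding=True, truncation=True,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    metric.add_batch(predictions=logits.argmax(dim=-1),
                     references=batch["label"])

print(metric.compute())  # e.g. {'accuracy': ...}
```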
