| Rank | Model | Paper | F1 (%) | Date | Code |
|---|---|---|---|---|---|
| 1 | AlephBERTGimmel-base MTL | Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All | 80.39 | 2022-11-28 | - |