Showing 5 results (metric: F1)
| Rank | Model | Paper | F1 | Date | Code |
|---|---|---|---|---|---|
| 1 | VarMAE | VarMAE: Pre-training of Variational Masked Autoencoder for Domain-adaptive Language Understanding | 76.01 | 2022-11-01 | - |
| 2 | PubMedBERT (uncased) | Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing | 73.38 | 2020-07-31 | bionlu-coling2024/biomed-ner-intent_detection, rohanshad/cmr_transformer |
| 3 | SciBERT (SciVocab) | SciBERT: A Pretrained Language Model for Scientific Text | 71.18 | 2019-03-26 | allenai/scibert, charles9n/bert-sklearn, tetsu9923/scireviewgen |
| 4 | SciBERT (Base Vocab) | SciBERT: A Pretrained Language Model for Scientific Text | 70.82 | 2019-03-26 | allenai/scibert, charles9n/bert-sklearn, tetsu9923/scireviewgen |
| 5 | bi-LSTM | A Corpus with Multi-Level Annotations of Patients, Interventions and Outcomes to Support Language Processing for Medical Literature | 66.30 | 2018-06-11 | devkotasabin/EBM-NLP, jetsunwhitton/rct-art |