📊 Showing 3 results | 📏 Metric: Pearson Correlation
| Rank | Model | Paper | Pearson Correlation | Date | Code |
|---|---|---|---|---|---|
| 1 | BioLinkBERT (large) | LinkBERT: Pretraining Language Models with Document Links | 0.94 | 2022-03-29 | 📦 michiyasunaga/LinkBERT |
| 2 | BioLinkBERT (base) | LinkBERT: Pretraining Language Models with Document Links | 0.93 | 2022-03-29 | 📦 michiyasunaga/LinkBERT |
| 3 | NCBI_BERT(base) (P+M) | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | 0.92 | 2019-06-13 | 📦 ncbi-nlp/NCBI_BERT 📦 bigscience-workshop/biomedical 📦 ncbi-nlp/BLUE_Benchmark 📦 gmpoli/electramed |
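
The metric reported above is the Pearson correlation between each model's predicted sentence-similarity scores and the gold annotations. A minimal sketch of how such a number is typically computed, using `scipy.stats.pearsonr`; the score arrays below are hypothetical placeholders, not values from these models:

```python
from scipy.stats import pearsonr

# Hypothetical gold similarity labels and model predictions for five sentence pairs
gold_scores = [4.0, 1.5, 3.2, 0.8, 2.6]
pred_scores = [3.8, 1.9, 3.0, 1.1, 2.4]

# Pearson correlation measures how linearly the predictions track the gold labels
r, p_value = pearsonr(gold_scores, pred_scores)
print(f"Pearson correlation: {r:.2f} (p = {p_value:.3g})")
```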