| Model | Paper | Score | Date |
|---|---|---|---|
| gpt-4-0125-preview | GPT-4 Technical Report | 52.49 | 2023-03-15 |
| gpt-3.5-turbo | Language Models are Few-Shot Learners | 37.06 | 2020-05-28 |
| dmis-lab/biobert-v1.1 | BioBERT: a pre-trained biomedical language repres… | 26.15 | 2019-01-25 |
| meta-llama/Meta-Llama-3-8B-Instruct | LLaMA: Open and Efficient Foundation Language Mod… | 25.84 | 2023-02-27 |
| epfl-llm/meditron-7b | MEDITRON-70B: Scaling Medical Pretraining for Lar… | 25.75 | 2023-11-27 |
| dmis-lab/meerkat-7b-v1.0 | Small Language Models Learn Enhanced Reasoning Sk… | 25.68 | 2024-03-30 |
| HuggingFaceH4/zephyr-7b-beta | Zephyr: Direct Distillation of LM Alignment | 25.54 | 2023-10-25 |
| epfl-llm/meditron-70b | MEDITRON-70B: Scaling Medical Pretraining for Lar… | 25.36 | 2023-11-27 |
| yikuan8/Clinical-Longformer | Clinical-Longformer and Clinical-BigBird: Transfo… | 25.04 | 2022-01-27 |
| UFNLP/gatortron-medium | GatorTron: A Large Clinical Language Model to Unl… | 24.86 | 2022-02-02 |
| PharMolix/BioMedGPT-LM-7B | BioMedGPT: Open Multimodal Generative Pre-trained… | 24.75 | 2023-08-18 |
| BioMistral/BioMistral-7B-DARE | BioMistral: A Collection of Open-Source Pretraine… | 24.57 | 2024-02-15 |