ChemProt

Dataset Information
Modalities
Texts, Biomedical
License
Unknown

Overview

ChemProt consists of 1,820 PubMed abstracts with chemical–protein interactions annotated by domain experts; it served as the corpus for the BioCreative VI shared task on text-mining chemical–protein interactions.

Source: Peng et al.

Variants: ChemProt
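To make the relation-extraction task concrete, each ChemProt instance pairs a chemical mention and a protein (gene) mention in an abstract with a CPR relation group; the BioCreative VI evaluation scored five of these groups (CPR:3–6 and CPR:9). The sketch below is a hypothetical illustration — the field names and example sentence are invented, not the official corpus schema:

```python
# Hypothetical illustration of the ChemProt relation-extraction task.
# Field names and the example sentence are invented for clarity; the
# evaluated CPR relation groups are those scored in BioCreative VI.

EVALUATED_CPR_GROUPS = {
    "CPR:3": "upregulator/activator",
    "CPR:4": "downregulator/inhibitor",
    "CPR:5": "agonist",
    "CPR:6": "antagonist",
    "CPR:9": "substrate/product",
}

# One invented example: a chemical-protein pair with character spans
# into the sentence and a gold CPR label.
example = {
    "text": "Aspirin inhibits COX-2 activity in vitro.",
    "chemical": {"mention": "Aspirin", "span": (0, 7)},
    "protein": {"mention": "COX-2", "span": (17, 22)},
    "label": "CPR:4",
}

def describe(instance):
    """Map an instance's CPR label to its human-readable relation name."""
    return EVALUATED_CPR_GROUPS.get(instance["label"], "not evaluated")

print(describe(example))  # -> downregulator/inhibitor
```

A model listed in the benchmark table below receives the text plus the two entity mentions and must predict the CPR group (or "no relation").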

Associated Benchmarks

This dataset is used in 1 benchmark.

Recent Benchmark Submissions

Task                | Model                         | Paper                                             | Date
Relation Extraction | BioLinkBERT (large)           | LinkBERT: Pretraining Language Models with …      | 2022-03-29
Relation Extraction | SciFive Large                 | SciFive: a text-to-text transformer model …       | 2021-05-28
Relation Extraction | BioT5X (base)                 | SciFive: a text-to-text transformer model …       | 2021-05-28
Relation Extraction | KeBioLM                       | Improving Biomedical Pretrained Language Models … | 2021-04-21
Relation Extraction | ELECTRAMed                    | ELECTRAMed: a new pre-trained language …          | 2021-04-19
Relation Extraction | CharacterBERT (base, medical) | CharacterBERT: Reconciling ELMo and BERT …        | 2020-10-20
Relation Extraction | BioMegatron                   | BioMegatron: Larger Biomedical Domain Language …  | 2020-10-12
Relation Extraction | PubMedBERT uncased            | Domain-Specific Language Model Pretraining for …  | 2020-07-31
Relation Extraction | NCBI_BERT(large) (P)          | Transfer Learning in Biomedical Natural …         | 2019-06-13
Relation Extraction | SciBert (Finetune)            | SciBERT: A Pretrained Language Model …            | 2019-03-26
Relation Extraction | SciBERT (Base Vocab)          | SciBERT: A Pretrained Language Model …            | 2019-03-26
Relation Extraction | BioBERT                       | BioBERT: a pre-trained biomedical language …      | 2019-01-25

Research Papers

Recent papers with results on this dataset: