CamVid (Cambridge-driving Labeled Video Database) is a road/driving scene understanding database which was originally captured as five video sequences with …
A stack of 2D grayscale images from a 3D X-ray Computed Tomography (XCT) scan of a glass fiber-reinforced polyamide 66 (GF-PA66) specimen. Usage: 2D/3D image …
A Multi-Task 4D Radar-Camera Fusion Dataset for Autonomous Driving on Water Surfaces. * WaterScenes, the first …
WildScenes is a bi-modal benchmark dataset consisting of multiple large-scale, sequential traversals in natural environments, including semantic annotations in high-resolution …
The xBD dataset contains over 45,000 km² of polygon-labeled pre- and post-disaster imagery. The dataset provides the post-disaster imagery …
Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants …
NTU RGB+D is a large-scale dataset for RGB-D human action recognition. It involves 56,880 samples of 60 action classes collected …
Benchmark for AMR Metrics based on Overt Objectives (Bamboo), the first benchmark to support empirical assessment of graph-based MR similarity …
This corpus includes annotations of cancer-related PubMed articles, covering 3 full papers (PMID:24651010, PMID:11777939, PMID:15630473) as well as the result …
Abstract Meaning Representation (AMR) Annotation Release 2.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University …
Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University …
New3, a set of 527 instances from AMR 3.0, whose original source was the LORELEI DARPA project – not included …
This corpus is an annotation of the novel The Little Prince by Antoine de Saint-Exupéry, published in 1943. We were …
This dataset was created to study the task of email subject line generation: automatically generating an email subject line from the email body. Source: …
CNN/Daily Mail is a dataset for text summarization. Human generated abstractive summary bullets were generated from news stories in CNN …
WikiHow is a dataset of more than 230,000 article and summary pairs extracted and constructed from an online knowledge base …
JerichoWorld is a dataset that enables the creation of learning agents that can build knowledge graph-based world models of interactive …
Although research on author profiling has progressed considerably for resource-rich languages, it is still in its infancy for low-resource languages …
The CATT benchmark dataset comprises 742 sentences, which were scraped from an internet news source in 2023. It covers multiple …
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of …
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of …
Most of the aspect based sentiment analysis research aims at identifying the sentiment polarities toward some explicit aspect terms while …
Aspect-based sentiment analysis (ABSA) typically focuses on extracting aspects and predicting their sentiments on individual sentences such as customer reviews. …
Target-based sentiment analysis or aspect-based sentiment analysis (ABSA) refers to addressing various sentiment analysis tasks at a fine-grained level, which …
MAMS is a challenge dataset for aspect-based sentiment analysis (ABSA), in which each sentence contains at least two aspects with …
Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. The majority of …
Aspect-based sentiment analysis (ABSA) aims to detect the targets (which are composed of contiguous words), aspects and sentiment polarities in …
This dataset is a real-world web page collection used for research on the automatic extraction of structured data (e.g., attribute-value …
The dataset contains product information from AliExpress Sports & Entertainment category. Each attribute value in "Item Specific" is matched against …
The dataset contains 3 million attribute-value annotations across 1257 unique categories created from 2.2 million cleaned Amazon product profiles. It …
The dataset contains Amazon products from 10 product categories with full human annotations. The dataset was collected in 2021. The …
The dataset contains product information from AliExpress Sports & Entertainment category. Each attribute value in "Item Specific" is matched against …
The dataset contains 3 million attribute-value annotations across 1257 unique categories created from 2.2 million cleaned Amazon product profiles. It …
The dataset contains Amazon products from 10 product categories with full human annotations. The dataset was collected in 2021. The …
The dataset contains 1,420 human-annotated product offers, systematically selected from the Web Data Commons Product Matching Corpus, featuring 24,582 …
There are eight essay sets. Each of the sets of essays was generated from a single prompt. Selected essays range …
A large-scale natural dataset in English to measure stereotypical biases in four domains: gender, profession, race, and religion. Source: [StereoSet: …
The Innodata Red Teaming Prompts aims to rigorously assess models’ factuality and safety. This dataset, due to its manual creation …
The TII-SSRC-23 dataset offers a comprehensive collection of network traffic patterns, meticulously compiled to support the development and research of …
[Real or Fake]: Fake Job Description Prediction. This dataset contains 18K job descriptions, out of which about 800 are …
Kickstarter is a community of more than 10 million people, comprising creative and tech enthusiasts, who help bring creative …
Don’t Patronize Me! (DPM) is an annotated dataset with Patronizing and Condescending Language towards vulnerable communities.
The TweepFake dataset consists of 25,572 social media messages posted either by bots or humans on Twitter. Each bot imitated …
CCGbank is a translation of the Penn Treebank into a corpus of Combinatory Categorial Grammar derivations. It pairs syntactic derivations …
The AlpacaEval set contains 805 instructions from self-instruct, open-assistant, vicuna, koala, and hh-rlhf. Those were selected so that the AlpacaEval ranking …
The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall …
Data Set Information: Extraction was done by Barry Becker from the 1994 Census database. A set of reasonably clean records …
In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-1M Insect …
The purpose of this dataset was to study gender bias in occupations. Online biographies, written in English, were collected to …
BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally occurring – they are …
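Since BoolQ labels are boolean rather than free text, a loader can map them directly to yes/no. A minimal sketch, assuming the field names (`question`, `title`, `answer`, `passage`) of the commonly distributed JSONL release; the passage text here is illustrative:

```python
import json

# One BoolQ-style record (illustrative values; field names assume the
# commonly distributed JSONL release of the dataset).
line = ('{"question": "is the sky blue", "title": "Sky", '
        '"answer": true, "passage": "The sky appears blue because ..."}')

record = json.loads(line)
# Yes/no answers are stored as boolean labels, not strings.
label = "yes" if record["answer"] else "no"
print(label)  # -> yes
```

This makes BoolQ a straightforward binary classification task over (question, passage) pairs.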
This dataset is a combination of the following three datasets: figshare, the SARTAJ dataset, and Br35H. This dataset contains 7022 …
The quality of AI-generated images has rapidly increased, leading to concerns of authenticity and trustworthiness. CIFAKE is a dataset that …
The CIFAR-100 dataset (Canadian Institute for Advanced Research, 100 classes) is a subset of the Tiny Images dataset and consists …
Common corruptions dataset for CIFAR10
Contains hundreds of frontal view X-rays and is the largest public resource for COVID-19 image and prognostic data, making it …
Data was collected for normal bearings, single-point drive end and fan end defects. Data was collected at 12,000 samples/second and …
The normal chest X-ray (left panel) depicts clear lungs without any areas of abnormal opacification in the image. Bacterial pneumonia …
We construct the ForgeryNet dataset, an extremely large face forgery dataset with unified annotations in image- and video-level data across …
A public data set of walking full-body kinematics and kinetics in individuals with Parkinson’s disease
HOWS-CL-25 (Household Objects Within Simulation dataset for Continual Learning) is a synthetic dataset especially designed for object classification on mobile …
The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. …
The IRFL dataset consists of idioms, similes, and metaphors with matching figurative and literal images, as well as two novel …
The goal for ISIC 2019 is to classify dermoscopic images among nine different diagnostic categories. 25,331 images are available for training across …
This dataset was presented as part of the ICLR 2023 paper "A framework for benchmarking Class-out-of-distribution detection and its application …
In this work, we introduce the In-Diagram Logic (InDL) dataset, an innovative resource crafted to rigorously evaluate the …
This data set comprises 22 fundus images with their corresponding manual annotations for the blood vessels, separated as arteries and …
The Liver-US dataset is a comprehensive collection of high-quality ultrasound images of the liver, including both normal and abnormal cases. …
The minimalist histopathology image analysis dataset (MHIST) is a binary classification dataset of 3,152 fixed-size images of colorectal polyps, each …
The process by which sections in a document are demarcated and labeled is known as section identification. Such sections are …
The MixedWM38 dataset (WaferMap) has more than 38,000 wafer maps, including 1 normal pattern, 8 single-defect patterns, and 29 mixed-defect …
Early detection of retinal diseases is one of the most important means of preventing partial or permanent blindness in patients. …
A large real-world event-based dataset for object classification. Source: HATS: Histograms of Averaged Time Surfaces for Robust Event-based Object Classification
The N-ImageNet dataset is an event-camera counterpart for the ImageNet dataset. The dataset is obtained by moving an event camera …
The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries …
The RSSCN7 dataset contains satellite images acquired from Google Earth, which is originally collected for remote sensing scene classification. We …
The Recognizing Textual Entailment (RTE) datasets come from a series of textual entailment challenges. Data from RTE1, RTE2, RTE3 and …
The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. …
This dataset is based on the Spiking Heidelberg Digits (SHD) dataset. Sample inputs consist of two spike encoded digits sampled …
The SPOTS-10 dataset is an extensive collection of grayscale images showcasing diverse patterns found in ten animal species. Specifically, SPOTS-10 …
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the …
Sentiment140 is a dataset that allows you to discover the sentiment of a brand, product, or topic on Twitter. Source: …
This dataset consists of computer-generated images for gas leakage segmentation. It features diverse backgrounds, interfering foreground objects, and precise ground …
arXiv: https://arxiv.org/abs/2304.11708. Accepted at the 29th International Congress on Sound and Vibration (ICSV29). The drone has been used for various …
Table-ACM12K (TACM12K) is a relational table dataset derived from the ACM heterogeneous graph dataset. It includes four tables: papers, authors, …
Table-LastFm2K (TLF2K) is a relational table dataset derived from the classical LastFM2K dataset. It contains three tables: artists, user_artists, and …
Table-MovieLens1M (TML1M) is a relational table dataset derived from the classical MovieLens1M dataset. It consists of three tables: users, movies, …
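These relational table datasets (TACM12K, TLF2K, TML1M) share one structure: entity tables linked by a relation table through foreign keys. A minimal stdlib sketch of such a join, with toy values and hypothetical column names (`user_id`, `movie_id`, `rating` are illustrative, not the released schema):

```python
# Two entity tables keyed by ID, plus a relation table linking them
# (column names and values are hypothetical, for illustration only).
users = {1: {"age": 25}, 2: {"age": 32}}
movies = {10: {"title": "Heat"}, 11: {"title": "Alien"}}
ratings = [
    {"user_id": 1, "movie_id": 10, "rating": 5},
    {"user_id": 2, "movie_id": 10, "rating": 4},
    {"user_id": 1, "movie_id": 11, "rating": 3},
]

# Join the relation table against both entity tables, as a
# table-learning data loader might do before feature extraction.
joined = [
    (users[r["user_id"]]["age"], movies[r["movie_id"]]["title"], r["rating"])
    for r in ratings
]
print(joined[0])  # -> (25, 'Heat', 5)
```

The same pattern applies to papers/authors/citations in TACM12K and artists/user_artists in TLF2K.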
The Winograd Schema Challenge was introduced both as an alternative to the Turing Test and as a test of a …
WiC is a benchmark for the evaluation of context-sensitive word embeddings. WiC is framed as a binary classification task. Each …
Enlarge the dataset to understand how the image background affects the computer vision ML model. With the following topics: Blur Background …
2010 i2b2/VA is a biomedical dataset for relation classification and entity typing.
2010 i2b2/VA is a biomedical dataset for relation classification and entity typing.
The APPS dataset consists of problems collected from different open-access coding websites such as Codeforces, Kattis, and more. The APPS …
A new large dataset with over 100,000 examples consisting of Java classes from online code repositories, and develop a new …
The CMU CoNaLa, the Code/Natural Language Challenge dataset is a joint project from the Carnegie Mellon University NeuLab and Strudel …
The CoNaLa Extended With Question Text is an extension to the original CoNaLa Dataset (Papers With Code Link) proposed in …
CodeContests is a competitive programming dataset for machine-learning. This dataset was used when training AlphaCode. It consists of programming problems, …
In this paper, we introduce a novel benchmarking framework designed specifically for evaluations of data science agents. Our contributions are …
The Django dataset is a dataset for code generation comprising 16,000 training, 1,000 development and 1,805 test annotations. Each …
The FloCo dataset contains 11,884 flowchart images and their corresponding Python code.
This is an evaluation harness for the HumanEval problem solving dataset described in the paper "Evaluating Large Language Models Trained …
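The harness reports the pass@k metric. The HumanEval paper defines an unbiased estimator for it: generate n samples per problem, count the c that pass the unit tests, and compute pass@k = 1 − C(n−c, k)/C(n, k). A minimal sketch:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from the HumanEval paper:
    the probability that at least one of k samples drawn from n
    is correct, given that c of the n samples pass the unit tests."""
    if n - c < k:
        # Fewer incorrect samples than k draws: success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 10 samples, 3 correct: pass@1 is simply 3/10.
print(round(pass_at_k(10, 3, 1), 6))  # -> 0.3
```

Averaging this quantity over all problems gives the benchmark score.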
Extension test cases of HumanEval, as well as generated code.
The benchmark consists of around 1,000 crowd-sourced Python programming problems, designed to be solvable by entry-level programmers, covering programming fundamentals, …
Recent advancements in large language models (LLMs) have showcased their exceptional abilities across various tasks, such as code generation, problem-solving …
RES-Q is a natural language instruction-based benchmark for evaluating Repository Editing Systems, which consists of 100 handcrafted repository editing tasks …
Shellcode_IA32 is a dataset containing 20 years of shellcodes from a variety of sources; it is the largest collection of shellcodes …
TACO (Topics in Algorithmic Code generation dataset) is a dataset focused on algorithmic code generation, designed to provide a more …
Turbulence is a new benchmark for systematically evaluating the correctness and robustness of instruction-tuned large language models (LLMs) for code …
Verified Smart Contracts Code Comments is a dataset of real Ethereum smart contract functions, containing "code, comment" pairs of both …
Test-driven benchmark to challenge LLMs to write a JavaScript React application.
Test-driven benchmark to challenge LLMs to write a long JavaScript React application.
WikiSQL consists of a corpus of 87,726 hand-annotated SQL query and natural language question pairs. These SQL queries are further …
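Each WikiSQL query is stored as a logical form over one table, from which the SQL string can be reconstructed. A minimal sketch, assuming the `sel`/`agg`/`conds` fields and operator code tables of the released format; the question, table, and column names are illustrative:

```python
# A WikiSQL-style logical form (illustrative values; the released data
# pairs each natural-language question with a structured query).
record = {
    "question": "What position does Bob play?",
    "table_id": "1-10015132-11",
    "sql": {"sel": 3, "agg": 0, "conds": [[0, 0, "Bob"]]},
}

# Hypothetical column names for the referenced table.
columns = ["Player", "No.", "Nationality", "Position"]
agg_ops = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]  # aggregation codes
cond_ops = ["=", ">", "<"]                           # condition operator codes

sql = record["sql"]
select = columns[sql["sel"]]
if agg_ops[sql["agg"]]:
    select = f'{agg_ops[sql["agg"]]}({select})'
where = " AND ".join(
    f"{columns[col]} {cond_ops[op]} '{val}'" for col, op, val in sql["conds"]
)
query = f"SELECT {select} FROM table WHERE {where}"
print(query)  # -> SELECT Position FROM table WHERE Player = 'Bob'
```

The constrained logical form is what makes WikiSQL tractable for sequence-to-structure models.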
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of …
The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each. …
Choice of Plausible Alternatives for Russian language (PARus) evaluation provides researchers with a tool for assessing progress in open-domain commonsense …
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an …
Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) is a large-scale reading comprehension dataset which requires commonsense reasoning. ReCoRD consists of …
Russian reading comprehension with Commonsense reasoning (RuCoS) is a large-scale reading comprehension dataset that requires commonsense reasoning. RuCoS consists of …
Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate …
This dataset is collected via the WinoGAViL game to collect challenging vision-and-language associations. Inspired by the popular card game Codenames, …
WinoGrande is a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the …
This is a dataset of 3 English books which do not contain the letter "e" in them. This dataset includes …
The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall …
This dataset has 20 classes and each class has about 1000 documents. The data split for train/validation/test is 1600/200/200. We …
AIDS is a graph dataset. It consists of 2000 graphs representing molecular compounds which are constructed from the AIDS Antiviral …
A set of 10 DSC datasets (reviews of 10 products) to produce sequences of tasks. The products are Sports, Toys, …
F-CelebA - This dataset is adapted from federated learning. Federated learning is an emerging machine learning paradigm with an emphasis …
Permuted MNIST is an MNIST variant that consists of 70,000 images of handwritten digits from 0 to 9, where 60,000 …
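The defining transformation is a single fixed, random permutation of pixel positions applied identically to every image. A minimal stdlib sketch on a toy 4-pixel "image" (real MNIST images flatten to 784 pixels); the seed value is an arbitrary choice for illustration:

```python
import random

# One fixed permutation, shared by all images: seeding the RNG makes
# the permutation deterministic across the whole dataset.
rng = random.Random(0)
perm = list(range(4))          # pixel indices of a toy 4-pixel image
rng.shuffle(perm)

def permute(image):
    """Reorder pixels by the shared fixed permutation."""
    return [image[i] for i in perm]

img_a, img_b = [10, 20, 30, 40], [5, 6, 7, 8]
# The permutation preserves pixel values (it only moves them), and the
# same reordering is applied to every image in the dataset.
assert sorted(permute(img_a)) == sorted(img_a)
print(permute(img_a), permute(img_b))
```

Because spatial structure is destroyed while per-pixel statistics are preserved, Permuted MNIST is a common stress test for architectures that do (or do not) exploit locality.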
ConvFinQA is a dataset designed to study the chain of numerical reasoning in conversational question answering. The dataset contains 3892 …
Advising Corpus is a dataset based on an entirely new collection of dialogues in which university students are being advised …
We release Douban Conversation Corpus, comprising a training data set, a development set and a test set for retrieval based …
We release E-commerce Dialogue Corpus, comprising a training data set, a development set and a test set for retrieval based …
|           | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | …
|           | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | …
The Ubuntu IRC dataset is a valuable resource for research in natural language understanding and dialogue systems. …
WebLINX is a large-scale benchmark of 100K interactions across 2300 expert demonstrations of conversational web navigation. It covers a broad …
The 'Deutsche Welle corpus for Information Extraction' (DWIE) is a multi-task dataset that combines four main Information Extraction (IE) annotation …
The DocRED Information Extraction (DocRED-IE) dataset extends the DocRED dataset for the Document-level Closed Information Extraction (DocIE) task. DocRED-IE is …
GAP is a graph processing benchmark suite with the goal of helping to standardize graph processing evaluations. Fewer differences between …
LitBank is an annotated dataset of 100 works of English-language fiction to support tasks in natural language processing and the …
OntoGUM is an OntoNotes-like coreference dataset converted from GUM, an English corpus covering 12 genres using deterministic rules.
A large-scale English dataset for coreference resolution. The dataset is designed to embody the core challenges in coreference, such as …
Consists of multiple sentences whose clues are arranged by difficulty (from obscure to obvious) and uniquely identify a well-known entity …
WikiCoref is an English corpus annotated for anaphoric relations, where all documents are from the English version of Wikipedia. Source: …
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
The Cross-lingual Choice of Plausible Alternatives (XCOPA) dataset is a benchmark to evaluate the ability of machine learning models to …
The CUHK-PEDES dataset is a caption-annotated pedestrian dataset. It contains 40,206 images over 13,003 persons. Images are collected from five …
Dataset contains 33,010 molecule-description pairs split into 80%/10%/10% train/val/test splits. The goal of the task is to retrieve the relevant …
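At 33,010 pairs, an 80/10/10 split yields 26,408/3,301/3,301 examples. A minimal sketch of such a deterministic split (sequential slicing shown for illustration; the released split assignment may differ):

```python
# Integer arithmetic keeps the split sizes exact and reproducible.
n = 33_010
n_train = n * 8 // 10           # 26,408
n_val = n // 10                 # 3,301
n_test = n - n_train - n_val    # 3,301 (remainder goes to test)

pairs = list(range(n))          # stand-ins for (molecule, description) pairs
train = pairs[:n_train]
val = pairs[n_train:n_train + n_val]
test = pairs[n_train + n_val:]
print(len(train), len(val), len(test))  # -> 26408 3301 3301
```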
The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators. Source: [Guiding …
The Remote Sensing Image Captioning Dataset (RSICD) is a dataset for remote sensing image captioning task. It contains more than …
Recipe1M+ is a dataset which contains one million structured cooking recipes with 13M associated images. Source: [Recipe1M+: A Dataset for …
SoundingEarth consists of co-located aerial imagery and audio samples all around the world.
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
The CIFAR-10 database (Canadian Institute For Advanced Research database) is a large collection of natural color images. It has a …
The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the …
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 …
The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct …
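Since every SQuAD answer is a span of the context paragraph, the released JSON stores the answer text together with its character offset, and the offset must recover the exact span. A minimal sketch with an illustrative record (field names follow the v1.1 JSON):

```python
# Illustrative SQuAD-style record: the answer is located in the context
# by a character offset ("answer_start") plus the answer text itself.
context = ("The Normans were the people who in the 10th and 11th "
           "centuries gave their name to Normandy.")
qa = {
    "question": "In what centuries did the Normans give their name to Normandy?",
    "answers": [{"text": "10th and 11th centuries", "answer_start": 39}],
}

ans = qa["answers"][0]
start = ans["answer_start"]
span = context[start:start + len(ans["text"])]
assert span == ans["text"]   # offset and text must agree exactly
print(span)  # -> 10th and 11th centuries
```

Extractive QA models exploit this by predicting start and end positions rather than generating free text.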
Abstract Meaning Representation (AMR) Annotation Release 3.0 was developed by the Linguistic Data Consortium (LDC), SDL/Language Weaver, Inc., the University …
DART is a large dataset for open-domain structured data record to text generation. DART consists of 82,191 examples across different …
End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from …
GenWiki is a large-scale dataset for knowledge graph-to-text (G2T) and text-to-knowledge graph (T2G) conversion. It is introduced in the paper …
A new dataset on the baseball domain. Source: Data-to-text Generation with Entity Modeling
This dataset consists of (human-written) NBA basketball game summaries aligned with their corresponding box- and line-scores. Summaries taken from rotowire.com …
ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a …
The ViGGO corpus is a set of 6,900 meaning representation to natural language utterance pairs in the video game domain. …
The WebNLG corpus comprises of sets of triplets describing facts (entities and relations between them) and the corresponding facts in …
This dataset gathers 428,748 person and 12,236 animal infoboxes with descriptions, based on a Wikipedia dump (2018/04/01) and Wikidata (2018/04/12).
It consists of an extensive collection of a high quality cross-lingual fact-to-text dataset in 11 languages: Assamese (as), Bengali (bn), …
The MNIST database (Modified National Institute of Standards and Technology database) is a large collection of handwritten digits. It has …
USPS is a digit dataset automatically scanned from envelopes by the U.S. Postal Service containing a total of 9,298 16×16 …
The task builds on the CoNLL-2008 task and extends it to multiple languages. The core of the task is to …
Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. …
The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall …
The Universal Dependencies (UD) project seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for multiple languages. The …
In this paper, we propose Text-based Open Molecule Generation Benchmark (TOMG-Bench), the first benchmark to evaluate the open-domain molecule generation …
This dataset was collected with the goal of assessing dialog evaluation metrics. In the paper, USR: An Unsupervised and Reference …
This dataset was collected with the goal of assessing dialog evaluation metrics. In the paper, USR: An Unsupervised and Reference …
FusedChat is an inter-mode dialogue dataset. It contains dialogue sessions fusing task-oriented dialogues (TOD) and open-domain dialogues (ODD). Based on …
Harry Potter Dialogue is the first dialogue dataset that integrates with scene, attributes and relations which are dynamically changed as …
A new open-vocabulary language modelling benchmark derived from books. Source: Compressive Transformers for Long-Range Sequence Modelling
CANARD is a dataset for question-in-context rewriting that consists of questions each given in a dialog context together with a …
This discourse treebank includes annotated instructional texts originally assembled at the Information Technology Research Institute, University of Brighton. This dataset …
A machine reading comprehension (MRC) dataset with discourse structure built over multiparty dialog. Molweni's source samples from the Ubuntu Chat …
The Rhetorical Structure Theory (RST) Discourse Treebank consists of 385 Wall Street Journal articles from the Penn Treebank annotated with …
EPHOIE is a fully-annotated dataset which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE …
The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 …
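A citation network like Cora is naturally represented as an adjacency structure built from directed citation links. A minimal stdlib sketch with toy paper IDs (the real graph has 2708 nodes and 5429 citation links):

```python
from collections import defaultdict

# Toy citation edges: (citing paper, cited paper).
citations = [(0, 1), (0, 2), (2, 1)]

# Adjacency map from each paper to the set of papers it cites.
adj = defaultdict(set)
for src, dst in citations:
    adj[src].add(dst)

print(sorted(adj[0]))  # -> [1, 2]
```

Graph learning methods on Cora combine this adjacency structure with per-node bag-of-words features to predict each publication's class.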
The Hallmarks of Cancer (*HOC) corpus consists of 1852 PubMed publication abstracts manually annotated by experts according to the Hallmarks …
Hyperpartisan News Detection was a dataset created for PAN @ SemEval 2019 Task 4. Given a news article text, decide …
LUN is used for unreliable news source classification; the dataset includes 17,250 articles from satire, propaganda, and hoax sources.
The Reuters-21578 dataset is a collection of documents with news articles. The original corpus has 10,369 documents and a vocabulary …
Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a …
This is a dataset for evaluating summarisation methods for research papers. Source: [A Discourse-Aware Attention Model for Abstractive Summarization of …
The Tobacco-3482 dataset consists of document images belonging to 10 classes such as letter, form, email, resume, memo, etc. The …
Bc8BioRED is built upon BioRED 2022 with the addition of directionality annotations. The training and development sets from the original …
The 'Deutsche Welle corpus for Information Extraction' (DWIE) is a multi-task dataset that combines four main Information Extraction (IE) annotation …
The DocRED Information Extraction (DocRED-IE) dataset extends the DocRED dataset for the Document-level Closed Information Extraction (DocIE) task. DocRED-IE is …
The Re-DocRED Dataset resolved the following problems of DocRED: 1. Resolved the incompleteness problem by supplementing large amounts of relation …
13,201 clips from 79 TV shows. Each video clip was manually annotated with six emotion categories, including “anger”, “disgust”, “fear”, …
CMU Multimodal Opinion Sentiment and Emotion Intensity (CMU-MOSEI) is the largest dataset of sentence-level sentiment analysis and emotion recognition in …
The MFA (Many Faces of Anger) dataset includes 200 in-the-wild videos from North American and Persian cultures with fine-grained labels …
ROCStories is a collection of commonsense short stories. The corpus consists of 100,000 five-sentence stories. Each story logically follows everyday …
The EMOTIC dataset, named after EMOTions In Context, is a database of images with people in real environments, annotated with …
1000 songs has been selected from Free Music Archive (FMA). The excerpts which were annotated are available in the same …
Fer2013 contains approximately 30,000 facial grayscale images of different expressions with size restricted to 48×48, and the main labels of …
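Fer2013 is commonly distributed as a CSV whose pixels column is a single space-separated string of 48×48 grayscale values. A minimal sketch of decoding that string back into rows, demonstrated on a toy 2×2 case (the real images use side length 48):

```python
def decode_pixels(pixels: str, side: int):
    """Turn a space-separated pixel string into a side x side grid."""
    values = [int(v) for v in pixels.split()]
    assert len(values) == side * side, "unexpected image size"
    # Slice the flat list into rows of `side` pixels each.
    return [values[r * side:(r + 1) * side] for r in range(side)]

tiny = decode_pixels("0 255 128 64", 2)
print(tiny)  # -> [[0, 255], [128, 64]]
```

For the real data, `decode_pixels(row["pixels"], 48)` recovers a 48×48 image grid.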
The MSP-Podcast corpus contains speech segments from podcast recordings which are perceptually annotated using crowdsourcing. The collection of this corpus …
The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) contains 7,356 files (total size: 24.8 GB). The database contains …
The SEED dataset contains subjects' EEG signals when they were watching films clips. The film clips are carefully selected so …
This dataset contains benchmark scores for EQ-Bench, a novel benchmark designed to evaluate aspects of emotional intelligence in Large Language …
The DBP2.0 dataset can be downloaded from the figshare repository. It has three entity alignment settings, i.e., ZH-EN, JA-EN and …
The AQUAINT Corpus consists of newswire text data in English, drawn from three sources: the Xinhua News Service (People's Republic …
The DocRED Information Extraction (DocRED-IE) dataset extends the DocRED dataset for the Document-level Closed Information Extraction (DocIE) task. DocRED-IE is …
A large new multilingual dataset for multilingual entity linking. Source: Entity Linking in 100 Languages
AIDA/testc is a new challenging test set for entity linking systems containing 131 Reuters news articles published between December 5th …
EC-FUNSD is introduced in [arXiv:2402.02379] as a benchmark of semantic entity recognition (SER) and entity linking (EL), designed for the …
The FIGER dataset is an entity recognition dataset where entities are labelled using fine-grained system 112 tags, such as person/doctor, …
Form Understanding in Noisy Scanned Documents (FUNSD) comprises 199 real, fully annotated, scanned forms. The documents are noisy and vary …
GUM is an open source multilayer English corpus of richly annotated texts from twelve text types. Annotations include: * Multiple …
MedMentions is a new manually annotated resource for the recognition of biomedical concepts. What distinguishes MedMentions from other annotated biomedical …
Wikipedia abstracts automatically annotated with WikiData entities and relations that are entailed by the text. Over 9 million triplets.
full_set_RD_ann_MIMIC_III_disch.csv
WiC-TSV is a new multi-domain evaluation benchmark for Word Sense Disambiguation. More specifically, it is a framework for Target Sense …
The Abt-Buy dataset for entity resolution derives from the online retailers Abt.com and Buy.com. The dataset contains 1081 entities from …
The Amazon-Google dataset for entity resolution derives from the online retailers Amazon.com and the product search service of Google accessible …
WDC Products is an entity matching benchmark which provides for the systematic evaluation of matching systems along combinations of three …
The DocRED Information Extraction (DocRED-IE) dataset extends the DocRED dataset for the Document-level Closed Information Extraction (DocIE) task. DocRED-IE is …
The FIGER dataset is an entity recognition dataset where entities are labelled using fine-grained system 112 tags, such as person/doctor, …
The Open Entity dataset is a collection of about 6,000 sentences with fine-grained entity types annotations. The entity types are …
The GENIA corpus is the primary collection of biomedical literature compiled and annotated within the scope of the GENIA project. …
CLEVR-X is a dataset that extends the CLEVR dataset with natural language explanations in the context of VQA. It consists …
Visual Commonsense Reasoning (VCR) is a large-scale dataset for cognition-level visual understanding. Given a challenging question about an image, machines …
WHOOPS! is a dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers …
e-SNLI-VE is a large VL (vision-language) dataset with NLEs (natural language explanations) with over 430k instances for which the explanations …
DebateSum consists of 187328 debate documents, arguments (also can be thought of as abstractive summaries, or queries), word-level extractive summaries, …
GovReport is a dataset for long document summarization, with significantly longer documents and summaries. It consists of reports written by …
CiteSum is a large-scale scientific extreme summarization benchmark.
TLDR9+ is a large-scale summarization dataset containing over 9 million training instances extracted from Reddit discussion forum. This dataset is …
The Extreme Summarization (XSum) dataset is a dataset for evaluation of abstractive single-document summarization systems. The goal is to create …
ArgSciChat is an argumentative dialogue dataset. It consists of 498 messages collected from 41 dialogues on 20 scientific papers. It …
FEVER is a publicly available dataset for fact extraction and verification against textual sources. It consists of 185,445 claims manually …
Along with the COVID-19 pandemic we are also fighting an 'infodemic'. Fake news and rumors are rampant on social media. Believing …
FNC-1 was designed as a stance detection dataset and it contains 75,385 labeled headline and article pairs. The pairs are …
LIAR is a publicly available dataset for fake news detection. Over a decade, 12.8K manually labeled short statements were collected …
RAWFC was constructed from scratch by collecting claims from Snopes and relevant raw reports retrieved using claim …
The Weibo NER dataset is a Chinese Named Entity Recognition dataset drawn from the social media website Sina Weibo. Source: …
CaseHOLD (Case Holdings On Legal Decisions) is a law dataset comprising over 53,000 multiple choice questions to identify the …
The Describable Textures Dataset (DTD) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by …
Eurosat is a dataset and deep learning benchmark for land use and land cover classification. The dataset is based on …
We built a large lung CT scan dataset for COVID-19 by curating data from 7 public datasets listed in the …
MR Movie Reviews is a dataset for use in sentiment-analysis experiments. Available are collections of movie-review documents labeled with respect …
Microsoft Research Paraphrase Corpus (MRPC) is a corpus of 5,801 sentence pairs collected from newswire articles. Each pair is …
MedConceptsQA - Open Source Medical Concepts QA Benchmark. The benchmark can be found here: https://huggingface.co/datasets/ofir408/MedConceptsQA
The MedNLI dataset consists of sentence pairs developed by physicians from the Past Medical History section of MIMIC-III clinical …
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary …
The Scene UNderstanding (SUN) database contains 899 categories and 130,519 images. There are 397 well-sampled categories to evaluate numerous state-of-the-art …
The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. …
UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These …
The RAFT benchmark (Realworld Annotated Few-shot Tasks) focuses on naturally occurring tasks and uses an evaluation setup that mirrors deployment. …
The SST-5, also known as the Stanford Sentiment Treebank with 5 labels, is a dataset used for sentiment analysis. The …
The dataset contains training and evaluation data for 12 languages: Vietnamese, Romanian, Latvian, Czech, Polish, …
GSM8K is a dataset of 8.5K high quality linguistically diverse grade school math word problems created by human problem writers. …
This dataset contains 7984 user comments from an Austrian online newspaper. The comments have been annotated by 4 or more …
JFLEG is for developing and evaluating grammatical error correction (GEC). Unlike other corpora, it represents a broad range of language …
MuCGEC is a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three …
UA-GEC: Grammatical Error Correction and Fluency Corpus for the Ukrainian Language
WI-LOCNESS is part of the Building Educational Applications 2019 Shared Task for Grammatical Error Correction. It consists of two datasets: …
The WebNLG corpus comprises sets of triplets describing facts (entities and relations between them) and the corresponding facts in …
The AND Dataset contains 13700 handwritten samples and 15 corresponding expert examined features for each sample. The dataset is released …
CEDAR Signature is a database of off-line signatures for signature verification. Each of 55 individuals contributed 24 signatures thereby creating …
A corpus for offensive language and hate speech detection in Danish. The DKhate dataset contains 3600 comments from the web …
Hate Speech is commonly defined as any communication that disparages a person or a group on the basis of some …
Hate speech has become one of the most significant issues in modern society, with implications in both the online and …
Covers multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly …
The OLID is a hierarchical dataset to identify the type and the target of offensive texts in social media. The …
This is an abusive/offensive language detection dataset for Albanian. The data is formatted following the OffensEval convention. Data is from …
The Toxic Language Detection for Brazilian Portuguese (ToLD-Br) is a dataset with tweets in Brazilian Portuguese annotated according to different …
KanHope is a code mixed hope speech dataset for equality, diversity, and inclusion in Kannada, an under-resourced Dravidian language. The …
The dataset contains training and evaluation data for 12 languages: Vietnamese, Romanian, Latvian, Czech, Polish, …
This dataset consists of images and annotations in Bengali. The images are human annotated in Bengali by two adult native …
The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset. It is designed to …
COCO Captions contains over one and a half million captions describing over 330,000 images. For the training and validation images, …
Dataset contains 33,010 molecule-description pairs split into 80%/10%/10% train/val/test splits. The goal of the task is to retrieve the relevant …
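The 80%/10%/10% split mentioned above can be reproduced with a simple shuffled index cut. A minimal sketch (the seed and function name are illustrative, not the dataset's official split procedure):

```python
import numpy as np

def split_indices(n: int, seed: int = 0):
    """Shuffle indices 0..n-1 and cut them into 80%/10%/10% train/val/test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.8 * n)
    n_val = int(0.1 * n)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train, val, test = split_indices(33010)
print(len(train), len(val), len(test))  # 26408 3301 3301
```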
Automatic image captioning is the task of producing a natural-language utterance (usually a sentence) that correctly reflects the visual content …
FlickrStyle10K is collected and built on Flickr30K image caption dataset. The original FlickrStyle10K dataset has 10,000 pairs of images and …
IU X-ray (Demner-Fushman et al., 2016) is a set of chest X-ray images paired with their corresponding diagnostic reports. The …
We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe …
Object HalBench is a benchmark used to evaluate the performance of Language Models, particularly those that are multimodal (i.e., they …
Peir Gross (Jing et al., 2018) was collected with descriptions in the Gross sub-collection from PEIR digital library, resulting in …
SCICAP is a large-scale image captioning dataset that contains real-world scientific figures and captions. SCICAP was constructed using more than …
WHOOPS! is a dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers …
ARKitScenes is an RGB-D dataset captured with the widely available Apple LiDAR scanner. Along with the per-frame raw data (Wide …
A binarized version of MNIST. Source: Binarized MNIST
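Binarized MNIST is commonly built by sampling each pixel as a Bernoulli draw with the grayscale intensity as the probability. A minimal numpy sketch under that assumption; the synthetic batch stands in for real MNIST data, and the function name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(images: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Sample each pixel as Bernoulli(p), with p = grayscale intensity in [0, 1]."""
    return (rng.random(images.shape) < images).astype(np.uint8)

# Synthetic stand-in for an MNIST batch: 4 images of 28x28 with values in [0, 1].
batch = rng.random((4, 28, 28))
binary = binarize(batch, rng)
print(binary.shape, np.unique(binary).tolist())  # (4, 28, 28) [0, 1]
```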
The CIFAR-10 database (Canadian Institute For Advanced Research database) is a large collection of natural color images. It has a …
The CIFAR-100 dataset (Canadian Institute for Advanced Research, 100 classes) is a subset of the Tiny Images dataset and consists …
CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; …
CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels …
The CelebA-HQ dataset is a high-quality version of CelebA that consists of 30,000 images at 1024×1024 resolution. Source: [IntroVAE: Introspective …
Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense …
Flickr-Faces-HQ (FFHQ) consists of 70,000 high-quality PNG images at 1024×1024 resolution and contains considerable variation in terms of age, ethnicity …
Fashion-MNIST is a dataset comprising of 28×28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per …
The Large-scale Scene Understanding (LSUN) challenge aims to provide a different benchmark for large-scale scene classification and understanding. The LSUN …
The MNIST database (Modified National Institute of Standards and Technology database) is a large collection of handwritten digits. It has …
MetFaces is an image dataset of human faces extracted from works of art. The dataset consists of 1336 high-quality PNG …
Samples from NASA Perseverance and a set of GAN-generated synthetic images from Neural Mars.
The ObjectsRoom dataset is based on the MuJoCo environment used by the Generative Query Network [4] and is a multi-object …
RC-49 is a benchmark dataset for generating images conditional on a continuous scalar variable. It is made by rendering 49 …
The Replica Dataset is a dataset of high quality reconstructions of a variety of indoor spaces. Each reconstruction has clean …
This is a dataset of 306,006 galaxies whose coordinates are taken from the Sloan Digital Sky Survey Data Release 7 …
The STL-10 is an image dataset derived from ImageNet and popularly used to evaluate algorithms of unsupervised feature learning or …
A simulation-based dataset featuring 20,000 stack configurations composed of a variety of elementary geometric primitives richly annotated regarding semantics and …
The Stacked MNIST dataset is derived from the standard MNIST dataset with an increased number of discrete modes. 240,000 RGB …
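Stacked MNIST is commonly built by placing three randomly drawn digits in the R, G, and B channels, so the combined label has 10^3 = 1000 possible modes. A sketch under that assumption, with synthetic arrays standing in for MNIST images:

```python
import numpy as np

rng = np.random.default_rng(0)

def stack_digits(imgs: np.ndarray, labels: np.ndarray, rng) -> tuple:
    """Stack three randomly chosen grayscale digits into the R, G, B channels.
    The combined label encodes the triple as a 3-digit number (1000 modes)."""
    idx = rng.integers(0, len(imgs), size=3)
    rgb = np.stack([imgs[i] for i in idx], axis=-1)  # shape (28, 28, 3)
    label = 100 * labels[idx[0]] + 10 * labels[idx[1]] + labels[idx[2]]
    return rgb, int(label)

# Synthetic stand-in for MNIST: 100 random 28x28 images with digit labels.
imgs = rng.random((100, 28, 28))
labels = rng.integers(0, 10, size=100)
rgb, label = stack_digits(imgs, labels, rng)
print(rgb.shape, 0 <= label < 1000)  # (28, 28, 3) True
```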
The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. …
The Stanford Dogs dataset contains 20,580 images of 120 classes of dogs from around the world, which are divided into …
A dense-text image benchmark for evaluating large generative models' ability to generate text.
Vision and Language Navigation in Continuous Environments (VLN-CE) is an instruction-guided navigation task with crowdsourced instructions, realistic environments, and unconstrained …
ViZDoom is an AI research platform based on the classical First Person Shooter game Doom. The most popular game mode …
WISE, the first benchmark specifically designed for World Knowledge-Informed Semantic Evaluation. WISE moves beyond simple word-pixel mapping by challenging models …
The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset. It is designed to …
FETA benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales catalogue brochures. The FETA Car-Manuals dataset …
The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators. Source: [Guiding …
The Remote Sensing Image Captioning Dataset (RSICD) is a dataset for remote sensing image captioning task. It contains more than …
WHOOPS! is a dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers …
The Belgian Statutory Article Retrieval Dataset (BSARD) is a French native corpus for studying statutory article retrieval. BSARD consists of …
CQADupStack is a benchmark dataset for community question-answering research. It contains threads from twelve StackExchange subforums, annotated with duplicate question …
The MS MARCO (Microsoft MAchine Reading Comprehension) is a collection of datasets focused on deep learning in search. The first …
The MSLR-WEB30K dataset consists of 30,000 search queries over the documents from search results. The data also contains the values …
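MSLR-WEB30K is distributed in the LETOR text format, one judged query-document pair per line: a relevance label, a `qid`, then `feature:value` pairs. A minimal parser sketch (the sample line is illustrative, not taken from the dataset):

```python
def parse_letor_line(line: str):
    """Parse 'label qid:<q> <fid>:<val> ...' into (label, qid, features)."""
    parts = line.split()
    label = int(parts[0])                      # graded relevance judgment
    qid = int(parts[1].split(":")[1])          # query identifier
    feats = {int(k): float(v) for k, v in (p.split(":") for p in parts[2:])}
    return label, qid, feats

label, qid, feats = parse_letor_line("2 qid:10 1:0.03 2:0.0 3:1.0")
print(label, qid, feats[3])  # 2 10 1.0
```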
MTEB is a benchmark that spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 …
This dataset evaluates instruction following ability of large language models. There are 500+ prompts with instructions such as "write an …
KUAKE Query Intent Classification, a dataset for intent classification, is used for the KUAKE-QIC task. Given the queries of search …
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks …
A labelled version of the ORCAS click-based dataset of Web queries, which provides 18 million connections to 10 million distinct …
A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets. …
The ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts about humans asking …
Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents …
We collect utterances from the Chinese Artificial Intelligence Speakers (CAIS), and annotate them with slot tags and intent labels. The …
This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that …
The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenges focused on improving the state of the art …
This project contains natural language data for human-robot interaction in the home domain, which we collected and annotated for evaluating NLU …
Dataset is constructed from the single intent dataset ATIS. This is a publicly available multi intent dataset, which can be downloaded …
Dataset is constructed from the single intent dataset SNIPS. This is a publicly available multi intent dataset, which can be downloaded …
In the paper, to bridge the research gap, we propose a new and important task, Profile-based Spoken Language Understanding (ProSLU), …
The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of …
The ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts about humans asking …
The PATIS is a Persian language dataset for intent detection and slot filling.
The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of …
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
UNSW-NB15 is a network intrusion dataset. It contains nine different attack types, including DoS, worms, Backdoors, and Fuzzers. The dataset contains …
The dataset contains training and evaluation data for 12 languages: Vietnamese, Romanian, Latvian, Czech, Polish, …
Abstract GENeration DAtaset (AGENDA) is a dataset of knowledge graphs paired with scientific abstracts. The dataset consists of 40k paper …
ENT-DESC involves retrieving abundant knowledge of various types of main entities from a large knowledge graph (KG), which makes the …
EventNarrative is a knowledge graph-to-text dataset from publicly available open-world knowledge graphs. EventNarrative consists of approximately 230,000 graphs and their …
Adopts two subsets of Freebase (Bollacker et al., 2008) as Knowledge Bases to construct the PathQuestion (PQ) and the PathQuestion-Large …
The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It …
WikiGraphs is a dataset of Wikipedia articles each paired with a knowledge graph, to facilitate the research in conditional text …
OCR is inevitably linked to NLP since its final output is in text. Advances in document intelligence are driving the …
EPHOIE is a fully-annotated dataset which is the first Chinese benchmark for both text spotting and visual information extraction. EPHOIE …
The paper used 500 scanned Electronic Theses and Dissertation cover pages (i.e., front pages). The dataset contains several intermediate datasets, …
Kleister NDA is a dataset for Key Information Extraction (KIE). The dataset contains a mix of scanned and born-digital long …
Consists of a dataset with 1000 whole scanned receipt images and annotations for the competition on scanned receipts OCR and …
Paper: Improved automatic keyword extraction given more linguistic knowledge Doi: 10.3115/1119355.1119383
KP20k is a large-scale scholarly articles dataset with 528K articles for training, 20K articles for validation and 20K articles for …
KPTimes is a large-scale dataset of news texts paired with editor-curated keyphrases. Source: [KPTimes: A Large-Scale Dataset for Keyphrase Generation …
A dataset for benchmarking keyphrase extraction and generation techniques from long document English scientific papers. The dataset has high quality …
The dataset was constructed by first finding suitable publications and then collecting keyphrases from manual annotators. Google SOAP API was …
Paper: Improved automatic keyword extraction given more linguistic knowledge Doi: 10.3115/1119355.1119383
We describe the SemEval task of extracting keyphrases and relations between them from scientific documents, which is crucial for understanding …
A diverse set of 21 relations, each covering a different set of subject-entities and a complete list of ground truth …
The CIFAR-100 dataset (Canadian Institute for Advanced Research, 100 classes) is a subset of the Tiny Images dataset and consists …
The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset. It is designed to …
Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense …
The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the …
KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile …
The PASCAL Visual Object Classes (VOC) 2012 dataset contains 20 object categories including vehicles, household, animals, and other: aeroplane, bicycle, …
DPB-5L is a Multilingual KG dataset containing 5 KGs in English, French, Japanese, Greek, and Spanish. The dataset is used …
FB15k-237 is a link prediction dataset created from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, …
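FB15k-237 was motivated by test leakage in FB15k: many test triples could be answered by simply inverting a training triple, so near-duplicate and inverse relations were removed. A hedged sketch of how such near-inverse relation pairs can be flagged (the function name and threshold are illustrative, not the published filtering procedure):

```python
from collections import defaultdict

def inverse_relation_pairs(triples, threshold=0.9):
    """Flag relation pairs (r1, r2) where most (h, r1, t) triples have a
    matching (t, r2, h) triple, i.e. r2 behaves like the inverse of r1."""
    by_rel = defaultdict(set)
    for h, r, t in triples:
        by_rel[r].add((h, t))
    pairs = []
    for r1, hts in by_rel.items():
        for r2, hts2 in by_rel.items():
            if r1 == r2:
                continue
            overlap = sum((t, h) in hts2 for h, t in hts)
            if overlap / len(hts) >= threshold:
                pairs.append((r1, r2))
    return pairs

triples = [("a", "parent_of", "b"), ("b", "child_of", "a"),
           ("c", "parent_of", "d"), ("d", "child_of", "c")]
print(inverse_relation_pairs(triples))
# [('parent_of', 'child_of'), ('child_of', 'parent_of')]
```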
WN18RR is a link prediction dataset created from WN18, which is a subset of WordNet. WN18 consists of 18 relations …
Automatic language identification is a challenging problem. Discriminating between closely related languages is especially difficult. This paper presents a machine-learning …
OpenSubtitles is a collection of multilingual parallel corpora. The dataset is compiled from a large database of movie and TV subtitles …
The Universal Dependencies (UD) project seeks to develop cross-linguistically consistent treebank annotation of morphology and syntax for multiple languages. The …
VoxForge is an open speech dataset that was set up to collect transcribed speech for use with Free and Open …
2000 HUB5 English Evaluation Transcripts was developed by the Linguistic Data Consortium (LDC) and consists of transcripts of 40 English …
Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a …
The Books3 dataset emerged as part of a broader effort to train AI models for natural language understanding and generation. …
C4 is a colossal, cleaned version of Common Crawl's web crawl corpus. It was based on Common Crawl dataset: https://commoncrawl.org. …
The Curation Corpus is a collection of 40,000 professionally-written summaries of news articles, with links to the articles themselves. Source: …
Free Law Project is a leading nonprofit organization that aims to make the legal ecosystem more equitable and competitive through …
The Hutter Prize Wikipedia dataset, also known as enwik8, is a byte-level dataset consisting of the first 100 million bytes …
The LAMBADA (LAnguage Modeling Broadened to Account for Discourse Aspects) benchmark is an open-ended cloze task which consists of about …
OpenWebText is an open-source recreation of the WebText corpus. The text is web content extracted from URLs shared on Reddit …
PhilPapers is a comprehensive resource for the philosophical community. It is an …
A collection of 385,705 scientific abstracts about Cognitive Control and their GPT-3 embeddings.
The SALMon dataset and benchmark was introduced in the paper "A Suite for Acoustic Language Model Evaluation", with the goal …
The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets …
We introduced a Vietnamese speech recognition dataset in the medical domain comprising 16h of labeled medical speech, 1000h of unlabeled …
A new multilingual language model benchmark that is composed of 40+ languages spanning several scripts and linguistic families containing round …
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good …
The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good …
This is the Big-Bench version of our language-based movie recommendation dataset https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/movie_recommendation GPT-2 has a 48.8% accuracy, chance is 25%.
The dataset contains training and evaluation data for 12 languages: Vietnamese, Romanian, Latvian, Czech, Polish, …
The Corpus of Linguistic Acceptability (CoLA) consists of 10657 sentences from 23 linguistics publications, expertly annotated for acceptability (grammaticality) by …
DaLAJ 1.0, a dataset for Linguistic Acceptability Judgments for Swedish, comprising 9,596 sentences in its first version; and the initial …
ItaCoLA is a corpus for monolingual and cross-lingual acceptability judgments which contains almost 10,000 sentences with acceptability judgments.
The Russian Corpus of Linguistic Acceptability (RuCoLA) is built from the ground up under the well-established binary LA approach. RuCoLA …
The ACM dataset contains papers published in KDD, SIGMOD, SIGCOMM, MobiCOMM, and VLDB and are divided into three classes (Database, …
The AbstRCT dataset consists of randomized controlled trials retrieved from the MEDLINE database via PubMed search. The trials are annotated …
The Aristo Tuple KB contains a collection of high-precision, domain-targeted (subject,relation,object) tuples extracted from text using a high-precision extraction pipeline, …
The Cornell eRulemaking Corpus – CDCP is an argument mining corpus annotated with argumentative structure information capturing the evaluability of …
COLLAB is a scientific collaboration dataset. A graph corresponds to a researcher’s ego network, i.e., the researcher and its collaborators …
The CiteSeer dataset consists of 3312 scientific publications classified into one of six classes. The citation network consists of 4732 …
CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph …
CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph …
CoDEx comprises a set of knowledge graph completion datasets extracted from Wikidata and Wikipedia that improve upon existing knowledge graph …
The Cora dataset consists of 2708 scientific publications classified into one of seven classes. The citation network consists of 5429 …
The DBLP is a citation network dataset. The citation data is extracted from DBLP, ACM, MAG (Microsoft Academic Graph), and …
The Dr. Inventor Multi-Layer Scientific Corpus (DRI Corpus) includes 40 Computer Graphics papers, selected by domain experts. Each paper of …
Bio-decagon is a dataset for polypharmacy side effect identification problem framed as a multirelational link prediction problem in a two-layer …
We release Douban Conversation Corpus, comprising a training data set, a development set and a test set for retrieval based …
The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of …
FB15k-237 is a link prediction dataset created from FB15k. While FB15k consists of 1,345 relations, 14,951 entities, and 592,213 triples, …
The GDELT Project monitors world events by analyzing global news from a wide range of sources. …
GO21 is a biomedical knowledge graph that models genes, proteins, drugs, and the hierarchy of the biological processes they participate …
KG20C is a Knowledge Graph about high quality papers from 20 top computer science Conferences. It can serve as a …
NELL-995 is a knowledge graph completion dataset derived from the NELL (Never-Ending Language Learning) system.
Protein roles, in terms of their cellular functions from gene ontology, in various protein-protein interaction (PPI) graphs, with each graph corresponding to …
The PubMed dataset consists of 19717 scientific publications from PubMed database pertaining to diabetes classified into one of three classes. …
SINS is a database of continuous real-life audio recordings in a home environment. The home is a vacation home and …
This is a benchmark set for the Traveling Salesman Problem (TSP) with characteristics that are different from the existing benchmark sets. …
The Unified Medical Language System (UMLS) is a comprehensive resource that integrates and disseminates essential terminology, classification standards, and coding …
The WN18 dataset has 18 relations scraped from WordNet for roughly 41,000 synsets, resulting in 141,442 triplets. It was found …
WN18RR is a link prediction dataset created from WN18, which is a subset of WordNet. WN18 consists of 18 relations …
Wikidata5m is a million-scale knowledge graph dataset with aligned corpus. This dataset integrates the Wikidata knowledge graph and Wikipedia pages. …
YAGO3-10 is a benchmark dataset for knowledge base completion. It is a subset of YAGO3 (which itself is an extension of …
The Yelp Dataset is a valuable resource for academic research, teaching, and learning. It provides a rich collection of real-world …
Although large language models (LLMs) demonstrate impressive performance for many language tasks, most of them can only handle texts a …
We introduce the MultiModal Needle-in-a-haystack (MMNeedle) benchmark, specifically designed to assess the long-context capabilities of MLLMs. Besides multi-image input, we …
ACES a dataset consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based …
The Alexa Point of View dataset is point of view conversion dataset, a parallel corpus of messages spoken to a …
FLoRes-200 doubles the existing language coverage of FLoRes-101. Given the nature of the new languages, which have less standardization and …
Itihasa is a large-scale corpus for Sanskrit to English translation containing 93,000 pairs of Sanskrit shlokas and their English translations. …
OpenSubtitles is a collection of multilingual parallel corpora. The dataset is compiled from a large database of movie and TV subtitles …
The goal of ARQMath is to advance techniques for mathematical information retrieval, in particular, retrieving answers to mathematical questions (Task …
GeoQA is a dataset for automatic geometric problem solving containing 5,010 geometric problems with corresponding annotated programs, which illustrate the …
A new large-scale plane geometry problem solving dataset called PGPS9K, labeled with both fine-grained diagram annotations and interpretable solution programs.
The AMI Meeting Corpus is a multi-modal data set comprising 100 hours of meeting recordings. It has been meticulously curated …
The Hateful Memes data set is a multimodal dataset for hateful meme detection (image + text) that contains 10,000+ new …
Introduced in "Multimodal Meme Dataset (MultiOFF) for Identifying Offensive Content in Image and Text".
Social media are interactive platforms that facilitate the creation or sharing of information, ideas or other forms of expression among …
A large, realistic multimodal dataset consisting of real personal photos and crowd-sourced questions/answers. Source: MemexQA: Visual Memex Question Answering
The Universal Morphology (UniMorph) project is a collaborative effort to improve how NLP handles complex morphology in the world’s languages. …
The dataset offers tag and mask annotations for image-text pairs from the CC3M validation set. Tag annotations denote words that …
This data is for the Mis2-KDD 2021 under review paper: Dataset of Propaganda Techniques of the State-Sponsored Information Operation of …
The Medical Information Mart for Intensive Care III (MIMIC-III) dataset is a large, de-identified and publicly-available collection of medical records. …
The RCV1 dataset is a benchmark dataset on text categorization. It is a collection of newswire articles produced by Reuters …
The Reuters-21578 dataset is a collection of documents with news articles. The original corpus has 10,369 documents and a vocabulary …
This dataset is for evaluating the task of Black-box Multi-agent Integration which focuses on combining the capabilities of multiple black-box …
Don’t Patronize Me! (DPM) is an annotated dataset with Patronizing and Condescending Language towards vulnerable communities.
Multi30K is a large-scale multilingual multimodal dataset for interdisciplinary machine learning research. It extends the Flickr30K dataset with German translations …
MultiSubs is a dataset of multilingual subtitles gathered from the OPUS OpenSubtitles dataset, which in turn was sourced from opensubtitles.org. …
ACE 2004 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2004 Automatic …
ACE 2005 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2005 Automatic …
Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. A significant …
Created by Smith et al. in 2008, the BioCreative II Gene Mention Recognition (BC2GM) Dataset contains data where participants are …
Introduced by Krallinger et al. in The CHEMDNER corpus of chemicals and drugs and its annotation principles BC4CHEMD is a …
BC5CDR corpus consists of 1500 PubMed articles with 4409 annotated chemicals, 5818 diseases and 3116 chemical-disease interactions. Source: https://www.ncbi.nlm.nih.gov/research/bionlp/Data/ Image …
BioRED is a first-of-its-kind biomedical relation extraction dataset with multiple entity types (e.g. gene/protein, disease, chemical) and relation pairs (e.g. …
Chinese Medical Named Entity Recognition, a dataset first released in CHIP2020, is used for the CMeEE task. Given a pre-defined schema, …
We introduce FUNSD-r and CORD-r in Token Path Prediction, the revised VrD-NER datasets to reflect the real-world scenarios of NER …
CoNLL++ is a corrected version of the CoNLL03 NER dataset where 5.38% of the test sentences have been fixed. Source: …
A test dataset of articles from 2020, annotated following the CoNLL-2003 NER task.
The 'Deutsche Welle corpus for Information Extraction' (DWIE) is a multi-task dataset that combines four main Information Extraction (IE) annotation …
Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. …
We introduce FUNSD-r and CORD-r in Token Path Prediction, the revised VrD-NER datasets to reflect the real-world scenarios of NER …
The first NER dataset in the field of traffic, designed to extract the characteristics and attributes of the vehicle …
The GENIA corpus is the primary collection of biomedical literature compiled and annotated within the scope of the GENIA project. …
This dataset releases a significantly sized standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 3 collapsed …
This dataset releases a significantly sized standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 11 tags.
JNLPBA is a biomedical dataset that comes from the GENIA version 3.02 corpus (Kim et al., 2003). It was created …
LINNAEUS is a general-purpose dictionary matching software, capable of processing multiple types of document formats in the biomedical domain (MEDLINE, …
The NCBI Disease corpus consists of 793 PubMed abstracts, which are separated into training (593), development (100) and test (100) …
Named Entity (NER) annotations of the Hebrew Treebank (Haaretz newspaper) corpus, including: morpheme and token level NER labels, nested mentions, …
OntoNotes 5.0 is a large corpus comprising various genres of text (news, conversational telephone speech, weblogs, usenet newsgroups, broadcast, talk …
Spoken Language Understanding Evaluation (SLUE) is a suite of benchmark tasks for spoken language understanding evaluation. It consists of limited-size …
The SciERC dataset is a collection of 500 scientific abstracts annotated with scientific entities, their relations, and coreference clusters. The abstracts …
Species-800 is a corpus for species entities, which is based on manually annotated abstracts. It comprises 800 PubMed abstracts that …
This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis …
The training and development dataset for our task was taken from previous work on the wet lab corpus (Kulkarni et al., …
This dataset contains 1304 de-identified longitudinal medical records describing 296 patients.
BioNLI is a dataset in biomedical natural language inference. This dataset contains abstracts from biomedical literature and mechanistic premises generated …
The CommitmentBank is a corpus of 1,200 naturally occurring discourses whose final sentence contains a clause-embedding predicate under an entailment …
Natural Language Inference (NLI), also called Textual Entailment, is an important task in NLP with the goal of determining the …
The HANS (Heuristic Analysis for NLI Systems) dataset contains many examples where the heuristics fail. Source: [Right for the …
JamPatoisNLI provides the first dataset for natural language inference in a creole language, Jamaican Patois. Many of the most-spoken low-resource …
KUAKE Query-Query Relevance, a dataset used to evaluate the relevance of the content expressed in two queries, is used for …
KUAKE Query Title Relevance, a dataset used to estimate the relevance of the title of a query document, is used …
LiDiRus is a diagnostic dataset that covers a large volume of linguistic phenomena, while allowing you to evaluate information systems …
MED is a new evaluation dataset that covers a wide range of monotonicity reasoning that was created by crowdsourcing and …
Microsoft Research Paraphrase Corpus (MRPC) is a corpus of 5,801 sentence pairs collected from newswire articles. Each pair is …
The MedNLI dataset consists of sentence pairs developed by physicians from the Past Medical History section of MIMIC-III clinical …
The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely …
This dataset tests the capabilities of language models to correctly capture the meaning of words denoting probabilities (WEP), e.g. words …
The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 …
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs, and each question pair is annotated with a binary …
The Russian Commitment Bank is a corpus of naturally occurring discourses whose final sentence contains a clause-embedding predicate under an …
The Recognizing Textual Entailment (RTE) datasets come from a series of textual entailment challenges. Data from RTE1, RTE2, RTE3 and …
The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of …
The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are …
The SciTail dataset is an entailment dataset created from multiple-choice science exams and web sentences. Each question and the correct …
Textual Entailment Recognition has been proposed recently as a generic task that captures major semantic inference needs across many NLP …
TabFact is a large-scale dataset which consists of 117,854 manually annotated statements with regard to 16,573 Wikipedia tables, their relations …
The WNLI dataset is a part of the GLUE benchmark used for Natural Language Inference (NLI). It contains pairs of …
XWINO is a multilingual collection of Winograd Schemas in six languages that can be used for evaluation of cross-lingual commonsense …
e-SNLI is used for various goals, such as obtaining full sentence justifications of a model's decisions, improving universal sentence representations …
Ego4D is a massive-scale egocentric video dataset and benchmark suite. It offers 3,025 hours of daily life activity video spanning …
General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and …
Legal General Language Understanding Evaluation (LexGLUE) benchmark is a collection of datasets for evaluating model performance across a diverse set …
STREUSLE stands for Supersense-Tagged Repository of English with a Unified Semantics for Lexical Expressions. The text is from the web …
ACE 2004 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2004 Automatic …
ACE 2005 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2005 Automatic …
BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese and German. In contrast to …
CaRB [Bhardwaj et al., 2019] is developed by re-annotating the dev and test splits of OIE2016 via crowd-sourcing. Besides improving …
LSOIE is a large-scale OpenIE data converted from QA-SRL 2.0 in two domains, i.e., Wikipedia and Science. It is 20 …
OIE2016 is the first large-scale OpenIE benchmark. It is created by automatic conversion from QA-SRL [He et al., 2015], a …
The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall …
We manually performed the task of Open Information Extraction on 5 short documents, elaborating tentative guidelines for the task, and …
The ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts about humans asking …
Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents …
This dataset is for evaluating the performance of intent classification systems in the presence of "out-of-scope" queries, i.e., queries that …
The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of …
DuReader is a large-scale open-domain Chinese machine reading comprehension dataset. The dataset consists of 200K questions, 420K answers and 1M …
ELI5 is a dataset for long-form question answering. It contains 270K complex, diverse questions that require explanatory multi-sentence answers. Web …
The Natural Questions corpus is a question answering dataset containing 307,373 training examples, 7,830 development examples, and 7,842 test examples. …
SearchQA was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering …
The Textbook Question Answering (TQA) dataset is drawn from middle school science curricula. It consists of 1,076 lessons from Life Science, Earth …
TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and …
The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It …
Introduced by Singh, Sumeet S., "Teaching Machines to Code: Neural Markup Generation with Visual Attention," arXiv:1802.05415 (2018). …
A prebuilt dataset for OpenAI's task for an image-to-LaTeX system. Includes a total of ~100k formulas and images split into train, validation …
Paralex learns from a collection of 18 million question-paraphrase pairs scraped from WikiAnswers.
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs, and each question pair is annotated with a binary …
This is a paraphrasing dataset created using the adversarial paradigm. A task was designed called the Adversarial Paraphrasing Task (APT) …
Paraphrase and Semantic Similarity in Twitter (PIT) presents a constructed Twitter Paraphrase Corpus that contains 18,762 sentence pairs. Source: [SemEval-2015 …
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs, and each question pair is annotated with a binary …
Twitter News URL Corpus is the largest human-labeled paraphrase corpus to date, with 51,524 sentence pairs, and the first cross-domain benchmarking …
WikiHop is a multi-hop question-answering dataset. The query of WikiHop is constructed with entities and relations from WikiData, while supporting …
The Yelp Dataset is a valuable resource for academic research, teaching, and learning. It provides a rich collection of real-world …
Danish Dependency Treebank (DaNE) is a named entity annotation for the Danish Universal Dependencies treebank using the CoNLL-2003 annotation scheme. …
This dataset is for evaluation of morphosyntactic analyzers.
The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall …
XGLUE is an evaluation benchmark composed of 11 tasks that span 19 languages. For each task, the training …
The MS MARCO (Microsoft MAchine Reading Comprehension) is a collection of datasets focused on deep learning in search. The first …
We construct a dataset named CPED from 40 Chinese TV shows. CPED consists of multisource knowledge related to empathy and …
The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators. Source: [Guiding …
Visual Genome contains Visual Question Answering data in a multi-choice setting. It consists of 101,174 images from MSCOCO with 1.7 …
KP20k is a large-scale scholarly articles dataset with 528K articles for training, 20K articles for validation and 20K articles for …
KPTimes is a large-scale dataset of news texts paired with editor-curated keyphrases. Source: [KPTimes: A Large-Scale Dataset for Keyphrase Generation …
The Arabic dataset is scraped mainly from الموسوعة الشعرية (the Poetry Encyclopedia) and الديوان (Aldiwan). After merging both, the total number of verses is …
A benchmark dataset that consists of 99,000+ sentences for Chinese polyphone disambiguation. Source: [g2pM: A Neural Grapheme-to-Phoneme Conversion Package for …
The Caltech101 dataset contains images from 101 object categories (e.g., “helicopter”, “elephant”, and “chair”) and a background category that …
The Describable Textures Dataset (DTD) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by …
Eurosat is a dataset and deep learning benchmark for land use and land cover classification. The dataset is based on …
FGVC-Aircraft contains 10,200 images of aircraft, with 100 images for each of 102 different aircraft model variants, most of which …
The Food-101 dataset consists of 101 food categories with 750 training and 250 test images per category, making a total …
The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset has been used in the …
The ImageNet-A dataset consists of real-world, unmodified, and naturally occurring examples that are misclassified by ResNet models. Source: [On Robustness …
ImageNet-R(endition) contains art, cartoons, deviantart, graffiti, embroidery, graphics, origami, paintings, patterns, plastic objects, plush objects, sculptures, sketches, tattoos, toys, and …
Powered by the ImageNet dataset, unsupervised learning on large-scale data has made significant advances for classification tasks. There are two …
Oxford 102 Flower is an image classification dataset consisting of 102 flower categories. The flowers were chosen to be ones commonly …
The Oxford-IIIT Pet Dataset has 37 categories with roughly 200 images for each class. The images have a large variations …
The Scene UNderstanding (SUN) database contains 899 categories and 130,519 images. There are 397 well-sampled categories to evaluate numerous state-of-the-art …
The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. …
UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These …
AviationQA is introduced in the paper titled "There is No Big Brother or Small Brother: Knowledge Infusion in Language Models …
BIG-Bench Hard (BBH) is a subset of the BIG-Bench, a diverse evaluation suite for language models. BBH focuses on a …
BLURB is a collection of resources for biomedical natural language processing. In general domains such as newswire and the Web, …
The Bamboogle dataset is a collection of questions that was constructed to investigate the ability of language models to perform …
BioASQ is a question answering dataset. Instances in the BioASQ dataset are composed of a question (Q), human-annotated answers (A), …
BoolQ is a question answering dataset for yes/no questions containing 15942 examples. These questions are naturally occurring – they are …
The COmmonsense Dataset Adversarially-authored by Humans (CODAH) is an evaluation set for commonsense question-answering in the sentence completion style of …
The Choice Of Plausible Alternatives (COPA) evaluation provides researchers with a tool for assessing progress in open-domain commonsense causal reasoning. …
CaseHOLD (Case Holdings On Legal Decisions) is a law dataset comprising over 53,000 multiple choice questions to identify the …
The dataset covers Hindi and Tamil, collected without the use of translation. It provides a realistic information-seeking task with questions …
CheGeKa is a Jeopardy!-like Russian QA dataset collected from the official Russian quiz database ChGK. Motivation The task can be …
CliCR is a new dataset for domain-specific reading comprehension, with around 100,000 cloze queries constructed from clinical case …
CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure …
A filtered version of CronQuestions that better demonstrates the model’s inference ability for complex temporal questions.
ComplexWebQuestions is a dataset for answering complex questions that require reasoning over multiple web snippets. It contains a large set …
ConditionalQA is a Question Answering (QA) dataset that contains complex questions with conditional answers, i.e. the answers are only applicable …
ConvFinQA is a dataset designed to study the chain of numerical reasoning in conversational question answering. The dataset contains 3892 …
CRONQUESTIONS, the Temporal KGQA dataset, consists of two parts: a KG with temporal annotations, and a set of natural language …
DROP (Discrete Reasoning Over Paragraphs) is a crowdsourced, adversarially created, 96k-question benchmark, in which a system must resolve references in a …
DaNetQA is a question answering dataset for yes/no questions. These questions are naturally occurring: they are generated in unprompted and …
DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in …
EgoTaskQA benchmark contains 40K balanced question-answer pairs selected from 368K questions programmatically generated over 2K egocentric videos. It …
FEVER is a publicly available dataset for fact extraction and verification against textual sources. It consists of 185,445 claims manually …
A French Native Reading Comprehension dataset of questions and answers on a set of Wikipedia articles that consists of 25,000+ …
FairytaleQA is a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Annotated by educational experts based on an …
FinQA is a new large-scale dataset with Question-Answering pairs over Financial reports, written by financial experts. The dataset contains 8,281 …
GraphQuestions is a characteristic-rich dataset designed for factoid question answering. The dataset aims to provide a systematic way of constructing …
HellaSwag is a challenge dataset for evaluating commonsense NLI that is especially hard for state-of-the-art models, though its questions are …
HotpotQA is a question answering dataset collected on the English Wikipedia, containing about 113K crowd-sourced questions that are constructed to …
A new large-scale question-answering dataset that requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table and …
JaQuAD (Japanese Question Answering Dataset) is a question answering dataset in Japanese that consists of 39,696 extractive question-answer pairs on …
A large-scale dataset for Complex KBQA. Source: [KQA Pro: A Large-Scale Dataset with Interpretable Programs and Accurate SPARQLs for Complex …
MMLU (Massive Multitask Language Understanding) is a new benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively …
The MRQA (Machine Reading for Question Answering) dataset is a dataset for evaluating the generalization capabilities of reading comprehension systems. …
The MS MARCO (Microsoft MAchine Reading Comprehension) is a collection of datasets focused on deep learning in search. The first …
MapEval-Textual contains 300 question-answer pairs. The task is to answer questions by fetching the necessary information using external Map APIs.
MapEval-Textual contains 300 context-question-answer triplets. The necessary geospatial information is provided in the context. The task is to answer questions …
This dataset code generates mathematical question and answer pairs, from a range of question types at roughly school-level difficulty. This …
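As a toy illustration of what such generated question-answer pairs look like, here is a minimal sketch in Python. This is not the dataset's actual generator; the real module names, question types, and difficulty controls are not reproduced here.

```python
import operator
import random

# Toy generator in the spirit of the dataset's school-level arithmetic
# modules; illustrative only, not the real generation code.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def generate_arithmetic_pair(rng: random.Random) -> tuple[str, str]:
    a, b = rng.randint(2, 99), rng.randint(2, 99)
    op = rng.choice(list(OPS))
    return f"What is {a} {op} {b}?", str(OPS[op](a, b))

rng = random.Random(0)
for _ in range(3):
    question, answer = generate_arithmetic_pair(rng)
    print(question, "->", answer)
```

Seeding the generator makes the sampled pairs reproducible, which matters when train/interpolate/extrapolate splits must stay disjoint.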
Multiple choice question answering based on the United States Medical License Exams (USMLE). The dataset is collected from the professional …
The MetaQA dataset consists of a movie ontology derived from the WikiMovies Dataset and three sets of question-answer pairs written …
A machine reading comprehension (MRC) dataset with discourse structure built over multiparty dialog. Molweni's source is sampled from the Ubuntu Chat …
MultiQ is a multi-hop QA dataset for Russian, suitable for general open-domain question answering, information retrieval, and reading comprehension tasks. …
MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions, i.e., questions that can be answered by …
MULTITQ is a large-scale dataset featuring ample relevant facts and multiple temporal granularities.
NExT-QA is a VideoQA benchmark targeting the explanation of video contents. It challenges QA models to reason about the causal …
The NarrativeQA dataset includes a list of documents with Wikipedia summaries, links to full stories, and questions and answers. Source: …
The Natural Questions corpus is a question answering dataset containing 307,373 training examples, 7,830 development examples, and 7,842 test examples. …
The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs. * Documents are CNN news articles. …
The Open Table-and-Text Question Answering (OTT-QA) dataset contains open questions which require retrieving tables and text from the web to …
OpenBookQA is a new kind of question-answering dataset modeled after open book exams for assessing human understanding of a subject. …
PIQA is a dataset for commonsense reasoning, and was created to investigate the physical knowledge of existing models in NLP. …
We present PeerQA, a real-world, scientific, document-level Question Answering (QA) dataset. PeerQA questions have been sourced from peer reviews, which …
PopQA is an open-domain QA dataset with 14k QA pairs with fine-grained Wikidata entity ID, Wikipedia page views, and relationship …
PubChemQA consists of molecules and their corresponding textual descriptions from PubChem. It contains a single type of question, i.e., please …
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary …
QASPER is a dataset for question answering on scientific research papers. It consists of 5,049 questions over 1,585 Natural Language …
Question Answering in Context is a large-scale dataset that consists of around 14K crowdsourced Question Answering dialogs with 98K question-answer …
QuALITY (Question Answering with Long Input Texts, Yes!) is a multiple-choice question answering dataset for long document comprehension. The dataset …
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs, and each question pair is annotated with a binary …
The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 …
Logical reasoning is an important ability to examine, analyze, and critically evaluate arguments as they occur in ordinary language as …
RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from …
RuOpenBookQA is a QA dataset with multiple-choice elementary-level science questions which probe the understanding of core science facts. Motivation RuOpenBookQA …
SCDE is a human-created sentence cloze dataset, collected from public school English examinations in China. The task requires a model …
Social Interaction QA (SIQA) is a question-answering benchmark for testing social commonsense intelligence. Contrary to many prior benchmarks that focus …
SQA3D is a dataset for embodied scene understanding, where an agent needs to comprehend the scene it is situated in from an …
The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct …
Given a partial description like "she opened the hood of the car," humans can reason about the situation and anticipate …
A large-scale analogue of Stanford SQuAD in Russian, a valuable resource that has not been …
This dataset draws on the “Mental Health” forum, a forum dedicated to people suffering from schizophrenia and other mental disorders. Relevant posts …
SimpleQuestions is a large-scale factoid question answering dataset. It consists of 108,442 natural language questions, each paired with a corresponding …
A Benchmark for Robust Multi-Hop Spatial Reasoning in Texts
Representation and learning of commonsense knowledge is one of the foundational problems in the quest to enable deep language understanding. …
StrategyQA is a question answering benchmark where the required reasoning steps are implicit in the question, and should be inferred …
TAT-QA (Tabular And Textual dataset for Question Answering) is a large-scale QA dataset, aiming to stimulate progress of QA research …
Existing benchmarks for temporal QA focus on a single information source (either a KB or a text corpus), and include …
TempQA-WD is a benchmark dataset for temporal reasoning designed to encourage research in extending the present approaches to target a …
Here, we take a key step in this direction and release a new benchmark, TempQuestions, containing 1,271 questions, that are …
Question answering over knowledge graphs (KG-QA) is a vital topic in IR. Questions with temporal intent are a special class …
Torque is an English reading comprehension benchmark built on 3.2k news snippets with 21k human-generated questions querying temporal relationships. Source: …
Text Retrieval Conference Question Answering (TrecQA) is a dataset created from the TREC-8 (1999) to TREC-13 (2004) Question Answering tracks. …
TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and …
TruthfulQA is a benchmark to measure whether a language model is truthful in generating answers to questions. The benchmark comprises …
With social media becoming an increasingly popular platform on which news and real-time events are reported, developing automated question answering …
UniProtQA consists of proteins and textual queries about their functions and properties. The dataset is constructed from UniProt, and consists …
The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It …
The WebQuestionsSP dataset is released as part of our ACL-2016 paper “The Value of Semantic Parse Labeling for Knowledge Base …
WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K …
WikiHop is a multi-hop question-answering dataset. The query of WikiHop is constructed with entities and relations from WikiData, while supporting …
The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain …
WikiSQL consists of a corpus of 87,726 hand-annotated SQL query and natural language question pairs. These SQL queries are further …
WikiTableQuestions is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was …
We aim to improve the bAbI benchmark as a means of developing intelligent dialogue agents. To this end, we propose …
FairytaleQA is a dataset focusing on narrative comprehension of kindergarten to eighth-grade students. Annotated by educational experts based on an …
The Natural Questions corpus is a question answering dataset containing 307,373 training examples, 7,830 development examples, and 7,842 test examples. …
The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct …
TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and …
QC-Science contains 47832 question-answer pairs belonging to the science domain tagged with labels of the form subject - chapter - …
We have created three new Reading Comprehension datasets constructed using an adversarial model-in-the-loop. We use three different models; BiDAF (Seo …
We present a reading comprehension challenge in which questions can only be answered by taking into account information from multiple …
The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 …
Our shared task has three subtasks. Subtasks 1 and 2 focus on evaluating machine learning models' performance with regard …
Logical reasoning is an important ability to examine, analyze, and critically evaluate arguments as they occur in ordinary language as …
ROOR is a reading order prediction (ROP) benchmark which annotates layout reading order as ordering relations. Layout reading order is …
ReadingBank is a benchmark dataset for reading order detection built with weak supervision from Word documents, which contains 500K document …
EmoCause is a dataset of annotated emotion cause words in emotional situations from the EmpatheticDialogues valid and test set. The …
RECCON is a dataset for the task of recognizing emotion cause in conversations. Source: Recognizing Emotion Cause in Conversations
The Iris flower data set or Fisher's Iris data set is a multivariate data set introduced by the British statistician, …
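To make the layout concrete, here is a minimal sketch in Python over a handful of actual rows from the dataset (four measurements plus the species label), computing a per-species summary:

```python
from collections import defaultdict

# A few rows in the classic Iris layout:
# (sepal length, sepal width, petal length, petal width, species), in cm.
rows = [
    (5.1, 3.5, 1.4, 0.2, "setosa"),
    (4.9, 3.0, 1.4, 0.2, "setosa"),
    (7.0, 3.2, 4.7, 1.4, "versicolor"),
    (6.4, 3.2, 4.5, 1.5, "versicolor"),
    (6.3, 3.3, 6.0, 2.5, "virginica"),
    (5.8, 2.7, 5.1, 1.9, "virginica"),
]

def mean_petal_length_by_species(rows):
    # Group petal lengths by species, then average each group.
    by_species = defaultdict(list)
    for sl, sw, pl, pw, species in rows:
        by_species[species].append(pl)
    return {s: sum(v) / len(v) for s, v in by_species.items()}

print(mean_petal_length_by_species(rows))
```

Petal length separates the three species well, which is why this dataset remains a standard first example for classification.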
The AbstRCT dataset consists of randomized controlled trials retrieved from the MEDLINE database via PubMed search. The trials are annotated …
The Cornell eRulemaking Corpus – CDCP is an argument mining corpus annotated with argumentative structure information capturing the evaluability of …
The Dr. Inventor Multi-Layer Scientific Corpus (DRI Corpus) includes 40 Computer Graphics papers, selected by domain experts. Each paper of …
The Discovery datasets consists of adjacent sentence pairs (s1,s2) with a discourse marker (y) that occurred at the beginning of …
The FewRel (Few-Shot Relation Classification Dataset) contains 100 relations and 70,000 instances from Wikipedia. The dataset is divided into three …
TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used …
2010 i2b2/VA is a biomedical dataset for relation classification and entity typing.
The Sixth Informatics for Integrating Biology and the Bedside (i2b2) Natural Language Processing Challenge for Clinical Records focused on the …
ACE 2004 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2004 Automatic …
ACE 2005 Multilingual Training Corpus contains the complete set of English, Arabic and Chinese training data for the 2005 Automatic …
Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. A significant …
BioRED is a first-of-its-kind biomedical relation extraction dataset with multiple entity types (e.g. gene/protein, disease, chemical) and relation pairs (e.g. …
The BioCreative V CDR task corpus is manually annotated for chemicals, diseases and chemical-induced disease (CID) relations. It contains the …
ChemProt consists of 1,820 PubMed abstracts with chemical-protein interactions annotated by domain experts and was used in the BioCreative VI …
The CoNLL04 dataset is a benchmark dataset used for relation extraction tasks. It contains 1,437 sentences, each of which has …
The DDIExtraction 2013 task relies on the DDI corpus which contains MedLine abstracts on drug-drug interactions as well as documents …
The 'Deutsche Welle corpus for Information Extraction' (DWIE) is a multi-task dataset that combines four main Information Extraction (IE) annotation …
This is the dataset used for classifying Gene-Disease relationship types from sentences. The dataset consists of 3 files: * manually_annotated_set.xlsx …
DocRED (Document-Level Relation Extraction Dataset) is a relation extraction dataset constructed from Wikipedia and Wikidata. Each document in the dataset …
Form Understanding in Noisy Scanned Documents (FUNSD) comprises 199 real, fully annotated, scanned forms. The documents are noisy and vary …
The FewRel (Few-Shot Relation Classification Dataset) contains 100 relations and 70,000 instances from Wikipedia. The dataset is divided into three …
GAD, or Gene Associations Database, is a corpus of gene-disease associations curated from genetic association studies.
The gene-disease associations corpus contains 30,192 titles and abstracts from PubMed articles that have been automatically labelled for genes, diseases …
JNLPBA is a biomedical dataset that comes from the GENIA version 3.02 corpus (Kim et al., 2003). It was created …
A dataset from "A Hierarchical Framework for Relation Extraction with Reinforcement Learning".
Preprocessed version of NYT11. Each relational triple is formatted as follows: rtext: relation type, em1: source entity mention, …
Phenotype-Gene Relations (PGR) is a corpus that consists of 1712 abstracts, 5676 human phenotype annotations, 13835 gene annotations, and 4283 …
Wikipedia abstracts automatically annotated with WikiData entities and relations that are entailed by the text. Over 9 million triplets.
The Re-TACRED dataset is a significantly improved version of the TACRED dataset for relation extraction. Using new crowd-sourced labels, Re-TACRED …
The SciERC dataset is a collection of 500 scientific abstracts annotated with scientific entities, their relations, and coreference clusters. The abstracts …
The dataset for the SemEval-2010 Task 8 is a dataset for multi-way classification of mutually exclusive semantic relations between pairs …
TACRED is a large-scale relation extraction dataset with 106,264 examples built over newswire and web text from the corpus used …
The TACRED-Revisited dataset improves the crowd-sourced TACRED dataset for relation extraction by relabeling the dev and test sets using expert …
The training and development dataset for our task was taken from previous work on wet lab corpus (Kulkarni et al., …
The WebNLG corpus comprises sets of triplets describing facts (entities and relations between them) and the corresponding facts in …
It contains about 28K medium-quality animal images belonging to 10 categories: dog, cat, horse, spider, butterfly, chicken, sheep, cow, …
SciDocs evaluation framework consists of a suite of evaluation tasks designed for document-level tasks. Source: Allen Institute for AI
ArgSciChat is an argumentative dialogue dataset. It consists of 498 messages collected from 41 dialogues on 20 scientific papers. It …
The main goal of the data collection is to acquire highly natural conversations that cover a wide variety of styles …
Next generation task-oriented dialog systems need to understand conversational contexts with their perceived surroundings, to effectively help users in the …
HotpotQA is a question answering dataset collected on the English Wikipedia, containing about 113K crowd-sourced questions that are constructed to …
In this project, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with …
The dataset contains single-shot videos taken from moving cameras in underwater environments. The first shard of a new Marine Video …
The Natural Questions corpus is a question answering dataset containing 307,373 training examples, 7,830 development examples, and 7,842 test examples. …
Outside Knowledge Visual Question Answering (OK-VQA) includes more than 14,000 questions that require external knowledge to answer. Source: [OK-VQA: A …
This dataset contains 21,889 outfits from polyvore.com, in which 17,316 are for training, 1,497 for validation and 3,076 for testing. …
The task of PubMedQA is to answer research questions with yes/no/maybe (e.g.: Do preoperative statins reduce atrial fibrillation after coronary …
PubMedQA-MetaGen: Metadata-Enriched PubMedQA Corpus Dataset Summary PubMedQA-MetaGen is a metadata-enriched version of the PubMedQA biomedical question-answering dataset, created using the …
Quora Question Pairs (QQP) dataset consists of over 400,000 question pairs, and each question pair is annotated with a binary …
The ToolLens dataset consists of 18,770 concise yet intentionally multifaceted queries, each associated with 1 to 3 verified tools out …
A dataset for evaluating a system's understanding of given passages.
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
The expansion of social networks has accelerated the transmission of information and news across communities. Over the past few …
MUStARD++ is a multimodal sarcasm detection dataset (MUStARD) pre-annotated with 9 emotions. It can be used for the task of …
This dataset is an extension of MASAC, a multimodal, multi-party, Hindi-English code-mixed dialogue dataset compiled from the popular Indian TV …
iSarcasm is a dataset of tweets, each labelled as either sarcastic or non_sarcastic. Each sarcastic tweet is further labelled for …
The ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts about humans asking …
A large and realistic natural language question answering dataset. Source: Measuring Compositional Generalization: A Comprehensive Method on Realistic Data
GraphQuestions is a characteristic-rich dataset designed for factoid question answering. The dataset aims to provide a systematic way of constructing …
SParC is a large-scale dataset for complex, cross-domain, and context-dependent (multi-turn) semantic parsing and text-to-SQL task (interactive natural language interfaces …
The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has …
The WebQuestionsSP dataset is released as part of our ACL-2016 paper “The Value of Semantic Parse Labeling for Knowledge Base …
WikiSQL consists of a corpus of 87,726 hand-annotated SQL query and natural language question pairs. These SQL queries are further …
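WikiSQL stores each query as a logical form rather than a raw SQL string; a minimal sketch of rendering one such form into SQL follows. The `sel`/`agg`/`conds` field names follow the released annotation format, but the exact operator tables here are assumptions to verify against the WikiSQL repository.

```python
# Assumed operator tables (index positions may differ in the actual release).
AGG_OPS = ["", "MAX", "MIN", "COUNT", "SUM", "AVG"]
COND_OPS = ["=", ">", "<"]

def to_sql(query, columns, table="t"):
    """Render a WikiSQL-style logical form into a SQL string.

    query: {"sel": column index, "agg": aggregation index,
            "conds": [[column index, operator index, value], ...]}
    """
    col = columns[query["sel"]]
    sel = f"{AGG_OPS[query['agg']]}({col})" if query["agg"] else col
    where = " AND ".join(
        f"{columns[c]} {COND_OPS[o]} {v!r}" for c, o, v in query["conds"]
    )
    sql = f"SELECT {sel} FROM {table}"
    return f"{sql} WHERE {where}" if where else sql

cols = ["Player", "No.", "Position"]
print(to_sql({"sel": 0, "agg": 0, "conds": [[2, 0, "Guard"]]}, cols))
# SELECT Player FROM t WHERE Position = 'Guard'
```

Restricting generation to this small grammar (one selected column, one aggregation, conjunctive conditions) is what makes WikiSQL tractable for sequence-to-SQL models.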
WikiTableQuestions is a question answering dataset over semi-structured tables. It is comprised of question-answer pairs on HTML tables, and was …
A new shared task of semantic retrieval from legal texts, in which a so-called contract discovery is to be performed, …
The task builds on the CoNLL-2008 task and extends it to multiple languages. The core of the task is to …
The BIOSSES data set comprises a total of 100 sentence pairs, all of which were selected from the "[TAC2 Biomedical Summarization Track …
CHIP Semantic Textual Similarity, a dataset for sentence similarity in the non-i.i.d. (non-independent and identically distributed) setting, is used for …
The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of …
Crisscrossed Captions (CxC) contains 247,315 human-labeled annotations including positive and negative associations between image pairs, caption pairs and image-caption pairs. …
Microsoft Research Paraphrase Corpus (MRPC) is a corpus of 5,801 sentence pairs collected from newswire articles. Each pair is …
MTEB is a benchmark that spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 …
The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of …
STS Benchmark comprises a selection of the English datasets used in the STS tasks organized in the context of SemEval …
SentEval is a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary …
EC-FUNSD is introduced in [arXiv:2402.02379] as a benchmark of semantic entity recognition (SER) and entity linking (EL), designed for the …
Form Understanding in Noisy Scanned Documents (FUNSD) comprises 199 real, fully annotated, scanned forms. The documents are noisy and vary …
HellaSwag is a challenge dataset for evaluating commonsense NLI that is especially hard for state-of-the-art models, though its questions are …
EconLogicQA is a benchmark designed to test the sequential reasoning skills of large language models (LLMs) in economics, business, and …
This repository contains the code, data, and models of the paper titled "BᴀɴɢʟᴀBᴏᴏᴋ: A Large-scale Bangla Dataset for Sentiment Analysis …
The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels. It is greatly …
DynaSent is an English-language benchmark task for ternary (positive/negative/neutral) sentiment analysis. DynaSent combines naturally occurring sentences with sentences created using …
The Hotel Arabic-Reviews Dataset (HARD) contains 93,700 hotel reviews in Arabic. The hotel reviews were collected from the Booking.com website …
The IMDb Movie Reviews dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database …
MR Movie Reviews is a dataset for use in sentiment-analysis experiments. Available are collections of movie-review documents labeled with respect …
Spoken Language Understanding Evaluation (SLUE) is a suite of benchmark tasks for spoken language understanding evaluation. It consists of limited-size …
SST-5 is the Stanford Sentiment Treebank 5-way classification dataset (positive, somewhat positive, neutral, somewhat negative, negative). To create SST-3 (positive, …
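Collapsing the 5-way SST labels into 3 classes can be sketched as below; merging the "somewhat" labels into their polar neighbours is a plausible assumption here, and the exact SST-3 mapping should be checked against the source.

```python
# Assumed 5-way -> 3-way collapse: "somewhat" labels merge into their
# polar neighbours, neutral stays as-is. Verify against the SST-3 source.
FIVE_TO_THREE = {
    "positive": "positive",
    "somewhat positive": "positive",
    "neutral": "neutral",
    "somewhat negative": "negative",
    "negative": "negative",
}

def collapse(labels):
    """Map a list of 5-way SST labels onto the 3-way scheme."""
    return [FIVE_TO_THREE[label] for label in labels]

print(collapse(["somewhat positive", "neutral", "negative"]))
# ['positive', 'neutral', 'negative']
```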
This is a dataset for 3-way sentiment classification of reviews (negative, neutral, positive). It is a merge of [Stanford Sentiment …
TweetEval introduces an evaluation framework consisting of seven heterogeneous Twitter-specific classification tasks. Source: [TweetEval: Unified Benchmark and Comparative Evaluation for …
The ATIS (Airline Travel Information Systems) is a dataset consisting of audio recordings and corresponding manual transcripts about humans asking …
We collect utterances from the Chinese Artificial Intelligence Speakers (CAIS), and annotate them with slot tags and intent labels. The …
The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenges focused on improving the state of the art …
MASSIVE is a parallel dataset of > 1M utterances across 51 languages with annotations for the Natural Language Understanding tasks …
Dataset is constructed from single intent dataset ATIS. This is a publicly available multi intent dataset, which can be downloaded …
Dataset is constructed from single intent dataset SNIPS. This is a publicly available multi intent dataset, which can be downloaded …
This dataset contains 21,889 outfits from polyvore.com, in which 17,316 are for training, 1,497 for validation and 3,076 for testing. …
In the paper, to bridge the research gap, we propose a new and important task, Profile-based Spoken Language Understanding (ProSLU), …
A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets. …
The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of …
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
CoDesc is a large dataset of 4.2M Java source code snippets paired with natural language descriptions, drawn from code search, and …
The CodeSearchNet Corpus is a large dataset of functions with associated documentation written in Go, Java, JavaScript, PHP, Python, and …
The Java dataset introduced in DeepCom (Deep Code Comment Generation), commonly used to evaluate automated code summarization.
The Java dataset introduced in Hybrid-DeepCom (Deep code comment generation with hybrid lexical and syntactical information), commonly used to evaluate …
The Python dataset introduced in the Parallel Corpus paper ([A Parallel Corpus of Python Functions and Documentation Strings for Automated …
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
This dataset encompasses 265 speeches (over 200,000 tokens) from the German Bundestag, primarily from the 19th legislative term (2017-2021), given …
MuST-C currently represents the largest publicly available multilingual corpus (one-to-many) for speech translation. It covers eight language directions, from English …
The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to …
Climate change poses critical challenges globally, disproportionately affecting low-income countries that often lack resources and linguistic representation on the international …
FNC-1 was designed as a stance detection dataset and it contains 75,385 labeled headline and article pairs. The pairs are …
MGTAB is the first standardized graph-based benchmark for stance and bot detection. MGTAB contains 10,199 expert-annotated users and 7 types …
P-Stance: A Large Dataset for Stance Detection in the Political Domain (2021)
Perspectrum is a dataset of claims, perspectives and evidence, making use of online debate websites to create the initial data …
Includes Russian tweets and news comments from multiple sources, covering multiple stories, as well as text classification approaches to stance …
Fact-checking (FC) articles, which contain pairs (a multimodal tweet and an FC article) from snopes.com. Source: [Where Are the Facts? Searching for …
VAST consists of a large range of topics covering broad themes, such as politics (e.g., ‘a Palestinian state’), education (e.g., …
CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs …
WritingPrompts is a large dataset of 300K human-written stories paired with writing prompts from an online forum. Source: [Hierarchical Neural …
Czech subjectivity dataset of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper description …
Available are collections of movie-review documents labeled with respect to their overall sentiment polarity (positive or negative) or subjective rating …
TabFact is a large-scale dataset which consists of 117,854 manually annotated statements with regard to 16,573 Wikipedia tables, their relations …
DART is a large dataset for open-domain structured data record to text generation. DART consists of 82,191 examples across different …
End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from …
This dataset gathers 728,321 biographies from English Wikipedia. It aims at evaluating text generation algorithms. For each article, we provide …
This dataset gathers 428,748 person and 12,236 animal infoboxes with descriptions based on a Wikipedia dump (2018/04/01) and Wikidata (2018/04/12).
The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. …
Within the SemEval-2013 evaluation exercise, the TempEval-3 shared task aims to advance research on temporal information processing. It follows on …
A temporal counterfactual dataset composed of 1000 short and natural video-caption pairs.
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
AG News (AG’s News Corpus) is a subdataset of AG's corpus of news articles constructed by assembling titles and description …
Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. A significant …
In NLP, text classification is one of the primary problems we try to solve and its uses in language analyses …
Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a …
Dataset composed of online banking queries annotated with their corresponding intents. BANKING77 dataset provides a very fine-grained set of intents …
BLURB is a collection of resources for biomedical natural language processing. In general domains such as newswire and the Web, …
The Balanced Choice of Plausible Alternatives dataset is a benchmark for training machine learning models that are robust to superficial …
DBpedia (from "DB" for "database") is a project aiming to extract structured content from the information created in the Wikipedia …
Covers multiple aspects of the issue. Each post in the dataset is annotated from three different perspectives: the basic, commonly …
The IMDb Movie Reviews dataset is a binary sentiment analysis dataset consisting of 50,000 reviews from the Internet Movie Database …
LoT-insts contains over 25k classes whose frequencies are naturally long-tail distributed. Its test set is drawn from four different subsets: many-, medium-, …
MR Movie Reviews is a dataset for use in sentiment-analysis experiments. Available are collections of movie-review documents labeled with respect …
MTEB is a benchmark that spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 …
Ohsumed includes medical abstracts from the MeSH categories of the year 1991. In [Joachims, 1997], the first 20,000 …
The Overruling dataset is a law dataset corresponding to the task of determining when a sentence is overruling a prior …
The RCV1 dataset is a benchmark dataset on text categorization. It is a collection of newswire articles produced by Reuters …
The Sequence labellIng evaLuatIon benChmark fOr spoken laNguagE (SILICONE) benchmark is a collection of resources for training, evaluating, and analyzing …
The Stanford Sentiment Treebank is a corpus with fully labeled parse trees that allows for a complete analysis of the …
Data set constructed from YouTube comments (72,098 comments posted by 43,859 users on 623 videos relevant to the crisis)
A question type classification dataset with 6 classes for questions about a person, location, numeric information, etc. The test split …
The Terms of Service dataset is a law dataset corresponding to the task of identifying whether contractual terms are potentially …
We introduce a large semi-automatically generated dataset of ~400,000 descriptive sentences about commonsense knowledge that can be true or false …
Education is increasingly data-driven, and the ability to analyse and adapt educational materials quickly and effectively is important for keeping …
The Yahoo! Answers topic classification dataset is constructed using 10 largest main categories. Each class contains 140,000 training samples and …
Benchmark dataset for abstracts and titles of 100,000 ArXiv scientific papers. This dataset contains 10 classes and is balanced (exactly …
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
MTEB is a benchmark that spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 …
CNN/Daily Mail is a dataset for text summarization. Human generated abstractive summary bullets were generated from news stories in CNN …
COCO Captions contains over one and a half million captions describing over 330,000 images. For the training and validation images, …
CSL is a synthetic dataset introduced in Murphy et al. (2019) to test the expressivity of GNNs. In particular, graphs …
CommonGen is constructed through a combination of crowdsourced and existing caption corpora, and consists of 79k commonsense descriptions over 35k unique …
Czech restaurant information is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It …
DART is a large dataset for open-domain structured data record to text generation. DART consists of 82,191 examples across different …
DailyDialog is a high-quality multi-turn open-domain English dialog dataset. It contains 13,118 dialogues split into a training set with 11,118 …
Paper | Github | Dataset | Model As part of our research efforts toward making LLMs safer for public …
LCSTS is a large Chinese short text summarization corpus constructed from the Chinese microblogging website Sina Weibo, which …
OpenWebText is an open-source recreation of the WebText corpus. The text is web content extracted from URLs shared on Reddit …
ROCStories is a collection of commonsense short stories. The corpus consists of 100,000 five-sentence stories. Each story logically follows everyday …
ReDial (Recommendation Dialogues) is an annotated dataset of dialogues, where users recommend movies to each other. The dataset consists of …
The SciQ dataset contains 13,679 crowdsourced science exam questions about Physics, Chemistry and Biology, among others. The questions are in …
ASSET is a new dataset for assessing sentence simplification in English. ASSET is a crowdsourced multi-reference corpus where each simplification …
The Newsela dataset was introduced by Xu et al. in their research on text simplification. It is a corpus that …
TurkCorpus is a dataset with 2,359 original sentences from English Wikipedia, each with 8 manual reference simplifications. The dataset is divided …
Aci-bench: a Novel Ambient Clinical Intelligence Dataset for Benchmarking Automatic Visit Note Generation
Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a …
Consists of 1.3 million records of U.S. patent documents along with human written abstractive summaries. Source: [BIGPATENT: A Large-Scale Dataset …
BillSum is the first dataset for summarization of US Congressional and California state bills. The BillSum dataset consists of three …
BookSum is a collection of datasets for long-form narrative summarization. This dataset covers source documents from the literature domain, such …
DialogSum is a large-scale dialogue summarization dataset, consisting of 13,460 dialogues with corresponding manually labeled summaries and topics. This work …
Gazeta is a dataset for automatic summarization of Russian news. The dataset consists of 63,435 text-summary pairs. To form training, …
GovReport is a dataset for long document summarization, with significantly longer documents and summaries. It consists of reports written by …
The How2 dataset contains 13,500 videos, or 300 hours of speech, and is split into 185,187 training, 2022 development (dev), …
The dataset introduces document alignments between German Wikipedia and the children's lexicon Klexikon. The source texts in Wikipedia are both …
LCSTS is a large Chinese short text summarization corpus constructed from the Chinese microblogging website Sina Weibo, which …
MTEB is a benchmark that spans 8 embedding tasks covering a total of 56 datasets and 112 languages. The 8 …
MeQSum is a dataset for medical question summarization. It contains 1,000 summarized consumer health questions. Source: https://www.aclweb.org/anthology/P19-1215.pdf Image Source: https://www.aclweb.org/anthology/P19-1215.pdf
MeetingBank is a benchmark dataset created from the city councils of 6 major U.S. cities to supplement existing datasets. It contains …
Mental health remains a significant challenge of public health worldwide. With increasing popularity of online platforms, many use the platforms …
Source: BARThez: a Skilled Pretrained French Sequence-to-Sequence Model OrangeSum is a single-document extreme summarization dataset with two tasks: title and …
The PubMed dataset consists of 19717 scientific publications from PubMed database pertaining to diabetes classified into one of three classes. …
QMSum is a new human-annotated benchmark for query-based multi-domain meeting summarisation task, which consists of 1,808 query-summary pairs over 232 …
The Reddit TIFU dataset is a newly collected Reddit dataset, where TIFU denotes the name of the /r/tifu subreddit. There are 122,933 …
A new dataset with abstractive dialogue summaries. Source: SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization
WikiHow is a dataset of more than 230,000 article and summary pairs extracted and constructed from an online knowledge base …
The Extreme Summarization (XSum) dataset is a dataset for evaluation of abstractive single-document summarization systems. The goal is to create …
For nearly 30 years, ArXiv has served the public and research communities by providing open access to scholarly articles, from …
This is a dataset for evaluating summarisation methods for research papers. Source: [A Discourse-Aware Attention Model for Abstractive Summarization of …
BIRD (BIg Bench for LaRge-scale Database Grounded Text-to-SQL Evaluation) represents a pioneering, cross-domain dataset that examines the impact of extensive …
KaggleDBQA is a challenging cross-domain and complex evaluation dataset of real Web databases, with domain-specific data types, original formatting, and …
SEDE is a dataset comprised of 12,023 complex and diverse SQL queries and their natural language titles and descriptions, written …
SParC is a large-scale dataset for complex, cross-domain, and context-dependent (multi-turn) semantic parsing and text-to-SQL task (interactive natural language interfaces …
SQL-Eval is an open-source PostgreSQL evaluation dataset released by Defog, constructed based on Spider. The original link can be found …
Spider 2.0 is a comprehensive code generation agent task that includes 632 examples. The agent has to interactively explore various …
This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from …
Trinity Gesture Dataset includes 23 takes, totalling 244 minutes of motion capture and audio of a male native English speaker …
The COCO (Common Objects in Context) dataset is a large-scale object detection, segmentation, and captioning dataset. It is designed to …
A large dataset of color names and their respective RGB values, stored in CSV format.
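A name-to-RGB CSV of this kind can be loaded with the stdlib alone; the column names (`name`, `red`, `green`, `blue`) below are hypothetical, since the released file's exact header is not shown here.

```python
import csv
import io

# Hypothetical rows in a plausible name,r,g,b layout; the actual CSV's
# header and column order may differ.
SAMPLE = """name,red,green,blue
black,0,0,0
white,255,255,255
rebecca purple,102,51,153
"""

def load_colors(csv_text):
    """Map each color name to an (r, g, b) tuple of ints."""
    return {
        row["name"]: (int(row["red"]), int(row["green"]), int(row["blue"]))
        for row in csv.DictReader(io.StringIO(csv_text))
    }

def to_hex(rgb):
    """Standard #rrggbb rendering of an (r, g, b) tuple."""
    return "#%02x%02x%02x" % rgb

colors = load_colors(SAMPLE)
print(to_hex(colors["rebecca purple"]))  # #663399
```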
Automatic image captioning is the task of producing a natural-language utterance (usually a sentence) that correctly reflects the visual content …
DrawBench is a comprehensive and challenging benchmark for text-to-image models, introduced by the Imagen research team. …
Contains 8k Flickr images with captions. Cite this paper if you find it …
Recent breakthroughs in diffusion models, multimodal pretraining, and efficient finetuning have led to an explosion of text-to-image generative models. Given …
LAION-COCO is the world’s largest dataset of 600M generated high-quality captions for publicly available web-images. The images are extracted from …
T2I-CompBench is a comprehensive benchmark for open-world compositional text-to-image generation, consisting of 6,000 compositional textual prompts from 3 categories (attribute …
This dataset contains around 10000 videos generated by various methods using the Prompt list. These videos have been evaluated using …
The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 …
MSR-VTT (Microsoft Research Video to Text) is a large-scale dataset for the open domain video captioning, which consists of 10,000 …
The 20BN-SOMETHING-SOMETHING V2 dataset is a large collection of labeled video clips that show humans performing pre-defined basic actions with …
WebVid contains 10 million video clips with captions, sourced from the web. The videos are diverse and rich in their …
The 20 Newsgroups data set is a collection of approximately 20,000 newsgroup documents, partitioned (nearly) evenly across 20 different newsgroups.
AG News (AG’s News Corpus) is a subdataset of AG's corpus of news articles constructed by assembling titles and description …
Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a …
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
Dataset of 64x64 images of a robot pushing objects on a table top. From Berkeley AI Research (BAIR). Source: Self-Supervised …
How2Sign is a multimodal and multiview continuous American Sign Language (ASL) dataset consisting of a parallel corpus of more …
Kinetics-700 is a video dataset of 650,000 clips that covers 700 human action classes. The videos include human-object interactions such …
LAION-400M is a dataset with CLIP-filtered 400 million image-text pairs, their CLIP embeddings and kNN indices that allow efficient similarity …
MSR-VTT (Microsoft Research Video to Text) is a large-scale dataset for the open domain video captioning, which consists of 10,000 …
YouTube Driving Dataset contains a massive amount of real-world driving frames with various conditions, from different weather, different regions, to …
The dataset contains training and evaluation data for 12 languages: - Vietnamese - Romanian - Latvian - Czech - Polish …
Large Multimodal Models (LMMs) such as GPT-4V and LLaVA have shown remarkable capabilities in visual reasoning with common image styles. …
CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; …
Earth vision research typically focuses on extracting geospatial object locations and categories but neglects the exploration of relations between objects …
The GQA dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced …
The General Robust Image Task (GRIT) Benchmark is an evaluation-only benchmark for evaluating the performance and robustness of vision systems …
MM-Vet: Evaluating Large Multimodal Models for Integrated Capabilities
MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities
MMBench is a multi-modality benchmark. It methodically develops a comprehensive evaluation pipeline, primarily comprised of two elements. The first element …
The MSR-VTT-QA dataset is a benchmark for the task of Visual Question Answering (VQA) on the MSR-VTT (Microsoft Research Video …
The MSVD-QA dataset is a Video Question Answering (VideoQA) dataset. It is based on the existing Microsoft Research Video Description …
MapEval-Visual contains 400 image-question-answer triplets. Each question is paired with a snapshot from the Google Maps website. The task is the …
ViP-Bench is a comprehensive benchmark designed to assess the capability of multimodal models in understanding visual prompts across multiple dimensions. …
VisualMRC is a visual machine reading comprehension dataset that proposes a task: given a question and a document image, a …
The VizWiz-VQA dataset originates from a natural visual question answering setting where blind people each took an image and recorded …
A-OKVQA is a crowdsourced visual question answering dataset composed of a diverse set of about 25K questions requiring a broad base …
AI2 Diagrams (AI2D) is a dataset of over 5,000 grade-school science diagrams with over 150,000 rich annotations, their ground …
The ActivityNet dataset contains 200 different types of activities and a total of 849 hours of videos collected from YouTube. …
Large vision-language models (LVLMs) are prone to hallucinations, where certain contextual cues in an image can trigger the language module …
We collect a new dataset of human-posed free-form natural language questions about CLEVR images. Many of these questions have out-of-vocabulary …
CORE-MM is an open-ended VQA benchmark dataset designed specifically for MLLMs, with a focus on complex reasoning tasks. CORE-MM benchmark …
DocVQA consists of 50,000 questions defined on 12,000+ document images. Source: DocVQA: A Dataset for VQA on Document Images
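DocVQA is conventionally scored with Average Normalized Levenshtein Similarity (ANLS) rather than exact-match accuracy, so near-miss OCR readings still earn partial credit. A minimal sketch follows; the lowercasing/whitespace normalization here is an assumption for illustration, not the official evaluation script.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def anls(predictions, ground_truths, tau=0.5):
    """Average Normalized Levenshtein Similarity over all questions.
    Each prediction is compared against all reference answers; scores
    below the threshold tau are zeroed out (0.5 is the usual setting)."""
    scores = []
    for pred, answers in zip(predictions, ground_truths):
        best = 0.0
        for gt in answers:
            p, g = pred.strip().lower(), gt.strip().lower()
            nl = levenshtein(p, g) / max(len(p), len(g), 1)
            best = max(best, 1 - nl if nl < tau else 0.0)
        scores.append(best)
    return sum(scores) / len(scores)
```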
EgoSchema is a very long-form video question-answering dataset and benchmark for evaluating the long-video understanding capabilities of modern vision and language …
Large language models (LLMs), after being aligned with vision models and integrated into vision-language models (VLMs), can bring impressive improvement …
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions about natural images in daily-life contexts. **Icon question …
IllusionVQA is a Visual Question Answering (VQA) dataset with two sub-tasks. The first task tests comprehension on 435 instances in …
The ImageNet dataset contains 14,197,122 annotated images organized according to the WordNet hierarchy. Since 2010 the dataset has been used in the …
Multi-modal Large Language Models (MLLMs) are increasingly prominent in the field of artificial intelligence. Although many benchmarks attempt to holistically …
In this project, we introduce InfoSeek, a visual question answering dataset tailored for information-seeking questions that cannot be answered with …
InfographicVQA is a dataset that comprises a diverse collection of infographics along with natural language questions and answers annotations. The …
MVBench is a comprehensive Multi-modal Video understanding Benchmark. It was introduced to evaluate the comprehension capabilities of Multi-modal Large Language …
Outside Knowledge Visual Question Answering (OK-VQA) includes more than 14,000 questions that require external knowledge to answer. Source: [OK-VQA: A …
Vision-language modeling has enabled open-vocabulary tasks where predictions can be queried using any text prompt in a zero-shot manner. Existing …
PMC-VQA is a large-scale medical visual question-answering dataset that contains 227k VQA pairs over 149k images covering various modalities …
Synthetic datasets have successfully been used to probe visual question-answering models for their reasoning abilities. CLEVR, for example, tests a …
The RetVQA dataset is a large-scale dataset designed for Retrieval-Based Visual Question Answering (RetVQA). RetVQA is a more challenging task …
Task Directed Image Understanding Challenge (TDIUC) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K …
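TDIUC reports per-question-type accuracies and mean-per-type (MPT) aggregates so that frequent question types do not dominate the overall score. A sketch of arithmetic MPT follows; the record fields (`question_type`, `pred`, `answer`) are assumed for illustration and do not reflect TDIUC's actual file format.

```python
from collections import defaultdict

def mean_per_type_accuracy(records):
    """Arithmetic mean of per-question-type accuracies: compute accuracy
    within each question type, then average the per-type accuracies so
    every type contributes equally regardless of its frequency."""
    per_type = defaultdict(lambda: [0, 0])  # type -> [hits, total]
    for r in records:
        tallies = per_type[r["question_type"]]
        tallies[0] += r["pred"] == r["answer"]
        tallies[1] += 1
    accs = [hits / total for hits, total in per_type.values()]
    return sum(accs) / len(accs)
```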
The TGIF-QA dataset contains 165K QA pairs for the animated GIFs from the TGIF dataset [Li et al. CVPR 2016]. …
TextVQA is a dataset to benchmark visual reasoning based on text in images. TextVQA requires models to read and reason …
This dataset provides a new split of VQA v2 (similar to VQA-CP v2), built from questions that are …
The VQA-CP dataset was constructed by reorganizing VQA v2 such that the correlation between the question type and correct answer …
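The idea can be sketched as a greedy group-wise re-split in which each (question type, answer) pair lands entirely in either train or test, so the answer priors differ between the splits. This is an illustrative approximation under assumed field names, not the exact VQA-CP construction procedure.

```python
from collections import defaultdict

def changing_priors_split(examples, test_fraction=0.3):
    """Greedy split: group examples by (question_type, answer) and send
    each whole group to one side, filling test up to the target size.
    Because no (type, answer) pair is shared, the per-type answer prior
    differs between train and test."""
    groups = defaultdict(list)
    for ex in examples:
        groups[(ex["question_type"], ex["answer"])].append(ex)
    train, test = [], []
    target_test = test_fraction * len(examples)
    # Place the largest groups first so the test side fills up greedily.
    for _key, members in sorted(groups.items(), key=lambda kv: -len(kv[1])):
        if len(test) + len(members) <= target_test:
            test.extend(members)
        else:
            train.extend(members)
    return train, test
```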
Visual7W is a large-scale visual question answering (QA) dataset, with object-level groundings and multimodal answers. Each question starts with one …
WHOOPS! is a dataset and benchmark for visual commonsense. The dataset comprises purposefully commonsense-defying images created by designers …
WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K …
The ZS-F-VQA dataset is a new split of the F-VQA dataset for the zero-shot setting. First, we obtain the original train/test …
The Visual Storytelling Dataset (VIST) consists of 210,819 unique photos and 50,000 stories. The images were collected from albums on …
FEWS (Few-shot Examples of Word Senses) is a few-shot dataset for English Word Sense Disambiguation (WSD) gathered from Wiktionary, an …
WiC: The Word-in-Context Dataset is a reliable benchmark for the evaluation of context-sensitive word embeddings. Depending on its context, an ambiguous …
WiC-TSV is a new multi-domain evaluation benchmark for Word Sense Disambiguation. More specifically, it is a framework for Target Sense …
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: …
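Word-similarity ratings datasets of this kind are typically scored by Spearman rank correlation between model similarity scores and the human judgments. A minimal sketch follows, using the closed-form rank-difference formula without tie correction (which real evaluations usually add):

```python
def spearman(xs, ys):
    """Spearman rank correlation via 1 - 6*sum(d^2)/(n*(n^2-1)).
    Assumes no ties; with ties, Pearson correlation of the ranks
    (with averaged tie ranks) should be used instead."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```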
Action-Based Conversations Dataset (ABCD) is a fully-labeled, goal-oriented dialogue dataset with over 10K human-to-human dialogues containing 55 distinct user intents …
AfriSenti is the largest sentiment analysis dataset for under-represented African languages, covering 110,000+ annotated tweets in 14 African languages (Amharic, …
We present PeerQA, a real-world, scientific, document-level Question Answering (QA) dataset. PeerQA questions have been sourced from peer reviews, which …