Offensive Language Identification Dataset
OLID is a hierarchical dataset for identifying the type and the target of offensive texts in social media. The data was collected from Twitter and is publicly available. It contains 14,100 tweets in total, of which 13,240 are in the training set and 860 are in the test set. Each tweet carries up to three levels of labels: (A) Offensive/Not-Offensive, (B) Targeted-Insult/Untargeted, (C) Individual/Group/Other. The levels are hierarchical: an offensive tweet may or may not have a target, and if it targets something specific, the target can be an individual, a group, or another entity. This dataset was used in the OffensEval-2019 competition at SemEval-2019.
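The hierarchy described above can be sketched as a small validation function. This is a minimal illustration, not part of any official OLID tooling; the label codes (NOT/OFF, UNT/TIN, IND/GRP/OTH) follow the abbreviations used in the OLID paper, and the example annotations are hypothetical.

```python
# Sketch of OLID's three-level label hierarchy.
# Level A: OFF (offensive) / NOT (not offensive).
# Level B (only if A == "OFF"): TIN (targeted insult) / UNT (untargeted).
# Level C (only if B == "TIN"): IND (individual) / GRP (group) / OTH (other).

def validate_olid_labels(a, b=None, c=None):
    """Return True if (a, b, c) is a consistent OLID annotation."""
    if a not in {"OFF", "NOT"}:
        return False
    if a == "NOT":
        # Not offensive: levels B and C do not apply.
        return b is None and c is None
    # a == "OFF": level B must say whether the tweet is targeted.
    if b not in {"TIN", "UNT"}:
        return False
    if b == "UNT":
        # Untargeted: level C does not apply.
        return c is None
    # b == "TIN": level C must name the target type.
    return c in {"IND", "GRP", "OTH"}

# Hypothetical annotations illustrating valid and invalid combinations.
examples = [
    ("NOT", None, None),    # not offensive -> no further labels
    ("OFF", "UNT", None),   # offensive, no specific target
    ("OFF", "TIN", "GRP"),  # offensive, targeted at a group
    ("OFF", None, "IND"),   # inconsistent: level B missing
]
for labels in examples:
    print(labels, validate_olid_labels(*labels))
```

The function simply mirrors the branching in the annotation scheme: each deeper level is only defined when the level above takes the branch that admits it.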
Source: Kungfupanda at SemEval-2020 Task 12: BERT-Based Multi-Task Learning for Offensive Language Detection
Image Source: https://arxiv.org/pdf/1902.09666.pdf
Variants: OLID
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Hate Speech Detection | RoBERTa-large-ST | Noisy Self-Training with Data Augmentations … | 2023-07-31 |
Recent papers with results on this dataset: