TweetEval introduces an evaluation framework consisting of seven heterogeneous Twitter-specific classification tasks.
Source: TweetEval: Unified Benchmark and Comparative Evaluation for Tweet Classification
Image Source: https://arxiv.org/pdf/2010.12421v2.pdf
Variants: TweetEval, tweet_eval, irony
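The variant names above correspond to Hugging Face `datasets` configurations (e.g. `load_dataset("tweet_eval", "irony")`). As a minimal offline sketch, the snippet below models only the irony subset's two-class label scheme locally; the label names are taken from the public `tweet_eval` configuration, and `decode_labels` is a hypothetical helper, not part of the benchmark's code:

```python
# Sketch: the tweet_eval "irony" configuration is a binary task.
# Label ids assumed from the Hugging Face tweet_eval dataset card.
IRONY_LABELS = {0: "non_irony", 1: "irony"}

def decode_labels(label_ids):
    """Map integer class ids to their irony-task label names."""
    return [IRONY_LABELS[i] for i in label_ids]

print(decode_labels([0, 1, 1]))  # ['non_irony', 'irony', 'irony']
```

In practice one would read the ids from the loaded dataset's `label` column rather than hard-coding them.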
This dataset is used in one benchmark, Sentiment Analysis:
| Task | Model | Paper | Date |
|---|---|---|---|
| Sentiment Analysis | RoB-RT | XLM-T: Multilingual Language Models in … | 2021-04-25 |
| Sentiment Analysis | RoBERTa-Base | TweetEval: Unified Benchmark and Comparative … | 2020-10-23 |
| Sentiment Analysis | FastText | TweetEval: Unified Benchmark and Comparative … | 2020-10-23 |
| Sentiment Analysis | LSTM | TweetEval: Unified Benchmark and Comparative … | 2020-10-23 |
| Sentiment Analysis | RoBERTa-Twitter | TweetEval: Unified Benchmark and Comparative … | 2020-10-23 |
| Sentiment Analysis | SVM | TweetEval: Unified Benchmark and Comparative … | 2020-10-23 |
| Sentiment Analysis | BERTweet | BERTweet: A pre-trained language model … | 2020-05-20 |