MELD

Multimodal EmotionLines Dataset

Dataset Information
Modalities
Videos, Audio, Texts
Languages
English
Introduced
2019
License
Unknown
Homepage
https://affective-meld.github.io/
Overview

The Multimodal EmotionLines Dataset (MELD) was created by enhancing and extending the EmotionLines dataset. MELD contains the same dialogue instances as EmotionLines, but adds audio and visual modalities alongside the text. It comprises more than 1,400 dialogues and 13,000 utterances from the TV series Friends, with multiple speakers participating in the dialogues. Each utterance in a dialogue is labeled with one of seven emotions: Anger, Disgust, Sadness, Joy, Neutral, Surprise, or Fear. MELD also provides a sentiment annotation (positive, negative, or neutral) for each utterance.

Source: https://affective-meld.github.io/

Variants: MELD
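
As a rough illustration of the annotation scheme described above, the sketch below loads MELD's utterance-level text annotations with pandas and tallies the emotion and sentiment labels. It assumes the train_sent_emo.csv file and its column names (Utterance, Emotion, Sentiment, Dialogue_ID) as distributed in the official MELD release; these names are assumptions and may differ across versions.

```python
# Minimal sketch: exploring MELD's utterance-level annotations with pandas.
# Assumes the train_sent_emo.csv layout from the official MELD release;
# the file name and column names are assumptions and may vary by version.
import pandas as pd

df = pd.read_csv("train_sent_emo.csv")

# One row per utterance; Dialogue_ID groups utterances into dialogues.
print(f"{df['Dialogue_ID'].nunique()} dialogues, {len(df)} utterances")

# Each utterance carries one of seven emotion labels...
print(df["Emotion"].value_counts())

# ...and one of three sentiment labels (positive, negative, neutral).
print(df["Sentiment"].value_counts())
```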

Associated Benchmarks

This dataset is used in 2 benchmarks: Multimodal Emotion Recognition and Facial Expression Recognition.

Recent Benchmark Submissions

Task                           | Model                    | Paper                                                  | Date
Multimodal Emotion Recognition | GraphSmile               | Tracing Intricate Cues in Dialogue: …                  | 2024-07-31
Facial Expression Recognition  | ConCluGen                | Multi-Task Multi-Modal Self-Supervised Learning for …  | 2024-04-16
Multimodal Emotion Recognition | Joyful                   | Joyful: Joint Modality Fusion and …                    | 2023-11-18
Multimodal Emotion Recognition | Audio + Text (Stage III) | HCAM -- Hierarchical Cross Attention …                 | 2023-04-14

Research Papers

Recent papers with results on this dataset: