RST-DT

Discourse Parsing Benchmark

Performance Over Time

27 results | Metric: Standard Parseval (Full)

Top Performing Models

| Rank | Model | Paper | Standard Parseval (Full) | Date | Code |
|------|-------|-------|--------------------------|------|------|
| 1 | Guz et al. (2020) | Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining | 72.43 | 2020-12-01 | - |
| 2 | Bottom-up Llama 2 (70B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 70.40 | 2024-03-08 | nttcslab-nlp/rstparser_eacl24 |
| 3 | Top-down Llama 2 (70B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 68.70 | 2024-03-08 | nttcslab-nlp/rstparser_eacl24 |
| 4 | Bottom-up Llama 2 (13B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 68.10 | 2024-03-08 | nttcslab-nlp/rstparser_eacl24 |
| 5 | Top-down Llama 2 (13B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 67.90 | 2024-03-08 | nttcslab-nlp/rstparser_eacl24 |
| 6 | Top-down (DeBERTa) | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | 67.90 | 2022-10-15 | nttcslab-nlp/rstparser_emnlp22 |
| 7 | Bottom-up Llama 2 (7B) | Can we obtain significant success in RST discourse parsing by using Large Language Models? | 67.50 | 2024-03-08 | nttcslab-nlp/rstparser_eacl24 |
| 8 | Top-down (XLNet) | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | 67.40 | 2022-10-15 | nttcslab-nlp/rstparser_emnlp22 |
| 9 | Top-down (RoBERTa) | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | 66.60 | 2022-10-15 | nttcslab-nlp/rstparser_emnlp22 |
| 10 | Bottom-up (RoBERTa) | A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing | 66.50 | 2022-10-15 | nttcslab-nlp/rstparser_emnlp22 |
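Under Standard Parseval (Morey et al., 2017), an internal node of the predicted tree counts as correct only if its span matches the gold tree; the "Full" variant additionally requires the nuclearity and relation labels to match. Below is a minimal sketch of that score, assuming each tree is represented as a set of labeled spans over its internal nodes; the function name, the tuple layout, and the example labels are illustrative, not from this page.

```python
def parseval_full_f1(gold_spans: set, pred_spans: set) -> float:
    """Micro-averaged F1 over fully labeled spans.

    Each element is a tuple (start_edu, end_edu, nuclearity, relation),
    so a span only matches when all four fields agree.
    """
    if not gold_spans or not pred_spans:
        return 0.0
    matched = len(gold_spans & pred_spans)  # exact matches on all fields
    precision = matched / len(pred_spans)
    recall = matched / len(gold_spans)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical two-node example: one node fully correct, one mislabeled.
gold = {(1, 2, "NS", "Elaboration"), (1, 4, "NN", "Joint")}
pred = {(1, 2, "NS", "Elaboration"), (1, 4, "NS", "Background")}
print(f"{parseval_full_f1(gold, pred):.2f}")  # 0.50
```

Leaderboard figures like those in the table are typically micro-averaged over all documents in the RST-DT test set rather than averaged per document.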

All Papers (27)

Cross-lingual RST Discourse Parsing (2017)
Model: Transition-Based Parser Trained on Cross-Lingual Corpus