SLUE

Named Entity Recognition (NER) Benchmark

Performance Over Time

📊 Showing 13 results | 📏 Metric: F1 (%)

Top Performing Models

| Rank | Model | Paper | F1 (%) | Date | Code |
|---|---|---|---|---|---|
| 1 | W2V2-L-LL60K (pipeline approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 69.60 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 2 | W2V2-B-LS960 (pipeline approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 68.00 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 3 | Wav2Seq (from HuBERT-large) | Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages | 65.40 | 2022-05-02 | 📦 asappresearch/wav2seq |
| 4 | W2V2-L-LL60K (e2e approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 64.80 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 5 | W2V2-B-LS960 (e2e approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 63.40 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 6 | HuBERT-B-LS960 (e2e approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 61.90 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 7 | W2V2-B-VP100K (e2e approach, uses LM) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 61.80 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 8 | W2V2-L-LL60K (pipeline approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 57.80 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 9 | W2V2-L-LL60K (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 50.90 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
| 10 | W2V2-B-LS960 (e2e approach) | SLUE: New Benchmark Tasks for Spoken Language Understanding Evaluation on Natural Speech | 50.20 | 2021-11-19 | 📦 asappresearch/slue-toolkit |
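The F1 (%) column reports entity-level F1, where a predicted entity counts as correct only if both its label and its text match the reference. SLUE's official scoring lives in the asappresearch/slue-toolkit repository; the snippet below is only a generic sketch of micro-averaged entity-level F1 (function name and the label/phrase tuple format are illustrative assumptions, not the toolkit's API):

```python
from collections import Counter

def ner_f1(gold, pred):
    """Micro-averaged entity-level F1 over a corpus.

    gold/pred: lists of per-utterance entity sets, each entity a
    (label, phrase) tuple. An entity is a true positive only when
    both label and phrase match exactly (illustrative convention).
    """
    tp = fp = fn = 0
    for g, p in zip(gold, pred):
        g_counts, p_counts = Counter(g), Counter(p)
        overlap = sum((g_counts & p_counts).values())  # exact matches
        tp += overlap
        fp += sum(p_counts.values()) - overlap  # predicted, not in gold
        fn += sum(g_counts.values()) - overlap  # in gold, missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One utterance: one correct entity, one spurious, one missed -> F1 = 0.5
gold = [[("ORG", "asapp"), ("DATE", "2021")]]
pred = [[("ORG", "asapp"), ("PER", "smith")]]
print(ner_f1(gold, pred))  # → 0.5
```

For speech models the comparison is made against entities in the reference transcript, so ASR errors in the pipeline or e2e output directly depress both precision and recall.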

All Papers (13)