| Model | Paper | Score | Date |
|---|---|---|---|
| TANDA DeBERTa-V3-Large + ALL | Structural Self-Supervised Objectives for Transfo… | 0.95 | 2023-09-15 |
| TANDA-RoBERTa (ASNQ, TREC-QA) | TANDA: Transfer and Adapt Pre-Trained Transformer… | 0.94 | 2019-11-11 |
| DeBERTa-V3-Large + SSP | Pre-training Transformer Models with Sentence-Lev… | 0.92 | 2022-05-20 |
| Contextual DeBERTa-V3-Large + SSP | Context-Aware Transformer Pre-Training for Answer… | 0.92 | 2023-05-24 |
| RLAS-BIABC | RLAS-BIABC: A Reinforcement Learning-Based Answer… | 0.91 | 2023-01-07 |
| RoBERTa-Base Joint + MSPP | Paragraph-based Transformer Pre-training for Mult… | 0.91 | 2022-05-02 |
| RoBERTa-Base + PSD | Pre-training Transformer Models with Sentence-Lev… | 0.90 | 2022-05-20 |
| Comp-Clip + LM + LC | A Compare-Aggregate Model with Latent Clustering … | 0.87 | 2019-05-30 |
| NLP-Capsule | Towards Scalable and Reliable Capsule Networks fo… | 0.78 | 2019-06-06 |
| HyperQA | Hyperbolic Representation Learning for Fast and E… | 0.77 | 2017-07-25 |
| aNMM | aNMM: Ranking Short Answer Texts with Attention-B… | 0.75 | 2018-01-05 |
| CNN | Deep Learning for Answer Sentence Selection | 0.71 | 2014-12-04 |