WebQuestions Semantic Parses Dataset
The WebQuestionsSP dataset is released as part of our ACL-2016 paper “The Value of Semantic Parse Labeling for Knowledge Base Question Answering” [Yih, Richardson, Meek, Chang & Suh, 2016], in which we evaluated the value of gathering semantic parses, versus answers only, for a set of questions that originally come from the WebQuestions dataset [Berant et al., 2013]. WebQuestionsSP contains full semantic parses, in the form of SPARQL queries, for 4,737 questions, and “partial” annotations for the remaining 1,073 questions for which a valid parse could not be formulated, or for which the question itself is bad or requires a descriptive answer. This release also includes an evaluation script and the output of the STAGG semantic parsing system trained on the full semantic parses. More details can be found in the documentation and labeling instructions included in this release, as well as in the paper.
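As an illustration, below is a minimal Python sketch for loading the released JSON and inspecting a question's SPARQL parse. The file name and the field names used here (`Questions`, `RawQuestion`, `Parses`, `Sparql`) are assumptions about the release format; consult the documentation included with the release for the authoritative schema.

```python
import json

# A minimal sketch, assuming the release ships a JSON file with a top-level
# "Questions" list, where each question carries its raw text and a list of
# "Parses", each holding a SPARQL query. Field and file names are assumptions;
# check the documentation in the release for the exact schema.
with open("WebQSP.train.json", encoding="utf-8") as f:
    data = json.load(f)

questions = data["Questions"]
print(f"Loaded {len(questions)} questions")

# Print the first question that has at least one full semantic parse.
for q in questions:
    parses = q.get("Parses", [])
    if parses and parses[0].get("Sparql"):
        print("Question:", q["RawQuestion"])
        print("SPARQL:\n", parses[0]["Sparql"])
        break
```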
Variants: WebQSP-WD, WebQuestionsSP
This dataset is used in 2 benchmarks:
| Task | Model | Paper | Date |
|---|---|---|---|
| Question Answering | ChatGPT | Can ChatGPT Replace Traditional KBQA … | 2023-03-14 |
| Semantic Parsing | ReaRev | ReaRev: Adaptive Reasoning for Question … | 2022-10-24 |
| Semantic Parsing | CBR-KBQA | Case-based Reasoning for Natural Language … | 2021-04-18 |
| Semantic Parsing | NSM+h | Improving Multi-hop Knowledge Base Question … | 2021-01-11 |
| Semantic Parsing | T5-11B (Raffel et al., 2020) | Exploring the Limits of Transfer … | 2019-10-23 |
Recent papers with results on this dataset: