Existing benchmarks for temporal QA focus on a single information source (either a KB or a text corpus) and include only a few questions with implicit constraints. We devise a new method for automatically creating temporal questions with implicit constraints, with systematic control over different aspects, including the relative importance of different source types (text, infoboxes, KB), the fraction of prominent vs. long-tail entities, question complexity, and more.
Variants: TIQ
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Question Answering | FAITH | Faithful Temporal Question Answering over … | 2024-02-23 |
| Question Answering | EXPLAIGNN | Explainable Conversational Question Answering over … | 2023-05-02 |
| Question Answering | GPT-4 | GPT-4 Technical Report | 2023-03-15 |
| Question Answering | InstructGPT | Training language models to follow … | 2022-03-04 |
| Question Answering | TempoQR | TempoQR: Temporal Question Reasoning over … | 2021-12-10 |
| Question Answering | EXAQT | Complex Temporal Question Answering on … | 2021-09-18 |
| Question Answering | UNIQORN | UNIQORN: Unified Question Answering over … | 2021-08-19 |
| Question Answering | CronKGQA | Question Answering Over Temporal Knowledge … | 2021-06-03 |
| Question Answering | UniK-QA | UniK-QA: Unified Representations of Structured … | 2020-12-29 |