The WebNLG corpus consists of sets of triplets describing facts (entities and the relations between them) together with the corresponding facts expressed as natural language text. Each set contains up to 7 triplets and is paired with one or more reference texts. The test set is split into two parts: seen, containing inputs created for entities and relations belonging to DBpedia categories that occur in the training data, and unseen, containing inputs extracted for entities and relations belonging to 5 categories not seen during training.
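The pairing of a triplet set with its reference texts can be sketched as follows. This is a minimal, illustrative example: the triplet content and the linearization scheme are assumptions for demonstration, not taken verbatim from the corpus files.

```python
# Hypothetical WebNLG-style example: a set of RDF triplets
# (subject, property, object) paired with reference texts.
triplet_set = [
    ("Aarhus_Airport", "cityServed", "Aarhus,_Denmark"),
]
references = [
    "Aarhus Airport serves the city of Aarhus, Denmark.",
]

def linearize(triplets):
    # One common way to linearize triplets as input for a
    # data-to-text model; the challenge does not mandate a scheme.
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triplets)

print(linearize(triplet_set))
# prints: <S> Aarhus_Airport <P> cityServed <O> Aarhus,_Denmark
```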
Initially, the dataset was used for the WebNLG natural language generation challenge, which consists of mapping the sets of triplets to text; this involves referring expression generation, aggregation, lexicalization, surface realization, and sentence segmentation.
The corpus is also used for the reverse task of extracting triplets from text.
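The reverse direction can be illustrated with a toy extractor. This is purely a sketch: real triplet-extraction systems are learned models, and the sentence pattern below is a hypothetical rule covering a single sentence shape.

```python
import re

def extract_triples(text):
    # Toy rule-based extraction for the text-to-triplets direction.
    # Matches only "X serves the city of Y." sentences (illustrative
    # pattern, not part of the WebNLG task definition).
    m = re.match(r"(.+?) serves the city of (.+?)\.", text)
    if m:
        subj = m.group(1).replace(" ", "_")
        obj = m.group(2).replace(" ", "_")
        return [(subj, "cityServed", obj)]
    return []

print(extract_triples("Aarhus Airport serves the city of Aarhus, Denmark."))
# prints: [('Aarhus_Airport', 'cityServed', 'Aarhus,_Denmark')]
```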
Versioning history of the dataset can be found here.
Source: Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation
Image Source: https://paperswithcode.com/paper/creating-training-corpora-for-nlg-micro/
The dataset is also available on the Hugging Face Hub: https://huggingface.co/datasets/web_nlg
Note: "The v3 release (release_v3.0_en, release_v3.0_ru) for the WebNLG2020 challenge also supports a semantic parsing task."
Variants: WebNLG 3.0, WebNLG 2.0 (Unconstrained), WebNLG 2.0 (Constrained), WebNLG (Unseen), WebNLG (Seen), WebNLG (All), WebNLG (Constrained), WebNLG(C), WebNLG(U), WebNLG en, WebNLG v2.1, WebNLG Full, WebNLG