Visual7W is a large-scale visual question answering (QA) dataset with object-level groundings and multimodal answers. Each question starts with one of the seven Ws: what, where, when, who, why, how, and which. It was collected over 47,300 COCO images and contains 327,929 QA pairs, together with 1,311,756 human-generated multiple-choice answers and 561,459 object groundings from 36,579 categories.
Source: https://github.com/yukezhu/visual7w-toolkit
Image Source: http://ai.stanford.edu/~yukez/visual7w/
Variants: Visual7W
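The toolkit linked above distributes the annotations as JSON, with QA pairs grouped per image. Below is a minimal sketch of loading and iterating over them; the filename and the exact field names (`qa_pairs`, `question`, `answer`, `multiple_choices`, `type`) are assumptions based on the toolkit's documented format and may differ in your download:

```python
import json

# Assumed filename: the "telling" split shipped with the
# visual7w-toolkit; adjust the path to your local copy.
with open("dataset_v7w_telling.json") as f:
    dataset = json.load(f)

# Each image entry carries its QA pairs. Field names below follow
# the toolkit's JSON schema as documented (assumption, not verified
# against every release): each QA pair has a question, the correct
# answer, three human-generated distractors, and a question type
# (one of the seven Ws).
for image in dataset["images"][:3]:
    for qa in image["qa_pairs"]:
        print(qa["type"], qa["question"])
        print("  answer:     ", qa["answer"])
        print("  distractors:", qa["multiple_choices"])
```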
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Visual Question Answering (VQA) | CFR | Coarse-to-Fine Reasoning for Visual Question … | 2021-10-06 |
| Visual Question Answering (VQA) | CTI (with Boxes) | Compact Trilinear Interaction for Visual … | 2019-09-26 |
| Visual Question Answering (VQA) | CMN | Modeling Relationships in Referential Expressions … | 2016-11-30 |
| Visual Question Answering (VQA) | MCB+Att. | Multimodal Compact Bilinear Pooling for … | 2016-06-06 |