InDL

In-Diagram Logic

Dataset Information
Modalities
Images
Languages
English, Chinese
Introduced
2023
License
Homepage

Overview

Dataset Introduction

In this work, we introduce the In-Diagram Logic (InDL) dataset, a resource designed to rigorously evaluate the logic interpretation abilities of deep learning models. The dataset draws on the complex domain of visual illusions, which poses a distinctive challenge for these models.

Each instance in the InDL dataset presents a specific logic interpretation challenge. The instances are built from six classic geometric optical illusions, each known for the interplay it creates between perception and logic.

Motivations and Content

The InDL dataset was created to address a recognized gap in deep learning research. While models have shown remarkable proficiency in domains such as image recognition and natural language processing, their performance on tasks requiring logical reasoning remains less well understood, in part because of their inherent 'black box' character. By using visual illusions as its medium, the InDL dataset probes these models in a unique and challenging way, helping to illuminate their logic interpretation capabilities.

Each visual illusion in the InDL dataset varies in illusion strength, the degree of distortion introduced to challenge a model's logic interpretation. The dataset therefore offers both a complexity gradient for model evaluation and a way to analyze model performance against varying degrees of challenge intensity.
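To make the idea of a strength-parametrized instance concrete, here is a minimal, hypothetical sketch in Python. The class and function names, the Müller-Lyer-style length-comparison task, and the strength range are all illustrative assumptions, not the dataset's actual schema; the key point is that the ground-truth label depends only on the underlying geometry, while the strength parameter controls how misleading the rendered illusion would be.

```python
# Hypothetical sketch of an InDL-style instance. All names and
# parameters here are illustrative, not the dataset's real format.
from dataclasses import dataclass
import random


@dataclass
class IllusionInstance:
    left_len: float   # true length of the left segment
    right_len: float  # true length of the right segment
    strength: float   # 0.0 = no distortion, 1.0 = maximal distortion
    label: int        # ground truth: 1 if left is longer, else 0


def make_instance(strength: float, rng: random.Random) -> IllusionInstance:
    """Create a Mueller-Lyer-style length comparison at a given strength.

    The label is derived purely from the true lengths; strength would
    only affect the decorations (e.g. arrowhead angles) when rendered,
    so it changes how hard the instance is, never its correct answer.
    """
    left = rng.uniform(80.0, 120.0)
    right = rng.uniform(80.0, 120.0)
    return IllusionInstance(left, right, strength, int(left > right))


# A complexity gradient: one instance per strength level from 0.0 to 1.0.
rng = random.Random(0)
batch = [make_instance(s / 10, rng) for s in range(11)]
```

In this sketch, evaluating a model across `batch` would show how its accuracy degrades as the illusion strength increases, which mirrors the analysis the dataset is designed to support.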

Potential Use Cases

The potential use cases of the InDL dataset are extensive. Beyond the primary goal of evaluating deep learning models' logic interpretation abilities, it also presents a robust tool for researchers to investigate how models react to visual perception challenges. This opens avenues to understand how these models can be improved and how their decision-making processes can be better interpreted.

Additionally, the InDL dataset could provide a rich testing ground for model developers. Its diverse and challenging instances could allow them to rigorously benchmark their models and detect potential weaknesses that might be overlooked in more conventional datasets.

Furthermore, the InDL dataset could serve as a valuable resource for teaching and learning purposes. It provides a visually engaging and intellectually stimulating way to explore the capabilities and limitations of deep learning models, particularly in the realm of logic interpretation.

Variants: InDL

Associated Benchmarks

This dataset is used in 1 benchmark:

Recent Benchmark Submissions

Task           | Model               | Paper                                              | Date
Classification | ConvNeXt            | A ConvNet for the 2020s                            | 2022-01-10
Classification | ResNetV2_50         | ResNet strikes back: An improved …                 | 2021-10-01
Classification | MobileNetV3         | Searching for MobileNetV3                          | 2019-05-06
Classification | Darknet53           | YOLOv3: An Incremental Improvement                 | 2018-04-08
Classification | NASNetLarge         | Learning Transferable Architectures for Scalable … | 2017-07-21
Classification | Xception            | Xception: Deep Learning with Depthwise …           | 2016-10-07
Classification | DenseNet201         | Densely Connected Convolutional Networks           | 2016-08-25
Classification | Inception ResNet V2 | Inception-v4, Inception-ResNet and the Impact …    | 2016-02-23
Classification | VGG16               | Very Deep Convolutional Networks for …             | 2014-09-04

Research Papers

Recent papers with results on this dataset: