Multicue Dataset for Edge Detection
To study how several early visual cues (luminance, color, stereo, motion) interact during boundary detection in challenging natural scenes, we built a multi-cue video dataset of short binocular video sequences of natural scenes, recorded with a consumer-grade Fujifilm stereo camera (Mély, Kim, McGill, Guo and Serre, 2016). We considered a variety of places (from university campuses to street scenes and parks) and seasons to minimize possible biases. To make the scenes more challenging for boundary detection, we framed a few dominant objects in each shot under a variety of appearances. Representative sample keyframes are shown in the figure below. The dataset contains 100 scenes, each consisting of a left- and right-view short (10-frame) color sequence. Each sequence was sampled at 30 frames per second, and each frame has a resolution of 1280 by 720 pixels.
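As a concrete illustration of the dataset's organization (100 scenes, two views per scene, 10 frames per view, 1280x720 color frames), the following is a minimal loading sketch. The directory layout, file naming, scene identifier, and image format are all assumptions for illustration; the actual distribution may differ.

```python
from pathlib import Path
import cv2  # OpenCV, used here only to read individual frames

# Hypothetical layout (not the official naming scheme):
#   multicue/<scene_id>/<view>/frame_00.png ... frame_09.png
# with <view> in {"left", "right"}, 100 scenes, 10 frames per view,
# and each frame 1280x720 as described above.
DATASET_ROOT = Path("multicue")


def load_scene(scene_id: str) -> dict:
    """Return a dict mapping view name to a list of 10 frames (H, W, 3 arrays)."""
    scene = {}
    for view in ("left", "right"):
        frames = []
        for i in range(10):  # 10-frame sequences, sampled at 30 fps
            path = DATASET_ROOT / scene_id / view / f"frame_{i:02d}.png"
            img = cv2.imread(str(path))  # returns None if the file is missing
            if img is None:
                raise FileNotFoundError(path)
            frames.append(img)
        scene[view] = frames
    return scene


if __name__ == "__main__":
    scene = load_scene("scene_001")   # hypothetical scene identifier
    print(scene["left"][0].shape)     # expected: (720, 1280, 3)
```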
Variants: MDBD
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Edge Detection | DexiNed-a | Dense Extreme Inception Network for … | 2021-12-04 |
| Edge Detection | DexiNed-f | Dense Extreme Inception Network for … | 2021-12-04 |
| Edge Detection | CATS | Unmixing Convolutional Features for Crisp … | 2020-11-19 |
| Edge Detection | BDCN | Bi-Directional Cascade Network for Perceptual … | 2019-02-28 |
| Edge Detection | RCF | Richer Convolutional Features for Edge … | 2016-12-07 |