Saliency in Context
The SALIency in CONtext (SALICON) dataset contains 10,000 training images, 5,000 validation images, and 5,000 test images for saliency prediction. It was created by annotating saliency in images from MS COCO.
The ground-truth saliency annotations consist of fixations derived from mouse trajectories. To improve data quality, isolated fixations with low local density were excluded.
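The density-based filtering described above can be sketched as follows. The neighborhood radius and minimum-neighbor threshold here are illustrative assumptions, not the values used by the SALICON authors:

```python
import numpy as np

def filter_isolated_fixations(points, radius=30.0, min_neighbors=2):
    """Drop fixations with fewer than `min_neighbors` other points
    within `radius` pixels, i.e. points with low local density.
    `points` is an (N, 2) array of (x, y) fixation coordinates."""
    points = np.asarray(points, dtype=float)
    # Pairwise Euclidean distances between all fixations.
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    # Count neighbors within the radius, excluding the point itself.
    neighbor_counts = (dists < radius).sum(axis=1) - 1
    return points[neighbor_counts >= min_neighbors]

# A tight cluster of fixations plus one isolated outlier.
fixations = [(100, 100), (105, 98), (102, 103), (98, 101), (500, 400)]
kept = filter_isolated_fixations(fixations)
print(len(kept))  # the outlier at (500, 400) is removed
```

The actual cleaning procedure may differ; this only illustrates the idea of discarding spatially isolated mouse-fixation points.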
The training and validation sets, which include ground truth, contain three data fields: image, resolution, and gaze.
The test set contains only the image and resolution fields.
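Continuous saliency maps for this kind of data are typically produced by rasterizing the gaze points into a binary fixation map at the image resolution and blurring it with a Gaussian. A minimal sketch, where the record field names and the blur width are assumptions rather than the dataset's exact schema:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_saliency_map(gaze_points, resolution, sigma=19.0):
    """Rasterize (x, y) gaze points into a binary fixation map at the
    given (height, width) resolution, then apply a Gaussian blur to get
    a continuous saliency map normalized to [0, 1]."""
    height, width = resolution
    fixation_map = np.zeros((height, width), dtype=float)
    for x, y in gaze_points:
        if 0 <= int(y) < height and 0 <= int(x) < width:
            fixation_map[int(y), int(x)] = 1.0
    saliency = gaussian_filter(fixation_map, sigma=sigma)
    if saliency.max() > 0:
        saliency /= saliency.max()  # normalize to [0, 1]
    return saliency

# Example record mimicking the fields above (field names are assumed).
record = {"resolution": (480, 640),
          "gaze": [(320, 240), (100, 120), (500, 400)]}
smap = gaze_to_saliency_map(record["gaze"], record["resolution"])
print(smap.shape, smap.max())
```

The Gaussian width is often tied to an assumed visual angle (e.g. one degree); the value used here is only a placeholder.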
Source: DeepFix: A Fully Convolutional Neural Network for predicting Human Eye Fixations
Image Source: http://salicon.net/explore/
Variants: SALICON->WebpageSaliency - 1-shot, SALICON->WebpageSaliency - 5-shot, SALICON->WebpageSaliency - 10-shot, SALICON->WebpageSaliency - EUB, SALICON
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Saliency Prediction | SalNAS-XL + Self-KD | SalNAS: Efficient Saliency-prediction Neural Architecture … | 2024-07-29 |
| Saliency Prediction | SUM | SUM: Saliency Unification through Mamba … | 2024-06-25 |
| Saliency Prediction | MDS-ViTNet | MDS-ViTNet: Improving saliency prediction for … | 2024-05-29 |
| Saliency Prediction | TempSAL | TempSAL -- Uncovering Temporal Information … | 2023-01-01 |
| Saliency Prediction | TranSalNet | TranSalNet: Towards perceptually relevant visual … | 2021-10-07 |