The iLIDS-VID dataset is a video-based person re-identification dataset containing 300 distinct pedestrians observed across two disjoint camera views in a public open space. It comprises 600 image sequences, one pair of sequences (one per camera view) for each of the 300 individuals. Each sequence has a variable length, ranging from 23 to 192 frames, with an average of 73. The iLIDS-VID dataset is very challenging due to clothing similarities among people, lighting and viewpoint variations across camera views, cluttered backgrounds, and random occlusions.
Source: http://www.eecs.qmul.ac.uk/~xiatian/downloads_qmul_iLIDS-VID_ReID_dataset.html
Image Source: http://www.eecs.qmul.ac.uk/~xiatian/downloads_qmul_iLIDS-VID_ReID_dataset.html
Variants: iLIDS-VID
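Because each identity appears as one sequence per camera view, the dataset is naturally organized as cross-camera sequence pairs. Below is a minimal sketch of how those pairs might be enumerated and the per-sequence frame counts checked. It assumes a directory layout of the form `i-LIDS-VID/sequences/cam1/person<ID>/*.png` and `.../cam2/person<ID>/*.png`; the actual folder and file names in the download may differ, so the paths are illustrative only.

```python
import os
from glob import glob


def load_ilids_vid(root):
    """Collect (cam1_frames, cam2_frames) per identity.

    Assumes root/sequences/cam{1,2}/person<ID>/<frame>.png;
    adjust the paths to match the actual dataset layout.
    """
    cam1_dir = os.path.join(root, "sequences", "cam1")
    cam2_dir = os.path.join(root, "sequences", "cam2")
    pairs = {}
    for person in sorted(os.listdir(cam1_dir)):
        cam1_frames = sorted(glob(os.path.join(cam1_dir, person, "*.png")))
        cam2_frames = sorted(glob(os.path.join(cam2_dir, person, "*.png")))
        # Keep only identities that have a sequence in both camera views.
        if cam1_frames and cam2_frames:
            pairs[person] = (cam1_frames, cam2_frames)
    return pairs


if __name__ == "__main__":
    pairs = load_ilids_vid("i-LIDS-VID")  # hypothetical local path
    print(f"{len(pairs)} identities with sequences in both camera views")
    lengths = [len(frames) for seqs in pairs.values() for frames in seqs]
    print(f"sequence length: min={min(lengths)}, max={max(lengths)}, "
          f"mean={sum(lengths) / len(lengths):.1f}")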
This dataset is used in 1 benchmark:
Task | Model | Paper | Date |
---|---|---|---|
Person Re-Identification | PiT | Multi-direction and Multi-scale Pyramid in … | 2022-02-12 |
Person Re-Identification | uPMnet | Exploiting Robust Unsupervised Video Person … | 2021-11-09 |
Person Re-Identification | STRF | Spatio-Temporal Representation Factorization for Video-based … | 2021-07-25 |
Person Re-Identification | MGH | Learning Multi-Granular Hypergraphs for Video-Based … | 2021-04-30 |
Person Re-Identification | FGReID | Fine-Grained Re-Identification | 2020-11-26 |
Person Re-Identification | AGRL | Adaptive Graph Representation Learning for … | 2019-09-05 |
Person Re-Identification | TKP | Temporal Knowledge Propagation for Image-to-Video … | 2019-08-11 |
Person Re-Identification | UTAL | Unsupervised Tracklet Person Re-Identification | 2019-03-01 |
Recent papers with results on this dataset: