Joint Attention in Autonomous Driving
JAAD is a dataset for studying joint attention in the context of autonomous driving. The focus is on pedestrian and driver behaviors at the point of crossing and the factors that influence them. To this end, the JAAD dataset provides a richly annotated collection of 346 short video clips (5-10 seconds long) extracted from over 240 hours of driving footage. These videos, filmed in several locations in North America and Eastern Europe, represent scenes typical of everyday urban driving in various weather conditions.
Bounding boxes with occlusion tags are provided for all pedestrians, making this dataset suitable for pedestrian detection.
Behavior annotations specify behaviors for pedestrians that interact with or require the attention of the driver. Each video carries several tags (weather, location, etc.) and timestamped behavior labels from a fixed list (e.g. stopped, walking, looking). In addition, a list of demographic attributes is provided for each pedestrian (e.g. age, gender, direction of motion), as well as a list of visible traffic scene elements (e.g. stop sign, traffic signal) for each frame.
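To make the annotation structure described above concrete, here is a minimal sketch of how these annotations could be organized in code. The class and field names are hypothetical, chosen only to mirror the description in the text; they do not correspond to the official JAAD annotation files or tooling.

```python
# Illustrative sketch only: names are hypothetical and do not reflect
# the official JAAD annotation format or loader.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class PedestrianAnnotation:
    """Per-pedestrian annotations: boxes, occlusion, behaviors, attributes."""
    ped_id: str
    # frame index -> (x1, y1, x2, y2) bounding box
    boxes: Dict[int, Tuple[float, float, float, float]] = field(default_factory=dict)
    # frame index -> occlusion tag for that box
    occluded: Dict[int, bool] = field(default_factory=dict)
    # frame index -> behavior labels from a fixed list (e.g. "walking", "looking")
    behaviors: Dict[int, List[str]] = field(default_factory=dict)
    # demographic attributes (e.g. age group, gender, direction of motion)
    attributes: Dict[str, str] = field(default_factory=dict)


@dataclass
class VideoAnnotation:
    """Per-video annotations: clip-level tags plus per-frame scene elements."""
    video_id: str
    # clip-level tags such as weather and location
    tags: Dict[str, str] = field(default_factory=dict)
    # frame index -> visible traffic scene elements (e.g. "stop_sign", "traffic_signal")
    traffic_elements: Dict[int, List[str]] = field(default_factory=dict)
    pedestrians: List[PedestrianAnnotation] = field(default_factory=list)
```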
Paper: Are They Going to Cross? A Benchmark Dataset and Baseline for Pedestrian Crosswalk Behavior
Source: JAAD
Image Source: Are They Going to Cross? A Benchmark Dataset and Baseline for Pedestrian Crosswalk Behavior
Variants: JAAD
This dataset is used in 1 benchmark:
| Task | Model | Paper | Date |
|---|---|---|---|
| Trajectory Prediction | SGNet | Stepwise Goal-Driven Networks for Trajectory … | 2021-03-25 |
| Trajectory Prediction | BiTrap-D | BiTraP: Bi-directional Pedestrian Trajectory Prediction … | 2020-07-29 |
| Trajectory Prediction | FOL-X | Unsupervised Traffic Accident Detection in … | 2019-03-02 |
| Trajectory Prediction | Bayesian-LSTM | Long-Term On-Board Prediction of People … | 2017-11-24 |