
ImageNet-R

Domain Generalization Benchmark

Performance Over Time

Showing 39 results | Metric: Top-1 Error Rate (%)

Top Performing Models

| Rank | Model | Paper | Top-1 Error Rate | Date | Code |
|------|-------|-------|------------------|------|------|
| 1 | Mixer-B/8-SAM | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | 76.50 | 2021-06-03 | google-research/vision_transformer, ttt496/VisionTransformer |
| 2 | ViT-B/16-SAM | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | 73.60 | 2021-06-03 | google-research/vision_transformer, ttt496/VisionTransformer |
| 3 | ResNet-152x2-SAM | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | 71.90 | 2021-06-03 | google-research/vision_transformer, ttt496/VisionTransformer |
| 4 | ResNet-50 | Deep Residual Learning for Image Recognition | 63.90 | 2015-12-10 | tensorflow/models |
| 5 | AugMix (ResNet-50) | AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty | 58.90 | 2019-12-05 | rwightman/pytorch-image-models, pytorch/vision, keras-team/keras-cv |
| 6 | Stylized ImageNet (ResNet-50) | ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness | 58.50 | 2018-11-29 | rgeirhos/texture-vs-shape, rgeirhos/Stylized-ImageNet, LiYingwei/ShapeTextureDebiasedTraining |
| 7 | DeepAugment (ResNet-50) | The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization | 57.80 | 2020-06-29 | hendrycks/imagenet-r |
| 8 | PRIME (ResNet-50) | PRIME: A few primitives can boost robustness to common corruptions | 57.10 | 2021-12-27 | amodas/PRIME-augmentations |
| 9 | RVT-Ti* | Towards Robust Vision Transformer | 56.10 | 2021-05-17 | alibaba/easyrobust, vtddggg/Robust-Vision-Transformer |
| 10 | PRIME with JSD (ResNet-50) | PRIME: A few primitives can boost robustness to common corruptions | 53.70 | 2021-12-27 | amodas/PRIME-augmentations |
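The metric reported in the table, Top-1 Error Rate, is the percentage of test images whose single highest-scoring predicted class does not match the ground-truth label. As a minimal sketch (toy scores and labels, not real benchmark data), it can be computed as:

```python
def top1_error_rate(logits, labels):
    """Percentage of samples whose highest-scoring class
    differs from the ground-truth label (lower is better)."""
    wrong = sum(
        1
        for scores, y in zip(logits, labels)
        if max(range(len(scores)), key=scores.__getitem__) != y
    )
    return 100.0 * wrong / len(labels)

# Toy example: 4 samples, 3 classes.
logits = [
    [2.0, 0.1, 0.3],  # argmax -> class 0
    [0.2, 1.5, 0.1],  # argmax -> class 1
    [0.1, 0.2, 3.0],  # argmax -> class 2
    [1.0, 0.9, 0.8],  # argmax -> class 0
]
labels = [0, 1, 0, 2]  # the last two samples are misclassified
print(top1_error_rate(logits, labels))  # → 50.0
```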

All Papers (39)

Pyramid Adversarial Training Improves ViT Performance (2021)
Model: Pyramid Adversarial Training Improves ViT (Im21k)

Context-Aware Robust Fine-Tuning (2022)
Model: CAR-FT (CLIP, ViT-L/14@336px)