ImageNet-R

Zero-Shot Transfer Image Classification Benchmark

Performance Over Time

Showing 11 results | Metric: Accuracy

Top Performing Models

| Rank | Model | Paper | Accuracy | Date | Code |
|------|-------|-------|----------|------|------|
| 1 | CoCa | CoCa: Contrastive Captioners are Image-Text Foundation Models | 96.50 | 2022-05-04 | mlfoundations/open_clip, facebookresearch/multimodal, lucidrains/CoCa-pytorch |
| 2 | LiT ViT-e | PaLI: A Jointly-Scaled Multilingual Language-Image Model | 96.10 | 2022-09-14 | google-research/big_vision |
| 3 | LiT-22B | Scaling Vision Transformers to 22 Billion Parameters | 96.00 | 2023-02-10 | lucidrains/flash-cosine-sim-attention |
| 4 | BASIC | Combined Scaling for Zero-shot Transfer Learning | 95.70 | 2021-11-19 | - |
| 5 | EVA-CLIP-18B | EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | 95.70 | 2024-02-06 | baaivision/EVA |
| 6 | EVA-CLIP-E/14+ | EVA-CLIP: Improved Training Techniques for CLIP at Scale | 94.50 | 2023-03-27 | baaivision/eva, PaddlePaddle/PaddleMIX, Yui010206/CREMA, jaehong31/raccoon |
| 7 | LiT-tuning | LiT: Zero-Shot Transfer with Locked-image text Tuning | 93.90 | 2021-11-15 | mlfoundations/open_clip, google-research/vision_transformer, google-research/big_vision, laion-ai/clip_benchmark, eify/clip_benchmark |
| 8 | ALIGN | Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision | 92.20 | 2021-02-11 | facebookresearch/metaclip, kakaobrain/coyo-dataset, MicPie/clasp, willard-yuan/video-text-retrieval-papers, pwc-1/Paper-8 |
| 9 | CLIP | Learning Transferable Visual Models From Natural Language Supervision | 88.90 | 2021-02-26 | openai/CLIP, mlfoundations/open_clip, towhee-io/towhee |
| 10 | AltCLIP | AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities | 87.20 | 2022-11-12 | flagai-open/flagai, pwc-1/Paper-8 |
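All models in this table follow the same zero-shot protocol: a text encoder embeds a prompt for each class name (e.g. "a photo of a {class}"), the image encoder embeds the test image, and the prediction is the class whose prompt embedding is most cosine-similar to the image embedding. The sketch below shows only this scoring step with toy NumPy vectors; the encoders themselves are stubbed out, and in a real evaluation the embeddings would come from one of the checkpoints linked in the Code column (e.g. mlfoundations/open_clip).

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class whose prompt embedding is most
    cosine-similar to the image embedding."""
    # L2-normalize both sides so dot products equal cosine similarities.
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return int(np.argmax(txt @ img))

# Toy example: 3 class prompts in a 4-dimensional embedding space.
# (Hypothetical vectors for illustration, not real encoder outputs.)
prompts = np.array([
    [1.0, 0.0, 0.0, 0.0],   # "a photo of a goldfish"
    [0.0, 1.0, 0.0, 0.0],   # "a photo of a hammer"
    [0.0, 0.0, 1.0, 0.0],   # "a photo of a violin"
])
image = np.array([0.1, 0.9, 0.05, 0.0])  # closest to the "hammer" prompt
print(zero_shot_classify(image, prompts))  # -> 1
```

ImageNet-R accuracy is then just the fraction of its renditions (art, cartoons, sketches, etc. of 200 ImageNet classes) for which this argmax matches the ground-truth label, with no training on ImageNet-R itself.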

All Papers (11)