
ObjectNet

Zero-Shot Transfer Image Classification Benchmark

ObjectNet is a test-only dataset of household objects photographed in unusual poses, backgrounds, and viewpoints, designed to stress models trained on ImageNet-like data. The results below are zero-shot: models classify ObjectNet images without any training on the benchmark itself.

Performance Over Time

Metric: Accuracy (Private) | 9 results reported.

Top Performing Models

| Rank | Model | Paper | Accuracy (Private) | Date | Code |
|------|-------|-------|--------------------|------|------|
| 1 | LiT-22B | Scaling Vision Transformers to 22 Billion Parameters | 87.60 | 2023-02-10 | lucidrains/flash-cosine-sim-attention |
| 2 | LiT ViT-e | PaLI: A Jointly-Scaled Multilingual Language-Image Model | 84.90 | 2022-09-14 | google-research/big_vision |
| 3 | CoCa | CoCa: Contrastive Captioners are Image-Text Foundation Models | 82.70 | 2022-05-04 | mlfoundations/open_clip, facebookresearch/multimodal, lucidrains/CoCa-pytorch |
| 4 | EVA-CLIP-18B | EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters | 82.20 | 2024-02-06 | baaivision/EVA |
| 5 | LiT-tuning | LiT: Zero-Shot Transfer with Locked-image text Tuning | 81.10 | 2021-11-15 | mlfoundations/open_clip, google-research/vision_transformer, google-research/big_vision, laion-ai/clip_benchmark, eify/clip_benchmark |
| 6 | InternVL-C | InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks | 80.60 | 2023-12-21 | opengvlab/internvl, opengvlab/internvl-mmdetseg |
| 7 | EVA-CLIP-E/14+ | EVA-CLIP: Improved Training Techniques for CLIP at Scale | 79.60 | 2023-03-27 | baaivision/eva, PaddlePaddle/PaddleMIX, Yui010206/CREMA, jaehong31/raccoon |
| 8 | CLIP | Learning Transferable Visual Models From Natural Language Supervision | 72.30 | 2021-02-26 | openai/CLIP, mlfoundations/open_clip, towhee-io/towhee |
| 9 | PaLI | PaLI: A Jointly-Scaled Multilingual Language-Image Model | 42.62 | 2022-09-14 | google-research/big_vision |
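The models above share the same zero-shot protocol: each class name is turned into a text prompt (e.g. "a photo of a {class}"), embedded by a text encoder, and an image is assigned to the class whose text embedding has the highest cosine similarity with the image embedding. A minimal sketch of that decision rule, with random vectors standing in for real encoder outputs (in practice the embeddings would come from something like open_clip's `encode_image` / `encode_text`):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    """Return the index of the class whose L2-normalized text
    embedding has the highest cosine similarity to the image embedding."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img  # one cosine similarity per class
    return int(np.argmax(sims))

# Toy stand-ins for encoder outputs; a real pipeline would encode
# ObjectNet images and class-name prompts with a CLIP-style model.
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 8))                   # 3 classes, 8-dim
image_emb = text_embs[1] + 0.01 * rng.normal(size=8)  # near class 1

print(zero_shot_classify(image_emb, text_embs))  # prints 1
```

The argmax over cosine similarities is the entire inference step; no ObjectNet labels are used for training, which is what makes the evaluation zero-shot.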

All Papers (9)