
ImageNet-R

Prompt Engineering Benchmark

Performance Over Time

9 results | Metric: Top-1 accuracy %
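All methods on this leaderboard start from CLIP-style zero-shot classification, where each class is turned into one or more text prompts whose embeddings serve as classifier weights; prompt-learning methods replace or augment these hand-written templates. A minimal sketch of the prompt-construction step (the templates here are illustrative, not the benchmark's actual set):

```python
def build_prompts(classnames, templates=("a photo of a {}.", "art of a {}.")):
    """Build one prompt per (template, class) pair.

    In CLIP-style zero-shot evaluation, the text embeddings of each class's
    prompts are averaged to form that class's classifier weight vector.
    The templates above are illustrative placeholders.
    """
    return {name: [t.format(name) for t in templates] for name in classnames}

prompts = build_prompts(["goldfish", "hammerhead"])
print(prompts["goldfish"])  # → ['a photo of a goldfish.', 'art of a goldfish.']
```

The text embeddings themselves would come from a pretrained model such as those in openai/CLIP or mlfoundations/open_clip (both listed below).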

Top Performing Models

| Rank | Model | Paper | Top-1 accuracy % | Date | Code |
|---|---|---|---|---|---|
| 1 | POMP | Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition | 77.90 | 2023-04-10 | amazon-science/prompt-pretraining |
| 2 | PromptSRC | Self-regulating Prompts: Foundational Model Adaptation without Forgetting | 77.80 | 2023-07-13 | muzairkhattak/promptsrc, asif-hanif/vafa |
| 3 | MMRL | MMRL: Multi-Modal Representation Learning for Vision-Language Models | 77.53 | 2025-03-11 | yunncheng/MMRL |
| 4 | HPT++ | HPT++: Hierarchically Prompting Vision-Language Models with Multi-Granularity Knowledge Generation and Improved Structure Modeling | 77.52 | 2024-08-27 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 5 | CoPrompt | Consistency-guided Prompt Learning for Vision-Language Models | 77.51 | 2023-06-01 | shuvenduroy/coprompt, ShuvenduRoy/FER_TL_PipelineTraining |
| 6 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 77.38 | 2023-12-11 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 7 | MaPLe | MaPLe: Multi-modal Prompt Learning | 76.98 | 2022-10-06 | muzairkhattak/multimodal-prompt-learning, htyao89/kgcoop, gyukai/i2vc |
| 8 | CoCoOp | Conditional Prompt Learning for Vision-Language Models | 76.18 | 2022-03-10 | kaiyangzhou/coop, muzairkhattak/multimodal-prompt-learning, azshue/TPT |
| 9 | CLIP | Learning Transferable Visual Models From Natural Language Supervision | 73.96 | 2021-02-26 | openai/CLIP, mlfoundations/open_clip, towhee-io/towhee |
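The metric reported above, Top-1 accuracy %, is the percentage of ImageNet-R images whose highest-scoring class matches the ground-truth label. A minimal sketch of the computation, using toy scores rather than real model outputs:

```python
import numpy as np

def top1_accuracy(logits, labels):
    """Top-1 accuracy (%): fraction of rows whose argmax equals the label."""
    preds = logits.argmax(axis=1)
    return 100.0 * (preds == labels).mean()

# Toy example: 4 images scored against 3 classes.
logits = np.array([
    [0.9, 0.1, 0.0],  # predicts class 0
    [0.2, 0.7, 0.1],  # predicts class 1
    [0.3, 0.3, 0.4],  # predicts class 2
    [0.6, 0.2, 0.2],  # predicts class 0
])
labels = np.array([0, 1, 2, 1])  # last prediction is wrong

print(top1_accuracy(logits, labels))  # → 75.0
```

In the real benchmark, each row of `logits` would be the image embedding's similarity to every class's text-prompt embedding.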
