
ImageNet-A

Prompt Engineering Benchmark

Performance Over Time

[Chart: top-1 accuracy over time, 9 results shown. Metric: top-1 accuracy (%).]

Top Performing Models

| Rank | Model | Paper | Top-1 accuracy (%) | Date | Code |
|---|---|---|---|---|---|
| 1 | POMP | Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition | 51.60 | 2023-04-10 | amazon-science/prompt-pretraining |
| 2 | MMRL | MMRL: Multi-Modal Representation Learning for Vision-Language Models | 51.20 | 2025-03-11 | yunncheng/MMRL |
| 3 | HPT++ | HPT++: Hierarchically Prompting Vision-Language Models with Multi-Granularity Knowledge Generation and Improved Structure Modeling | 51.18 | 2024-08-27 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 4 | MaPLe | MaPLe: Multi-modal Prompt Learning | 50.90 | 2022-10-06 | muzairkhattak/multimodal-prompt-learning, htyao89/kgcoop, gyukai/i2vc |
| 5 | PromptSRC | Self-regulating Prompts: Foundational Model Adaptation without Forgetting | 50.90 | 2023-07-13 | muzairkhattak/promptsrc, asif-hanif/vafa |
| 6 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 50.85 | 2023-12-11 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 7 | CoCoOp | Conditional Prompt Learning for Vision-Language Models | 50.63 | 2022-03-10 | kaiyangzhou/coop, muzairkhattak/multimodal-prompt-learning, azshue/TPT |
| 8 | CoPrompt | Consistency-guided Prompt Learning for Vision-Language Models | 50.50 | 2023-06-01 | shuvenduroy/coprompt, ShuvenduRoy/FER_TL_PipelineTraining |
| 9 | CLIP | Learning Transferable Visual Models From Natural Language Supervision | 47.77 | 2021-02-26 | openai/CLIP, mlfoundations/open_clip, towhee-io/towhee |

All Papers (9)
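For context on what "prompt engineering" means for the zero-shot CLIP baseline in this table: class scores come from cosine similarity between an image embedding and text embeddings of prompt templates (e.g. "a photo of a {class}."), often averaged over several templates. Below is a minimal, framework-free sketch of that ensembling step; the function name is hypothetical and the embeddings would in practice come from a CLIP image/text encoder, not be passed in as raw arrays.

```python
import numpy as np

def prompt_ensemble_classify(image_feat, class_text_feats):
    """Zero-shot classification with prompt ensembling (CLIP-style sketch).

    image_feat: (d,) image embedding.
    class_text_feats: list of (n_prompts, d) arrays, one per class, holding
        embeddings of several prompt templates for that class.
    Returns the index of the highest-scoring class.
    """
    class_weights = []
    for feats in class_text_feats:
        # L2-normalize each template embedding, then average over templates.
        feats = feats / np.linalg.norm(feats, axis=-1, keepdims=True)
        w = feats.mean(axis=0)
        # Re-normalize the ensembled class weight vector.
        class_weights.append(w / np.linalg.norm(w))
    W = np.stack(class_weights)                      # (n_classes, d)
    img = image_feat / np.linalg.norm(image_feat)    # normalize image feature
    scores = W @ img                                 # cosine similarities
    return int(np.argmax(scores))
```

Top-1 accuracy, the metric reported above, is then just the fraction of test images for which this predicted index matches the ground-truth label. The prompt-learning methods in the table (CoOp/CoCoOp, MaPLe, etc.) replace the hand-written templates with learned prompt vectors but score classes the same way.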