
ImageNet-S

Prompt Engineering Benchmark

Performance Over Time

Chart of Top-1 accuracy (%) over time across the 9 results listed below.

Top Performing Models

| Rank | Model | Paper | Top-1 accuracy (%) | Date | Code |
|---|---|---|---|---|---|
| 1 | POMP | Prompt Pre-Training with Twenty-Thousand Classes for Open-Vocabulary Visual Recognition | 49.80 | 2023-04-10 | amazon-science/prompt-pretraining |
| 2 | PromptSRC | Self-regulating Prompts: Foundational Model Adaptation without Forgetting | 49.55 | 2023-07-13 | muzairkhattak/promptsrc, asif-hanif/vafa |
| 3 | CoPrompt | Consistency-guided Prompt Learning for Vision-Language Models | 49.43 | 2023-06-01 | shuvenduroy/coprompt, ShuvenduRoy/FER_TL_PipelineTraining |
| 4 | HPT | Learning Hierarchical Prompt with Structured Linguistic Knowledge for Vision-Language Models | 49.36 | 2023-12-11 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 5 | HPT++ | HPT++: Hierarchically Prompting Vision-Language Models with Multi-Granularity Knowledge Generation and Improved Structure Modeling | 49.28 | 2024-08-27 | vill-lab/2024-aaai-hpt, ThomasWangY/2024-AAAI-HPT |
| 6 | MMRL | MMRL: Multi-Modal Representation Learning for Vision-Language Models | 49.17 | 2025-03-11 | yunncheng/MMRL |
| 7 | MaPLe | MaPLe: Multi-modal Prompt Learning | 49.15 | 2022-10-06 | muzairkhattak/multimodal-prompt-learning, htyao89/kgcoop, gyukai/i2vc |
| 8 | CoCoOp | Conditional Prompt Learning for Vision-Language Models | 48.75 | 2022-03-10 | kaiyangzhou/coop, muzairkhattak/multimodal-prompt-learning, azshue/TPT |
| 9 | CLIP | Learning Transferable Visual Models From Natural Language Supervision | 46.15 | 2021-02-26 | openai/CLIP, mlfoundations/open_clip, towhee-io/towhee |
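
For context on what the metric measures: each entry prompts a CLIP-style vision-language model with class-name text and reports the top-1 accuracy of the resulting zero-shot classifier. Below is a minimal sketch of that evaluation loop using the openai/CLIP package listed in the table; the model variant, prompt template, and class list are illustrative assumptions, not the benchmark's exact protocol.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/16", device=device)

# Illustrative class names; the benchmark uses the full ImageNet label set.
class_names = ["goldfish", "great white shark", "hammerhead"]

# A hand-written prompt template -- the baseline form of prompt engineering.
prompts = [f"a photo of a {name}." for name in class_names]
text_tokens = clip.tokenize(prompts).to(device)

with torch.no_grad():
    text_features = model.encode_text(text_tokens)
    text_features /= text_features.norm(dim=-1, keepdim=True)

def classify(image_path: str) -> str:
    """Return the top-1 predicted class name for a single image."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        # Cosine similarity between the image and each prompted class name.
        logits = 100.0 * image_features @ text_features.T
    return class_names[logits.argmax(dim=-1).item()]
```

The learned-prompt methods ranked above (e.g. CoCoOp, MaPLe, PromptSRC) replace the hand-written template with trainable prompt tokens while keeping the same zero-shot scoring scheme.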

All Papers (9)