Rank | Model | Paper | ClipMatch@1 | Date | Code
---|---|---|---|---|---
1 | BLIP-2 OPT | Open-ended VQA benchmarking of Vision-Language models by exploiting Classification datasets and their semantic hierarchy | 35.49 | 2024-02-11 | 📦 lmb-freiburg/ovqa