
Oxford-IIIT Pets

Fine-Grained Image Classification Benchmark

Performance Over Time

Showing 19 results | Metric: Accuracy (%)
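For reference, the sketch below shows one way a top-1 accuracy figure on the Oxford-IIIT Pets test split can be computed with torchvision. The ResNet-50 backbone, the preprocessing, and the availability of fine-tuned weights are illustrative assumptions, not the protocol of any listed paper; note that some papers report mean per-class accuracy rather than overall accuracy.

```python
# Minimal sketch of computing top-1 accuracy on the Oxford-IIIT Pets test
# split. Model, preprocessing, and checkpoint are placeholder assumptions.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# 37 breed categories; split="test" is the standard evaluation split.
test_set = datasets.OxfordIIITPet(root="data", split="test",
                                  target_types="category",
                                  transform=preprocess, download=True)
loader = DataLoader(test_set, batch_size=64, num_workers=4)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = torch.nn.Linear(model.fc.in_features, 37)
# Load your fine-tuned checkpoint here; with only ImageNet weights the
# new 37-way head is untrained and the number below is meaningless.
model.eval()

correct, total = 0, 0
with torch.no_grad():
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()

print(f"Top-1 accuracy: {100.0 * correct / total:.2f}%")
```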

Top Performing Models

| Rank | Model | Paper | Accuracy (%) | Date | Code |
|------|-------|-------|--------------|------|------|
| 1 | EffNet-L2 (SAM) | Sharpness-Aware Minimization for Efficiently Improving Generalization | 97.10 | 2020-10-03 | davda54/sam, google-research/sam, moskomule/sam.pytorch |
| 2 | BiT-L (ResNet) | Big Transfer (BiT): General Visual Representation Learning | 96.62 | 2019-12-24 | google-research/big_transfer, sayakpaul/FunMatch-Distillation, bethgelab/InDomainGeneralizationBenchmark |
| 3 | µ2Net+ (ViT-L/16) | A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems | 95.50 | 2022-09-15 | google-research/google-research |
| 4 | µ2Net (ViT-L/16) | An Evolutionary Approach to Dynamic Introduction of Tasks in Large-scale Multitask Learning Systems | 95.30 | 2022-05-25 | google-research/google-research |
| 5 | BiT-M (ResNet) | Big Transfer (BiT): General Visual Representation Learning | 94.47 | 2019-12-24 | google-research/big_transfer, sayakpaul/FunMatch-Distillation, bethgelab/InDomainGeneralizationBenchmark |
| 6 | NAT-M4 | Neural Architecture Transfer | 94.30 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 7 | NAT-M3 | Neural Architecture Transfer | 94.10 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 8 | NAT-M2 | Neural Architecture Transfer | 93.50 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 9 | ResNet-152-SAM | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | 93.30 | 2021-06-03 | google-research/vision_transformer, ttt496/VisionTransformer |
| 10 | ViT-B/16-SAM | When Vision Transformers Outperform ResNets without Pre-training or Strong Data Augmentations | 93.10 | 2021-06-03 | google-research/vision_transformer, ttt496/VisionTransformer |
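The top-ranked entry and the ResNet-152-SAM / ViT-B/16-SAM rows all rely on Sharpness-Aware Minimization (SAM). Below is a minimal, self-contained sketch of the two-step SAM update from the paper, assuming a generic PyTorch model, loss function, and base optimizer; it illustrates the idea and is not the davda54/sam or google-research/sam implementation.

```python
# Minimal sketch of one SAM training step: perturb the weights toward the
# approximate worst-case point in a rho-ball, then update with the gradient
# taken at that perturbed point. Model, loss_fn, and rho are placeholders.
import torch

def sam_step(model, loss_fn, inputs, targets, base_optimizer, rho=0.05):
    # 1) Gradient g = dL/dw at the current weights w.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # 2) Perturb to w + eps, with eps = rho * g / ||g|| (global grad norm).
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    eps_list = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps_list.append(None)
                continue
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)
            eps_list.append(eps)
    model.zero_grad()

    # 3) Gradient at the perturbed point w + eps defines the SAM update.
    loss_fn(model(inputs), targets).backward()

    # 4) Restore the original weights, then apply the base optimizer
    #    (e.g. SGD) using the sharpness-aware gradient.
    with torch.no_grad():
        for p, eps in zip(model.parameters(), eps_list):
            if eps is not None:
                p.sub_(eps)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

Each SAM step costs two forward/backward passes, which is the main overhead relative to the underlying optimizer; the perturbation radius rho is the key added hyperparameter.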

All Papers (19)