STL-10

Image Classification Benchmark

Performance Over Time

Showing 97 results | Metric: Percentage correct
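The metric reported throughout this leaderboard, percentage correct, is plain top-1 accuracy expressed as a percentage. A minimal sketch (the helper name `percentage_correct` is illustrative, not from any library):

```python
def percentage_correct(predictions, labels):
    """Top-1 accuracy as a percentage: share of predictions equal to labels."""
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must have the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)

# 4 of 5 predicted class indices match the ground truth.
print(percentage_correct([3, 1, 4, 1, 5], [3, 1, 4, 1, 6]))  # 80.0
```

Scores in the table below (e.g. 99.64 for µ2Net+) are this quantity computed on the STL-10 test split.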

Top Performing Models

| Rank | Model | Paper | Percentage correct | Date | Code |
|------|-------|-------|--------------------|------|------|
| 1 | µ2Net+ (ViT-L/16) | A Continual Development Methodology for Large-scale Multitask Dynamic ML Systems | 99.64 | 2022-09-15 | google-research/google-research |
| 2 | kNN-CLIP | Revisiting a kNN-based Image Classification System with High-capacity Storage | 99.60 | 2022-04-03 | - |
| 3 | Wide-ResNet-101 (Spinal FC) | SpinalNet: Deep Neural Network with Gradual Input | 98.66 | 2020-07-07 | dipuk0506/SpinalNet, dipuk0506/uq, Mechachleopteryx/SpinalNet |
| 4 | NAT-M4 | Neural Architecture Transfer | 97.90 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 5 | NAT-M3 | Neural Architecture Transfer | 97.80 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 6 | SEER (RegNet10B) | Vision Models Are More Robust And Fair When Pretrained On Uncurated Images Without Supervision | 97.30 | 2022-02-16 | facebookresearch/vissl |
| 7 | NAT-M2 | Neural Architecture Transfer | 97.20 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 8 | NAT-M1 | Neural Architecture Transfer | 96.70 | 2020-05-12 | human-analysis/neural-architecture-transfer, awesomelemon/encas |
| 9 | EnAET | EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations | 95.48 | 2019-11-21 | maple-research-lab/EnAET, wang3702/EnAET |
| 10 | VGG-19bn | SpinalNet: Deep Neural Network with Gradual Input | 95.44 | 2020-07-07 | dipuk0506/SpinalNet, dipuk0506/uq, Mechachleopteryx/SpinalNet |

All Papers (97)

Extended Batch Normalization (2020):
- ResNet18(BN, 4)
- ResNet18(GN, 4)
- ResNet18(BN, 128)
- ResNet18(EBN, 4)
- ResNet18(EBN, 128)
- ResNet18(GN, 128)

No more meta-parameter tuning in unsupervised sparse feature learning (2014)