| Model | Paper | Score | Date |
| --- | --- | --- | --- |
| NOAH-ViTB/16 | Neural Prompt Search | 47.60 | 2022-06-09 |
| SwinTransformer | Swin Transformer: Hierarchical Vision Transformer… | 46.40 | 2021-03-25 |
| Bamboo-R50 | Bamboo: Building Mega-Scale Vision Dataset Contin… | 45.40 | 2022-03-15 |
| Adapter-ViTB/16 | Parameter-Efficient Transfer Learning for NLP | 44.50 | 2019-02-02 |
| CLIP-RN50 | Learning Transferable Visual Models From Natural … | 42.10 | 2021-02-26 |
| IG-1B | Billion-scale semi-supervised learning for image … | 40.40 | 2019-05-02 |
| BiT-M | Big Transfer (BiT): General Visual Representation… | 40.40 | 2019-12-24 |
| DINO | Emerging Properties in Self-Supervised Vision Tra… | 38.90 | 2021-04-29 |
| SwAV | Unsupervised Learning of Visual Features by Contr… | 38.30 | 2020-06-17 |
| ResNet-101 | Deep Residual Learning for Image Recognition | 37.40 | 2015-12-10 |
| MEAL-V2 | MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1… | 36.60 | 2020-09-17 |
| MoPro-V2 | MoPro: Webly Supervised Learning with Momentum Pr… | 36.10 | 2020-09-17 |
| EfficientNetB4 | EfficientNet: Rethinking Model Scaling for Convol… | 35.80 | 2019-05-28 |
| MoCoV2 | Momentum Contrast for Unsupervised Visual Represe… | 34.80 | 2019-11-13 |
| ResNet-50 | Deep Residual Learning for Image Recognition | 34.30 | 2015-12-10 |
| InceptionV4 | Inception-v4, Inception-ResNet and the Impact of … | 32.30 | 2016-02-23 |
| MLP-Mixer | MLP-Mixer: An all-MLP Architecture for Vision | 32.20 | 2021-05-04 |
| Manifold | Manifold Mixup: Better Representations by Interpo… | 31.60 | 2018-06-13 |
| CutMix | CutMix: Regularization Strategy to Train Strong C… | 31.10 | 2019-05-13 |
| ReLabel | Re-labeling ImageNet: from Single to Multi-Labels… | 30.80 | 2021-01-13 |
| MAE | Masked Autoencoders Are Scalable Vision Learners | 30.60 | 2021-11-11 |
| BeiT | BEiT: BERT Pre-Training of Image Transformers | 30.10 | 2021-06-15 |