
CelebA

Interpretability Techniques for Deep Learning Benchmark

Performance Over Time

Showing 7 results | Metric: Insertion AUC score
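Insertion AUC, introduced alongside RISE, evaluates a saliency map by starting from a heavily blurred copy of the image, re-inserting pixels in order of decreasing attributed importance, and recording the classifier's probability for the target class at each step; the area under the resulting curve is the score, so higher is better. Below is a minimal per-image sketch, assuming a PyTorch classifier `model`, a `(C, H, W)` image tensor, and an `(H, W)` saliency map; the blur baseline and step count are illustrative assumptions, not the benchmark's exact protocol.

```python
import torch
import torch.nn.functional as F

def insertion_auc(model, image, saliency, target, n_steps=100):
    """Insertion metric sketch: reveal pixels from a blurred baseline in
    order of decreasing saliency and integrate the target-class probability."""
    model.eval()
    _, h, w = image.shape
    # Blurred copy as the "empty" starting point (baseline choice is an assumption).
    baseline = F.avg_pool2d(image.unsqueeze(0), 11, stride=1, padding=5).squeeze(0)
    # Pixel order: most salient first.
    order = saliency.flatten().argsort(descending=True)
    step = max(1, order.numel() // n_steps)
    probs = []
    with torch.no_grad():
        for i in range(0, order.numel() + 1, step):
            # Reveal the i most salient pixels of the original image.
            mask = torch.zeros(h * w, device=image.device)
            mask[order[:i]] = 1.0
            mask = mask.view(1, h, w)
            composite = baseline * (1 - mask) + image * mask
            logits = model(composite.unsqueeze(0))
            probs.append(F.softmax(logits, dim=1)[0, target].item())
    # Trapezoidal area under the probability-vs-fraction-inserted curve.
    xs = torch.linspace(0, 1, len(probs))
    return torch.trapz(torch.tensor(probs), xs).item()
```

A dataset-level figure like those reported below is then typically the mean of this per-image AUC over the evaluation images.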

Top Performing Models

| Rank | Model | Paper | Insertion AUC score | Date | Code |
|------|-------|-------|---------------------|------|------|
| 1 | RISE | RISE: Randomized Input Sampling for Explanation of Black-box Models | 0.57 | 2018-06-19 | openvinotoolkit/datumaro, eclique/RISE, dbash/zerowaste |
| 2 | HSIC-Attribution | Making Sense of Dependence: Efficient Black-box Explanations Using Dependence Measure | 0.57 | 2022-06-13 | paulnovello/hsic-attribution-method |
| 3 | Kernel SHAP | A Unified Approach to Interpreting Model Predictions | 0.52 | 2017-05-22 | slundberg/shap, pytorch/captum, linkedin/fasttreeshap |
| 4 | LIME | "Why Should I Trust You?": Explaining the Predictions of Any Classifier | 0.52 | 2016-02-16 | marcotcr/lime, pytorch/captum, thomasp85/lime |
| 5 | Saliency | Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps | 0.46 | 2013-12-20 | pytorch/captum, MisaOgura/flashtorch, FrancescoSaverioZuppichini/A-journey-into-Convolutional-Neural-Network-visualization- |
| 6 | Grad-CAM | Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization | 0.37 | 2016-10-07 | jacobgil/pytorch-grad-cam, pytorch/captum, frgfm/torch-cam |
| 7 | Integrated Gradients | Axiomatic Attribution for Deep Networks | 0.36 | 2017-03-04 | shap/shap, pytorch/captum, cdpierse/transformers-interpret |
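Most rows above link pytorch/captum, which ships implementations of several of the listed methods (Saliency, Integrated Gradients, Kernel SHAP, and a layer-wise Grad-CAM). The sketch below shows how attribution maps might be produced for a single image; the resnet18 backbone, the `layer4` target layer for Grad-CAM, and the random input are placeholder assumptions, not the benchmark's actual setup.

```python
import torch
from torchvision.models import resnet18
from captum.attr import Saliency, IntegratedGradients, LayerGradCam

# Placeholder classifier; on CelebA this would be a face-attribute classifier.
model = resnet18(num_classes=2).eval()
inputs = torch.rand(1, 3, 224, 224, requires_grad=True)
target = 1  # class index whose score the attribution maps explain

# Gradient-based saliency (Simonyan et al., 2013).
sal_map = Saliency(model).attribute(inputs, target=target)

# Integrated Gradients (Sundararajan et al., 2017) with a zero baseline.
ig_map = IntegratedGradients(model).attribute(
    inputs, baselines=torch.zeros_like(inputs), target=target, n_steps=32
)

# Grad-CAM on the last convolutional block (layer choice is an assumption).
cam_map = LayerGradCam(model, model.layer4).attribute(inputs, target=target)

print(sal_map.shape, ig_map.shape, cam_map.shape)
```

Each resulting map can be fed to an insertion-style evaluation such as the sketch above (Grad-CAM maps are upsampled to the input resolution first).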

All Papers (7)