Interpretability for sequence generation models 🐛 🔍
Explainable AI in Julia.
Code for our ICML '19 paper: Neural Network Attributions: A Causal Perspective.
On Explaining Your Explanations of BERT: An Empirical Study with Sequence Classification
Attribution (or visual explanation) methods for understanding video classification networks. Demo code for the WACV 2021 paper: Towards Visually Explaining Video Understanding Networks with Perturbation.
Code for the CVPR 2022 paper: Towards Better Understanding Attribution Methods.
Surrogate quantitative interpretability for deep nets.
Metrics for evaluating interpretability methods (a toy deletion-curve sketch appears after this list).
Hacking SetFit so that it works with integrated gradients (a minimal integrated-gradients sketch appears after this list).
SQUID repository for manuscript analysis.
Code for our AISTATS '22 paper: Improving Attribution Methods by Learning Submodular Functions.
Source code for the journal paper: Spatio-Temporal Perturbations for Video Attribution (TCSVT 2021).
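Several of the entries above (the SetFit hack, the causal and submodular attribution papers) build on gradient-based attribution. As a rough illustration of the core idea, not code from any repository listed here, a minimal integrated-gradients sketch in plain PyTorch might look like the following; the toy model, zero baseline, and step count are all assumptions made for the example.

```python
import torch
import torch.nn as nn

def integrated_gradients(model, x, baseline, target, steps=50):
    """Approximate integrated gradients of the `target` logit w.r.t. input `x`."""
    # Interpolate between the baseline and the input along a straight-line path.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)      # (steps, n_features)
    path.requires_grad_(True)
    logits = model(path)                           # (steps, n_classes)
    # Summing the target logit over the path lets one backward pass collect all gradients.
    logits[:, target].sum().backward()
    avg_grad = path.grad.mean(dim=0)               # Riemann approximation of the path integral
    return (x - baseline).squeeze(0) * avg_grad    # scale by input-baseline difference


if __name__ == "__main__":
    model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))  # toy classifier
    x = torch.randn(1, 4)
    baseline = torch.zeros(1, 4)
    print(integrated_gradients(model, x, baseline, target=0))
```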
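The evaluation-oriented entries (e.g. "Metrics for evaluating interpretability methods") typically score attributions by perturbing the input. A toy deletion-curve sketch, again a hypothetical helper rather than any listed repository's API, could look like this: features are zeroed in order of decreasing attribution and the model's score is recorded at each step, with a faster drop suggesting a more faithful attribution.

```python
import numpy as np

def deletion_curve(predict, x, attributions, baseline_value=0.0, steps=10):
    """Model scores as the most-attributed features are progressively removed."""
    order = np.argsort(-attributions)            # most important features first
    x_cur = x.copy()
    scores = [predict(x_cur)]
    chunk = max(1, len(order) // steps)
    for start in range(0, len(order), chunk):
        x_cur[order[start:start + chunk]] = baseline_value
        scores.append(predict(x_cur))
    return np.array(scores)


if __name__ == "__main__":
    w = np.array([0.5, -1.0, 2.0, 0.1])
    predict = lambda v: float(w @ v)             # toy linear "model"
    x = np.ones(4)
    attr = np.abs(w * x)                         # gradient*input for the linear toy model
    print(deletion_curve(predict, x, attr))      # scores fall as top features are removed
```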