Interpretable machine learning based on Shapley values
Updated Jul 20, 2021 - Python
A repository to study the interpretability of time-series networks (LSTM)
Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a POC, the initial alpha release of Deep Classiflie generates/analyzes a model that continuously classifies a single individual's statements (Donald Trump) using a single ground truth labeling source (The Washington Post). For statements the model d…
Accompanying code for the paper "Discrete representations in neural models of spoken language" (https://aclanthology.org/2021.blackboxnlp-1.11)
Demos for visualizing how rule-based models work.
Training and exploration of linear probes into Othello-GPT by Li et al. (2022)
Demonstration of InterpretME, an interpretable machine learning pipeline
📚 Curated list for Causality and AI
A module to obtain diverse real-world-grounded features for sentences for large-scale benchmarking
Optimizing Mind static website v1
A CT-scan of your CNN
A Python library to agnostically explain multi-label black-box classifiers (tabular data)
The purpose of this repository is to demonstrate how to use NLP explanation/interpretability tools.
Is the Temporal Attention Bottleneck for VAEs informative? (ICML 2023)
Explain model and feature dependencies by decomposition of SHAP values
An investigation into sequential learning of tasks using feed-forward networks built with Tensorflow
A Python package with explanation methods for extraction of feature interactions from predictive models
Interpretable Error Function learning
Aims to help emergency responders during crises (ASONAM '20)
Techniques for interpreting ConvNets