Predicting BCF values with explanations
Gated-ViGAT. Code and data for our paper: N. Gkalelis, D. Daskalakis, V. Mezaris, "Gated-ViGAT: Efficient bottom-up event recognition and explanation using a new frame selection policy and gating mechanism", IEEE International Symposium on Multimedia (ISM), Naples, Italy, Dec. 2022.
Can we use explanations to improve hate speech detection? Our paper, accepted at ECAI 2023, explores this idea by introducing a new architecture for this purpose.
A method for conditional Shapley value estimation, built on the shapr package: https://github.com/NorskRegnesentral/shapr/tree/master
Data and code for the EMNLP 2023 paper 'LLMs – the Good, the Bad or the Indispensable?: A Use Case on Legal Statute Prediction and Legal Judgment Prediction on Indian Court Cases'
Official Implementation of TMLR's paper: "TabCBM: Concept-based Interpretable Neural Networks for Tabular Data"
Explanations gain more and more attention in the context of explainable systems. This repo contains research about different aspects of explanations and how to integrate them in an existing system.
Implementation of Concept-level Debugging of Part-Prototype Networks
Code for a paper in NLDB'23
End-to-end ML: Using ML to generate expected PPS, opportunity grade classification, and prescription analysis for basketball coaches.
In this repository you will find explainability methods for machine learning models.
Getting the Anchors Explainer to work in Different Settings
A Python library for creating TOPSIS rankings and visualizing the alternatives in WMSD for interpretation and designing improvement actions
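The TOPSIS ranking this library produces follows a standard recipe: vector-normalize each criterion, apply weights, find the ideal and anti-ideal points, and score each alternative by its relative closeness to the ideal. A minimal NumPy sketch of that recipe (function name and data are illustrative, not from the library above):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Score alternatives with classic TOPSIS.
    matrix: (n_alternatives, n_criteria)
    benefit: True where higher values are better, False for cost criteria."""
    M = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = (M / np.linalg.norm(M, axis=0)) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # Ideal point takes the best value per criterion; anti-ideal the worst.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    # Relative closeness in (0, 1); higher means closer to the ideal.
    return d_neg / (d_pos + d_neg)

# Three alternatives, two criteria: price (cost) and quality (benefit).
scores = topsis([[250, 6], [200, 7], [300, 8]],
                weights=[0.5, 0.5],
                benefit=[False, True])
```

Here alternative 1 (cheapest, mid quality) ranks highest; the per-criterion distances also indicate which criterion to improve, which is the kind of "improvement action" interpretation the library targets.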
Benchmark to Evaluate EXplainable AI
Analysis of Deep Learning models using SHAP values. Development of tools to interpret Deep Learning models with SHAP values, providing a better explanation of the factors the model relies on when making its predictions.
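The quantity behind SHAP-style explanations is the Shapley value: a feature's average marginal contribution over all coalitions of the other features. A brute-force NumPy sketch for a tiny model, using a fixed baseline in place of missing features (the marginal, feature-independence assumption; function name and toy model are illustrative, and this is not the shap library itself):

```python
import numpy as np
from itertools import combinations
from math import factorial

def exact_shapley(predict, x, baseline):
    """Exact Shapley values for one instance x against a baseline.
    Features outside a coalition are set to the baseline value."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight for a coalition of this size.
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z_with, z_without = baseline.copy(), baseline.copy()
                for j in S:
                    z_with[j] = x[j]
                    z_without[j] = x[j]
                z_with[i] = x[i]  # add feature i to the coalition
                phi[i] += w * (predict(z_with) - predict(z_without))
    return phi

# Toy linear model: Shapley values reduce to w * (x - baseline).
w = np.array([2.0, -1.0, 0.5])
predict = lambda z: float(w @ z)
phi = exact_shapley(predict, np.array([1.0, 2.0, 3.0]), np.zeros(3))
```

The exponential loop over coalitions is why practical tools (shap, shapr) rely on sampling or model-specific shortcuts; the shapr package above additionally estimates the *conditional* expectation rather than plugging in a fixed baseline.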
The code for the paper "The goal of explaining black boxes in EEG seizure prediction is not to explain models’ decisions", published in Epilepsia Open (https://doi.org/10.1002/epi4.12748). It concerns explainability methods on Machine Learning for EEG seizure prediction.
Training and inference code for text classification models
Explain your 🤗 transformers without effort! Plot the internal behavior of your model.
Code for the Nadaraya-Watson Head - an interpretable/explainable, nonparametric classification head which can be used with any neural network
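The Nadaraya-Watson estimator underlying such a head predicts a kernel-weighted average of training targets, so the normalized weights directly attribute the prediction to individual training examples. A minimal regression sketch with a Gaussian kernel (function name and data are illustrative; the paper's head operates on neural network features):

```python
import numpy as np

def nw_predict(x_query, X_train, Y_train, bandwidth=1.0):
    """Nadaraya-Watson estimate at x_query, plus the per-example weights
    that serve as the explanation of the prediction."""
    d2 = np.sum((X_train - x_query) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel similarities
    w = w / w.sum()                          # weights sum to 1
    return w @ Y_train, w

X = np.array([[0.0], [1.0], [10.0]])
Y = np.array([0.0, 1.0, 5.0])
pred, weights = nw_predict(np.array([0.5]), X, Y, bandwidth=0.5)
```

The query point sits midway between the first two training examples, so they share almost all the weight and the distant third example contributes essentially nothing; inspecting `weights` is the explanation.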