Strategies to interpret deep learning & machine learning (black-box) models, helping us understand how a model makes its predictions and decisions.

Repository for "Practical Case Studies of ExplainableAI(XAi)"

Explainable AI (XAI) is an area of artificial intelligence that focuses on creating models and algorithms whose behavior is transparent and understandable to human users. This matters because AI models are being deployed in increasingly critical applications, where their decisions can have significant impacts on individuals and on society as a whole. The goal of XAI is to make these systems transparent, accountable, and auditable, so that their decisions can be explained in a way end-users can understand.

Comparison of Explainable AI Methods

LIME (Local Interpretable Model-agnostic Explanations): LIME explains individual predictions by perturbing the input, querying the black-box model, and fitting a simple linear model to the results; the weights of that local model serve as the explanation in terms of the original features. LIME can be computationally expensive for large datasets, and its local approximation may be unfaithful for highly complex models. https://github.com/marcotcr/lime
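As a concrete illustration, here is a minimal sketch of explaining a single tabular prediction with the `lime` package; the iris dataset and random-forest model are illustrative assumptions, not something specified in this repo.

```python
# Minimal LIME sketch: explain one prediction of a black-box classifier.
# Dataset and model choices are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)

# LIME perturbs the instance, queries the model, and fits a local linear
# model; its weights are the explanation in terms of the original features.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # [(feature condition, weight), ...]
```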

SHAP (SHapley Additive exPlanations): SHAP provides a unified approach to explain the output of any machine learning model. It is based on the concept of Shapley values from cooperative game theory and provides a fair way to distribute the contribution of each feature to the prediction. SHAP can be computationally expensive for large datasets and may not scale well to large numbers of features. https://github.com/slundberg/shap
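A minimal SHAP sketch follows; `TreeExplainer` is the fast, exact path for tree ensembles (with `shap.KernelExplainer` as the slower, fully model-agnostic fallback), and the diabetes regression dataset is an illustrative assumption.

```python
# Minimal SHAP sketch: Shapley-value attributions for a tree ensemble.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree models.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)  # shape: (n_samples, n_features)

# Global importance: mean absolute Shapley value per feature.
importance = np.abs(shap_values).mean(axis=0)
for name, value in sorted(zip(data.feature_names, importance),
                          key=lambda t: -t[1]):
    print(f"{name}: {value:.1f}")
```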

Anchors: Anchors is a model-agnostic explanation method that describes a prediction with a high-precision if-then rule (an "anchor"): as long as the rule holds, the model's prediction is very unlikely to change. Anchors may not yield a complete explanation for every model and may not work well for highly non-linear decision boundaries. https://github.com/PAIR-code/Anchors
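The text names no specific package; as one assumption, the `alibi` library ships an Anchors implementation for tabular data. A rough sketch:

```python
# Rough Anchors sketch using alibi's AnchorTabular (the library choice is an
# assumption); an anchor is an if-then rule that locally "fixes" the prediction.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)  # learns feature quantiles used to build candidate rules

explanation = explainer.explain(data.data[0], threshold=0.95)
print(explanation.anchor)     # the rule, e.g. ['petal width (cm) <= 0.80']
print(explanation.precision)  # share of perturbed points keeping the prediction
```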

Surrogate models: Surrogate models are a class of explainable AI methods that approximate the original complex model with a simpler, transparent model (e.g., a shallow decision tree) trained to mimic its predictions. A surrogate may not faithfully capture the behavior of the original model, especially when that model is highly complex, so its fidelity should always be measured. (No single reference implementation; any interpretable model class can serve.)
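Since any interpretable model can serve as a surrogate, here is a minimal global-surrogate sketch (all names and data are illustrative assumptions): a shallow decision tree is trained on the black box's predictions, and its agreement with the black box, its fidelity, is measured.

```python
# Minimal global surrogate: fit an interpretable tree to the *predictions*
# of a black-box model, then read human-readable rules off the tree.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Train the surrogate on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(data.data),
                          surrogate.predict(data.data))
print(f"fidelity = {fidelity:.2f}")
print(export_text(surrogate, feature_names=data.feature_names))
```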

  • In this repo, we explore and discuss basic concepts of machine learning & deep learning models, along with strategies to interpret those models and their features, since AI/ML models are often black boxes.
