blazecolby/PyTorch-LIME

Introduction

This repository walks through an example of LIME (Local Interpretable Model-Agnostic Explanations). Original LIME paper: https://arxiv.org/abs/1602.04938

LIME provides a means to explain the individual predictions of any black-box classifier or regressor.

A model may be too complex to interpret globally, but its behavior around a specific instance can often be approximated by a simple, interpretable model.

Desirable characteristics of an explanation method:

  • Interpretable
  • Local Fidelity
  • Model-Agnostic
  • Global Perspective
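
For concreteness, here is a minimal sketch of the core LIME procedure for tabular data: perturb the instance, query the black-box model, weight the perturbed samples by their proximity to the original instance, and fit a weighted linear surrogate whose coefficients serve as local feature attributions. The names `model` (a trained PyTorch classifier returning class logits) and `x` (a single feature vector) are illustrative placeholders, not code from this repository.

```python
import numpy as np
import torch
from sklearn.linear_model import Ridge

def explain_instance(model, x, num_samples=1000, kernel_width=0.75):
    """Fit a locally weighted linear surrogate around the instance `x`."""
    x = np.asarray(x, dtype=np.float32)

    # 1. Perturb the instance by sampling around it.
    noise = np.random.normal(scale=0.1, size=(num_samples, x.shape[0]))
    perturbed = (x + noise).astype(np.float32)

    # 2. Query the black-box model for predictions on the perturbed samples.
    with torch.no_grad():
        pred_class = model(torch.from_numpy(x).unsqueeze(0)).argmax(dim=1).item()
        probs = torch.softmax(model(torch.from_numpy(perturbed)), dim=1).numpy()
    target = probs[:, pred_class]  # probability of the originally predicted class

    # 3. Weight each perturbed sample by its proximity to the original instance.
    distances = np.linalg.norm(perturbed - x, axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # 4. Fit an interpretable (linear) surrogate on the weighted samples.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(perturbed, target, sample_weight=weights)

    # The coefficients are the local feature attributions.
    return surrogate.coef_
```

This is a sketch of the idea rather than the exact code in this repository; the paper also discusses sparse surrogates (e.g. K-LASSO) and other proximity kernels.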

Resources

Utilizes:

