A toolbox to iNNvestigate neural networks' predictions!


iNNvestigate neural networks!

Figure: different explanation methods applied to ImageNet.

Note: The library is in an alpha state and you might encounter issues using it. We are working on the next release. Please let us know if you find any bugs.

Introduction

In recent years neural networks have furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. In the attempt to understand neural networks better, several methods have been proposed, e.g., Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, and PatternNet & PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses that by providing a common interface and out-of-the-box implementations for many analysis methods. Our goal is to make analyzing neural networks' predictions easy!

If you use this code please star the repository and cite the following paper:

TODO: Add link to SW paper.

Installation

iNNvestigate can be installed with the following commands. The library is based on Keras and therefore requires a supported Keras backend. (Currently only Python 3.5, TensorFlow 1.8 and CUDA 9.x are supported.)

pip install git+https://github.com/albermax/innvestigate
# Installing Keras backend
pip install [tensorflow | theano | cntk]

To use the example scripts and notebooks, one additionally needs to install the matplotlib package:

pip install matplotlib

The library's tests can be executed via:

git clone https://github.com/albermax/innvestigate.git
cd innvestigate
python setup.py test

The library was developed and tested on a Linux platform with Python 3.5, TensorFlow 1.8 and CUDA 9.x.

Usage and Examples

The iNNvestigate library contains implementations for the analysis methods named above, among them Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, PatternNet, and PatternAttribution.

All the available methods have in common that they try to analyze the output of a specific neuron with respect to the input of the neural network. Typically one analyzes the neuron with the largest activation in the output layer. For example, given a Keras model, one can create a 'gradient' analyzer:

import innvestigate

model = create_keras_model()
analyzer = innvestigate.create_analyzer("gradient", model)

and analyze the influence of the neural network's input on the output neuron by:

analysis = analyzer.analyze(inputs)

To analyze a neuron with the index i, one can use the following scheme:

analyzer = innvestigate.create_analyzer("gradient",
                                        model,
                                        neuron_selection_mode="index")
analysis = analyzer.analyze(inputs, i)
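
To make concrete what such a gradient analysis computes, here is a minimal, self-contained sketch using NumPy only (it does not use the innvestigate API, and the weights and function names are invented for illustration): for a tiny one-layer ReLU network, the analysis of output neuron j is the derivative of that neuron's activation with respect to each input.

```python
import numpy as np

# Illustrative sketch (NumPy only, not the innvestigate API) of a
# "gradient" analysis for a tiny one-layer ReLU network
# y = relu(W @ x): the relevance of input i for output neuron j is
# the partial derivative dy_j / dx_i.

W = np.array([[1.0, -2.0, 0.5],
              [0.0,  3.0, -1.0]])

def gradient_analysis(x, neuron=None):
    """Gradient of one output neuron w.r.t. the input x.

    With neuron=None the neuron with the largest activation is
    analyzed (the default behavior described above); passing an
    integer mimics neuron_selection_mode="index".
    """
    z = W @ x
    y = np.maximum(z, 0.0)                  # ReLU activations
    j = int(np.argmax(y)) if neuron is None else neuron
    relu_grad = 1.0 if z[j] > 0 else 0.0    # ReLU derivative at z_j
    return relu_grad * W[j] + 0.0           # chain rule: dy_j/dx = relu'(z_j) * W_j

x = np.array([1.0, 1.0, 1.0])
print(gradient_analysis(x))            # neuron 1 has the largest activation: [ 0.  3. -1.]
print(gradient_analysis(x, neuron=0))  # neuron 0 is inactive, so its gradient is zero
```

The returned array has the same shape as the input, assigning each input dimension its influence on the selected output neuron.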

Trainable methods

Some methods like PatternNet and PatternAttribution are data-specific and need to be trained. Given a data set with train and test data, this can be done in the following way:

import innvestigate

analyzer = innvestigate.create_analyzer("pattern.net", model)
analyzer.fit(X_train)
analysis = analyzer.analyze(X_test)
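
To make the fit step less abstract: for a single linear neuron y = wᵀx, the PatternNet line of work estimates a signal "pattern" a from data as the covariance between input and output divided by the output variance. The following NumPy sketch illustrates that estimation step only; it is not the library's implementation, and the data and helper name are invented for illustration.

```python
import numpy as np

# Hedged sketch (NumPy, not the innvestigate implementation) of the
# statistic behind pattern-based analyzers for one linear neuron
# y = w^T x: the signal pattern is estimated as a = cov(x, y) / var(y).

rng = np.random.default_rng(0)
w = np.array([2.0, -1.0])
X_train = rng.normal(size=(10_000, 2))   # toy training data
y = X_train @ w                          # the neuron's output

def fit_pattern(X, y):
    """Estimate the pattern a = cov(x, y) / var(y) from data."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    cov_xy = (Xc * yc[:, None]).mean(axis=0)
    return cov_xy / yc.var()

a = fit_pattern(X_train, y)
# For x ~ N(0, I): cov(x, y) = w and var(y) = ||w||^2 = 5,
# so a is approximately w / 5 = [0.4, -0.2].
print(a)
```

This is why such analyzers need a fit() call before analyze(): the pattern depends on the statistics of the training data, not only on the model's weights.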

Examples

In the examples directory one can find different examples as Python scripts and as Jupyter notebooks:

  • Imagenet: shows how to use the different methods with VGG16 on ImageNet and how to reproduce the analysis grid above. This example uses pre-trained patterns for PatternNet.
  • MNIST: shows how to train and use analyzers on MNIST.

Contribution

If you would like to add your analysis method please get in touch with us!

Releases

Can be found here.
