innvestigate
A toolbox to iNNvestigate neural networks' predictions!
Science Score: 59.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, sciencedirect.com, plos.org
- ✓ Committers with academic emails: 7 of 20 committers (35.0%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.5%) to scientific vocabulary
Keywords from Contributors
Repository
A toolbox to iNNvestigate neural networks' predictions!
Basic Info
Statistics
- Stars: 1,302
- Watchers: 33
- Forks: 234
- Open Issues: 61
- Releases: 6
Metadata Files
README.md
iNNvestigate neural networks!
[Documentation](https://innvestigate.readthedocs.io/en/latest/) · [CI](https://github.com/albermax/innvestigate/actions/workflows/ci.yml) · [PyPI](https://pypi.org/project/innvestigate/) · [License](https://github.com/albermax/innvestigate/blob/master/LICENSE) · [Source](https://github.com/albermax/innvestigate)
Introduction
In recent years, neural networks have furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. Several methods have been proposed to understand neural networks better, e.g., Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, PatternNet, and PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses that by providing a common interface and out-of-the-box implementations for many analysis methods. Our goal is to make analyzing neural networks' predictions easy!
If you use this code please star the repository and cite the following paper:
Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K. R., Dähne, S., & Kindermans, P. J. (2019). iNNvestigate neural networks! Journal of Machine Learning Research, 20.
@article{JMLR:v20:18-540,
author = {Maximilian Alber and Sebastian Lapuschkin and Philipp Seegerer and Miriam H{{\"a}}gele and Kristof T. Sch{{\"u}}tt and Gr{{\'e}}goire Montavon and Wojciech Samek and Klaus-Robert M{{\"u}}ller and Sven D{{\"a}}hne and Pieter-Jan Kindermans},
title = {iNNvestigate Neural Networks!},
journal = {Journal of Machine Learning Research},
year = {2019},
volume = {20},
number = {93},
pages = {1-8},
url = {http://jmlr.org/papers/v20/18-540.html}
}
Installation
iNNvestigate is based on Keras and TensorFlow 2 and can be installed with the following command:

```bash
pip install innvestigate
```
Please note that iNNvestigate currently requires disabling TF2's eager execution.
To use the example scripts and notebooks one additionally needs to install the package matplotlib:
```bash
pip install matplotlib
```
The library's tests can be executed via pytest. The easiest way to do reproducible development on iNNvestigate is to install all dev dependencies via Poetry:
```bash
git clone https://github.com/albermax/innvestigate.git
cd innvestigate
poetry install
poetry run pytest
```
Usage and Examples
The iNNvestigate library contains implementations for the following methods:
- function:
- gradient: The gradient of the output neuron with respect to the input.
- smoothgrad: SmoothGrad averages the gradient over a number of noisy copies of the input.
- signal:
- deconvnet: DeConvNet applies a ReLU in the gradient computation instead of the gradient of a ReLU.
- guided: Guided BackProp applies a ReLU in the gradient computation additionally to the gradient of a ReLU.
- pattern.net: PatternNet estimates the input signal of the output neuron. (Note: not available in iNNvestigate 2.0)
- attribution:
- input_t_gradient: Input * Gradient
- deep_taylor[.bounded]: DeepTaylor computes for each neuron a root point that is close to the input but whose output value is 0, and uses this difference to estimate each neuron's attribution recursively.
- lrp.*: LRP recursively attributes to each neuron's inputs a relevance proportional to its contribution to the neuron's output.
- integrated_gradients: IntegratedGradients integrates the gradient along a path from the input to a reference.
- miscellaneous:
- input: Returns the input.
- random: Returns random Gaussian noise.
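As a rough illustration of what some of these methods compute, here is a self-contained NumPy sketch of the gradient, SmoothGrad, and Integrated Gradients ideas on a toy function f(x) = Σ xᵢ². This does not use the iNNvestigate API; the toy function, step count, and noise scale are arbitrary choices made for the example.

```python
import numpy as np

# Toy "model": f(x) = sum(x_i^2), with the analytic gradient 2x.
def f(x):
    return np.sum(x ** 2)

def grad(x):
    return 2.0 * x

def smoothgrad(x, n=50, sigma=0.1, seed=0):
    # Average the gradient over n noisy copies of the input.
    rng = np.random.default_rng(seed)
    return np.mean([grad(x + rng.normal(0.0, sigma, x.shape)) for _ in range(n)], axis=0)

def integrated_gradients(x, baseline=None, steps=100):
    # Average the gradient along the straight path baseline -> x,
    # then scale by the input difference.
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.mean([grad(baseline + a * (x - baseline)) for a in alphas], axis=0)
    return (x - baseline) * path_grads

x = np.array([1.0, -2.0, 3.0])
ig = integrated_gradients(x)
# Completeness property: the attributions sum to f(x) - f(baseline).
print(ig.sum(), f(x))
```

For this linear-gradient toy model, SmoothGrad converges to the plain gradient as the sample count grows; the methods only differ meaningfully on nonlinear networks.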
The intention behind iNNvestigate is to make analysis methods easy to use, not to explain their underlying concepts and assumptions. Please read the corresponding publication(s) when using a certain method, and when publishing please cite the corresponding paper(s) (as well as the iNNvestigate paper). Thank you!
All the available methods have in common that they try to analyze the output of a specific neuron with respect to the input to the neural network. Typically one analyzes the neuron with the largest activation in the output layer. For example, given a Keras model, one can create a 'gradient' analyzer:
```python
import tensorflow as tf
import innvestigate

tf.compat.v1.disable_eager_execution()

model = create_keras_model()
analyzer = innvestigate.create_analyzer("gradient", model)
```
and analyze the influence of the neural network's input on the output neuron by:
```python
analysis = analyzer.analyze(inputs)
```
To analyze a neuron with the index i, one can use the following scheme:
```python
analyzer = innvestigate.create_analyzer("gradient",
                                        model,
                                        neuron_selection_mode="index")
analysis = analyzer.analyze(inputs, i)
```
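Conceptually, the two selection modes differ only in which output neuron's activation is traced back to the input. A hypothetical NumPy sketch (the activation values and index below are made up for illustration):

```python
import numpy as np

# Fake batch of output-layer activations (shape: batch x classes).
outputs = np.array([[0.1, 2.5, 0.3],
                    [1.7, 0.2, 0.9]])

# "max_activation" (the default): analyze the most active output neuron
# of each sample.
max_neurons = outputs.argmax(axis=1)

# "index": analyze a caller-chosen output neuron i for every sample.
i = 2
selected = outputs[:, i]

print(max_neurons, selected)
```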
Let's look at an example (code) with VGG16 and this image:

```python
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow.keras.applications.vgg16 as vgg16

tf.compat.v1.disable_eager_execution()

import innvestigate

# Get model
model, preprocess = vgg16.VGG16(), vgg16.preprocess_input
# Strip softmax layer
model = innvestigate.model_wo_softmax(model)
# Create analyzer
analyzer = innvestigate.create_analyzer("deep_taylor", model)

# Add batch axis and preprocess
x = preprocess(image[None])
# Apply analyzer w.r.t. maximum activated output-neuron
a = analyzer.analyze(x)

# Aggregate along color channels and normalize to [-1, 1]
a = a.sum(axis=np.argmax(np.asarray(a.shape) == 3))
a /= np.max(np.abs(a))
# Plot
plt.imshow(a[0], cmap="seismic", clim=(-1, 1))
```
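The channel aggregation and normalization at the end of the script can be checked in isolation with plain NumPy on a dummy array (the shape below is an arbitrary stand-in for a real analysis map):

```python
import numpy as np

# Fake analysis map with a batch axis and 3 color channels
# (shape: 1 x H x W x 3).
a = np.random.default_rng(0).normal(size=(1, 8, 8, 3))

# Sum over the color-channel axis (the axis of length 3) ...
channel_axis = int(np.argmax(np.asarray(a.shape) == 3))
a = a.sum(axis=channel_axis)

# ... and normalize to [-1, 1] so a diverging colormap like
# "seismic" maps 0 relevance to its neutral center.
a /= np.max(np.abs(a))

print(a.shape)
```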

Tutorials
In the directory examples one can find different examples as Python scripts and as Jupyter notebooks:
- Introduction to iNNvestigate: shows how to use iNNvestigate.
- Comparing methods on MNIST: shows how to train and compare analyzers on MNIST.
- Comparing output neurons on MNIST: shows how to analyze the prediction of different classes on MNIST.
- Comparing methods on ImageNet: shows how to compare analyzers on ImageNet.
- Comparing networks on ImageNet: shows how to compare analyses for different networks on ImageNet.
- Sentiment Analysis.
- Development with iNNvestigate: shows how to develop with iNNvestigate.
- Perturbation Analysis.
To use the ImageNet examples please download the example images first (script).
More documentation
... can be found here:
- [Alber, M., Lapuschkin, S., Seegerer, P., Hägele, M., Schütt, K. T., Montavon, G., Samek, W., Müller, K. R., Dähne, S., & Kindermans, P. J. (2019). iNNvestigate neural networks! Journal of Machine Learning Research, 20(93), 1-8.](https://jmlr.org/papers/v20/18-540.html)
- https://innvestigate.readthedocs.io/en/latest/
Contributing
If you would like to contribute or add your analysis method please open an issue or submit a pull request.
Releases
Acknowledgements
Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).
Owner
- Name: Maximilian Alber
- Login: albermax
- Kind: user
- Location: Berlin
- Company: TU Berlin
- Repositories: 1
- Profile: https://github.com/albermax
GitHub Events
Total
- Issues event: 1
- Watch event: 43
- Push event: 4
- Pull request event: 1
- Fork event: 3
- Create event: 1
Last Year
- Issues event: 1
- Watch event: 43
- Push event: 4
- Pull request event: 1
- Fork event: 3
- Create event: 1
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Max Alber | a****n@g****m | 367 |
| Sebastian Roland Lapuschkin | s****n@h****e | 218 |
| Adrian Hill | a****l@m****g | 216 |
| Philipp Seegerer | p****r@t****e | 52 |
| ¨Maximilian | ¨****n@g****¨ | 24 |
| Miriam Hägele | h****e@t****e | 22 |
| enryh | h****l@c****k | 15 |
| heytitle | p****i@g****m | 12 |
| MiriamHaegele | h****m@g****m | 4 |
| Av Shrikumar | a****r@g****m | 3 |
| nicemanis | n****s@g****m | 3 |
| Kristof T. Schütt | k****t@g****m | 2 |
| Ruben | 5****o | 2 |
| enryh | h****l@s****k | 2 |
| jurgyy | j****s@o****m | 2 |
| Jan Maces | j****s@n****m | 2 |
| Alex Binder | a****r@s****g | 1 |
| Leander Weber | l****r@h****e | 1 |
| Paul K. Gerke | p****t@g****m | 1 |
| Simon M. Hofmann | S****r | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 8 months ago
All Time
- Total issues: 86
- Total pull requests: 32
- Average time to close issues: about 1 year
- Average time to close pull requests: about 1 month
- Total issue authors: 61
- Total pull request authors: 4
- Average comments per issue: 2.44
- Average comments per pull request: 1.44
- Merged pull requests: 16
- Bot issues: 0
- Bot pull requests: 12
Past Year
- Issues: 3
- Pull requests: 1
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 3
- Pull request authors: 1
- Average comments per issue: 0.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- leanderweber (12)
- adrhill (5)
- albermax (3)
- HugoTex98 (3)
- mg97tud (2)
- tszoldra (2)
- SHEscher (2)
- Yung-zi (2)
- nkoenen (2)
- diptiSH (1)
- picciama (1)
- atifkhanncl (1)
- juliowissing-iis (1)
- Lilithrrm (1)
- jeremy-wendt (1)
Pull Request Authors
- adrhill (17)
- dependabot[bot] (12)
- Rubinjo (2)
- avishekchy45 (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi 467 last-month
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 24 (may contain duplicates)
- Total versions: 9
- Total maintainers: 2
pypi.org: innvestigate
A toolbox to innvestigate neural networks' predictions.
- Homepage: https://github.com/albermax/innvestigate
- Documentation: https://innvestigate.readthedocs.io/en/latest/
- License: BSD-2-Clause
- Latest release: 2.1.2 (published over 2 years ago)
Rankings
proxy.golang.org: github.com/albermax/innvestigate
- Documentation: https://pkg.go.dev/github.com/albermax/innvestigate#section-documentation
- License: other
- Latest release: v1.0.3 (published over 7 years ago)
Rankings
Dependencies
- actions/cache v2 composite
- actions/checkout v3 composite
- actions/checkout v2 composite
- actions/setup-python v4 composite
- actions/setup-python v2 composite
- snok/install-poetry v1 composite
- actions/cache v2 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- codecov/codecov-action v1 composite
- snok/install-poetry v1 composite
- MonkeyType ^22.2.0 develop
- Pillow ^9.0.0 develop
- Sphinx ^6.1.1 develop
- black ^22.3 develop
- codecov ^2.1.11 develop
- coverage ^7.0.3 develop
- ftfy ^6.1.1 develop
- ipykernel ^6.19.4 develop
- isort ^5.10.1 develop
- mypy ^0.991 develop
- pandas ^1.3 develop
- pre-commit ^2.19.0 develop
- pylint ^2.12.2 develop
- pytest ^7.2.0 develop
- pytest-cov ^4.0.0 develop
- pyupgrade ^3.3.1 develop
- rope ^1.6.0 develop
- ruff ^0.0.212 develop
- vulture ^2.3 develop
- future ^0.18.2
- matplotlib ^3.5.1
- numpy ^1.22
- python >=3.8,<3.11
- tensorflow >=2.6,<2.12