tf-explain

Interpretability Methods for tf.keras models with Tensorflow 2.x

https://github.com/sicara/tf-explain

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, plos.org, zenodo.org
  • Committers with academic emails
    1 of 18 committers (5.6%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.0%) to scientific vocabulary

Keywords

deep-learning interpretability keras machine-learning tensorflow tf2 visualization
Last synced: 4 months ago

Repository

Interpretability Methods for tf.keras models with Tensorflow 2.x

Basic Info
Statistics
  • Stars: 1,029
  • Watchers: 47
  • Forks: 110
  • Open Issues: 47
  • Releases: 7
Topics
deep-learning interpretability keras machine-learning tensorflow tf2 visualization
Created over 6 years ago · Last pushed over 1 year ago
Metadata Files
Readme Contributing License Citation

README.md

tf-explain


tf-explain implements interpretability methods as TensorFlow 2.x callbacks to make neural networks easier to understand. See the introductory blog post, Introducing tf-explain, Interpretability for TensorFlow 2.0.

Documentation: https://tf-explain.readthedocs.io

Installation

tf-explain is available on PyPI. To install it:

```bash
virtualenv venv -p python3.8
pip install tf-explain
```

tf-explain is compatible with TensorFlow 2.x. TensorFlow is not declared as a dependency so that you can choose between the full and CPU-only distributions. In addition to the previous install, run:

```bash
# For CPU or GPU
pip install tensorflow==2.6.0
```

OpenCV is also a dependency. To install it, run:

```bash
pip install opencv-python
```

Quickstart

tf-explain offers two ways to apply interpretability methods. The full list of methods is in the Available Methods section.

On trained model

The best option is probably to load a trained model and apply the methods to it.

```python
import tensorflow as tf

from tf_explain.core.grad_cam import GradCAM

# Load pretrained model or your own
model = tf.keras.applications.vgg16.VGG16(weights="imagenet", include_top=True)

# Load a sample image (or multiple ones)
img = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))
img = tf.keras.preprocessing.image.img_to_array(img)
data = ([img], None)

# Start explainer
explainer = GradCAM()
grid = explainer.explain(data, model, class_index=281)  # 281 is the tabby cat index in ImageNet

explainer.save(grid, ".", "grad_cam.png")
```

During training

If you want to follow your model during training, you can also use tf-explain as a Keras callback and see the results directly in TensorBoard.

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

Available Methods

  1. Activations Visualization
  2. Vanilla Gradients
  3. Gradients*Inputs
  4. Occlusion Sensitivity
  5. Grad CAM (Class Activation Maps)
  6. SmoothGrad
  7. Integrated Gradients

Activations Visualization

Visualize how a given input comes out of a specific activation layer

```python
from tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback

model = [...]

callbacks = [
    ActivationsVisualizationCallback(
        validation_data=(x_val, y_val),
        layers_name=["activation_1"],
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

Vanilla Gradients

Visualize gradient importance on the input image

```python
from tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback

model = [...]

callbacks = [
    VanillaGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

Gradients*Inputs

Variant of Vanilla Gradients that weights the gradients by the input values

```python
from tf_explain.callbacks.gradients_inputs import GradientsInputsCallback

model = [...]

callbacks = [
    GradientsInputsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
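Conceptually, Gradients*Inputs is just the elementwise product of the vanilla gradient and the input. A minimal NumPy sketch, using an analytic gradient of f(x) = sum(x²) as a stand-in for backpropagation (illustrative only, not tf-explain code):

```python
import numpy as np

def grad(x):
    # Analytic gradient of f(x) = sum(x ** 2), standing in for backpropagation.
    return 2.0 * x

x = np.array([1.0, -2.0, 3.0])

# Gradients*Inputs: weight each gradient component by its input value.
attribution = grad(x) * x  # -> [2., 8., 18.]
```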

Occlusion Sensitivity

Visualize how parts of the image affect the neural network's confidence by occluding them iteratively

```python
from tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback

model = [...]

callbacks = [
    OcclusionSensitivityCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        patch_size=4,
        output_dir=output_dir,
    ),
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

Occlusion Sensitivity for Tabby class (stripes differentiate tabby cat from other ImageNet cat classes)
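The occlusion loop itself is easy to sketch outside the library. Below, a toy scoring function (an illustrative assumption, not the tf-explain API) stands in for the model's class confidence; sliding a zero patch over the input reveals which region the score depends on:

```python
import numpy as np

# Toy "class confidence": a weighted sum that only looks at one 2x2 region.
weights = np.zeros((8, 8))
weights[2:4, 2:4] = 1.0

def score(image):
    return float(np.sum(weights * image))

image = np.ones((8, 8))
baseline = score(image)  # 4.0

# Slide a patch of zeros over the image and record the confidence drop.
patch_size = 2
sensitivity = np.zeros((8 // patch_size, 8 // patch_size))
for i in range(0, 8, patch_size):
    for j in range(0, 8, patch_size):
        occluded = image.copy()
        occluded[i:i + patch_size, j:j + patch_size] = 0.0
        sensitivity[i // patch_size, j // patch_size] = baseline - score(occluded)

# Only the patch covering the watched region lowers the score.
```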

Grad CAM

Visualize how parts of the image affect the neural network's output by looking into the activation maps

From Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization

```python
from tf_explain.callbacks.grad_cam import GradCAMCallback

model = [...]

callbacks = [
    GradCAMCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```

SmoothGrad

Visualize stabilized gradients on the inputs towards the decision

From SmoothGrad: removing noise by adding noise

```python
from tf_explain.callbacks.smoothgrad import SmoothGradCallback

model = [...]

callbacks = [
    SmoothGradCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        num_samples=20,
        noise=1.,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
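SmoothGrad's averaging step can be sketched in plain NumPy, with an analytic gradient of f(x) = sum(x²) standing in for the network (illustrative only, not tf-explain code):

```python
import numpy as np

rng = np.random.default_rng(0)

def grad(x):
    # Analytic gradient of f(x) = sum(x ** 2), standing in for backpropagation.
    return 2.0 * x

x = np.array([1.0, -2.0, 3.0])
num_samples, noise = 500, 1.0

# Average the gradient over noisy copies of the input.
noisy = x + rng.normal(0.0, noise, size=(num_samples,) + x.shape)
smooth_grad = grad(noisy).mean(axis=0)

# For this quadratic the noise averages out: smooth_grad is close to grad(x) = [2., -4., 6.]
```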

Integrated Gradients

Visualize the average of the gradients accumulated along the path from a baseline to the input, towards the decision

From Axiomatic Attribution for Deep Networks

```python
from tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback

model = [...]

callbacks = [
    IntegratedGradientsCallback(
        validation_data=(x_val, y_val),
        class_index=0,
        n_steps=20,
        output_dir=output_dir,
    )
]

model.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)
```
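The computation behind Integrated Gradients — the input-minus-baseline difference times the gradient averaged along the interpolation path — can be sketched in NumPy, again with an analytic gradient of f(x) = sum(x²) as a stand-in for the model (not tf-explain code). For this f the completeness axiom holds exactly: the attributions sum to f(input) − f(baseline).

```python
import numpy as np

def grad(x):
    # Analytic gradient of f(x) = sum(x ** 2), standing in for backpropagation.
    return 2.0 * x

x = np.array([3.0, -1.0])
baseline = np.zeros_like(x)
n_steps = 50

# Interpolate from baseline to input, average the gradients, then scale.
alphas = np.linspace(0.0, 1.0, n_steps).reshape(-1, 1)
path = baseline + alphas * (x - baseline)
avg_grad = grad(path).mean(axis=0)
integrated_grads = (x - baseline) * avg_grad  # -> [9., 1.]

# Completeness: attributions sum to f(x) - f(baseline) = 10.
```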

Roadmap

Contributing

To contribute to the project, please read the dedicated section.

Citation

A citation file is available for citing this work. Click the "Cite this repository" button on the right-side panel of GitHub to get a BibTeX-ready citation.

Owner

  • Name: Sicara
  • Login: sicara
  • Kind: organization
  • Email: contact@sicara.com
  • Location: Paris, France

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: tf-explain
abstract: Interpretability Methods for tf.keras models with TensorFlow 2.x
doi: 10.5281/zenodo.5711704
version: 0.3.1
date-released: 2021-02-04
message: "If you use tf-explain in your research, please cite it using these metadata."
type: software
repository-code: "https://github.com/sicara/tf-explain"
authors:
  - given-names: Raphael
    family-names: Meudec
    email: raphael.meudec@inria.fr
    affiliation: INRIA Parietal
    orcid: 'https://orcid.org/0000-0001-9970-5745'

GitHub Events

Total
  • Issues event: 3
  • Watch event: 16
  • Issue comment event: 2
Last Year
  • Issues event: 3
  • Watch event: 16
  • Issue comment event: 2

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 171
  • Total Committers: 18
  • Avg Commits per committer: 9.5
  • Development Distribution Score (DDS): 0.14
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Raphael Meudec r****m@s****m 147
Raphael Meudec r****c@i****r 6
ywolff y****w@s****m 2
Nicolas Jean n****3@g****m 2
laurent montier l****r@g****m 1
jpsimen 6****n 1
ghazalee70 g****i@h****m 1
boussoffara b****a@m****m 1
andife f****r@a****e 1
Zach 2****l 1
Toubi a****t@s****m 1
Tauranis T****s 1
Manuel Romero m****8@g****m 1
Luke Wood L****d 1
Guillermo Sebastián Donatti 4****i 1
Chandra S S Vamsi u****i@g****m 1
Alex Kubiesa a****a@o****m 1
twsl 4****I 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 74
  • Total pull requests: 31
  • Average time to close issues: 4 months
  • Average time to close pull requests: about 1 month
  • Total issue authors: 58
  • Total pull request authors: 14
  • Average comments per issue: 2.76
  • Average comments per pull request: 0.55
  • Merged pull requests: 24
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 3
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 2
  • Pull request authors: 0
  • Average comments per issue: 0.33
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • RaphaelMeudec (10)
  • matheushent (3)
  • palatos (2)
  • Meywether (2)
  • tombinic (2)
  • r0cketr1kky (2)
  • rao208 (2)
  • DLyzhou (1)
  • vscv (1)
  • TedGraham (1)
  • Hassanfarooq92 (1)
  • mohamedabdallah1996 (1)
  • transcranial (1)
  • Cospel (1)
  • mubeenmeo344 (1)
Pull Request Authors
  • RaphaelMeudec (17)
  • J-Olejnik (2)
  • boussoffara (2)
  • sdonatti (1)
  • AlexKubiesa (1)
  • jpsimen (1)
  • Tauranis (1)
  • craymichael (1)
  • ghazalee70 (1)
  • VamsiUCSS (1)
  • andife (1)
  • TeodorChiaburu (1)
  • hnurxn (1)
  • LukeWood (1)
Top Labels
Issue Labels
improvement (9) bug (6) question (3) v1.0.0 (3) tensorflow limited (2) good first issue (2) enhancement (1) waiting explanations (1)
Pull Request Labels

Packages

  • Total packages: 2
  • Total downloads:
    • pypi 8,572 last-month
  • Total docker downloads: 102
  • Total dependent packages: 5
    (may contain duplicates)
  • Total dependent repositories: 43
    (may contain duplicates)
  • Total versions: 13
  • Total maintainers: 3
pypi.org: tf-explain

Interpretability Callbacks for Tensorflow 2.0

  • Versions: 7
  • Dependent Packages: 5
  • Dependent Repositories: 43
  • Downloads: 8,572 Last month
  • Docker Downloads: 102
Rankings
Dependent packages count: 1.6%
Stargazers count: 2.0%
Dependent repos count: 2.2%
Average: 3.2%
Docker downloads count: 3.7%
Forks count: 4.4%
Downloads: 5.1%
Last synced: 4 months ago
proxy.golang.org: github.com/sicara/tf-explain
  • Versions: 6
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.7%
Dependent repos count: 5.8%
Last synced: 4 months ago

Dependencies

docs/requirements.txt pypi
  • sphinx-rtd-theme ==0.4.3
.github/workflows/ci.yml actions
  • actions/cache v1 composite
  • actions/checkout v2 composite
  • actions/setup-python v1 composite
setup.py pypi