tf-keras-vis

Neural network visualization toolkit for tf.keras

https://github.com/keisen/tf-keras-vis

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

activation-maximization deep-learning explainability explainable-ai explainable-ml grad-cam gradcam gradcam-plus-plus keras keras-vis keras-visualization python saliency saliency-maps score-cam tensorflow visualization xai xai-library
Last synced: 4 months ago

Repository

Neural network visualization toolkit for tf.keras

Basic Info
Statistics
  • Stars: 329
  • Watchers: 7
  • Forks: 45
  • Open Issues: 34
  • Releases: 30
Topics
activation-maximization deep-learning explainability explainable-ai explainable-ml grad-cam gradcam gradcam-plus-plus keras keras-vis keras-visualization python saliency saliency-maps score-cam tensorflow visualization xai xai-library
Created about 6 years ago · Last pushed 10 months ago
Metadata Files
Readme License Citation

README.md

tf-keras-vis



Web documents

https://keisen.github.io/tf-keras-vis-docs/

Overview

tf-keras-vis is a visualization toolkit for debugging keras.Model in Tensorflow 2.0+. Currently supported visualization methods include ActivationMaximization, saliency maps (including SmoothGrad), and class activation maps such as GradCAM, GradCAM++, Score-CAM, and Faster Score-CAM.

tf-keras-vis is designed to be light-weight, flexible, and easy to use. All visualizations share the following features:

  • Support for N-dim image inputs; not only 2D pictures but also, for example, 3D images.
  • Support for batch-wise processing, so multiple input images can be processed efficiently (see the sketch after this list).
  • Support for models with multiple inputs, multiple outputs, or both.
  • Support for mixed-precision models.

And, in ActivationMaximization:

  • Support for optimizers that are built into keras.
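
As a rough illustration of the batch-wise processing mentioned above, multiple images can be stacked along the batch axis and visualized in a single call. This is a minimal sketch, assuming the Gradcam class mirrors the GradcamPlusPlus API shown under Usage; the input shapes and ImageNet class indices are arbitrary placeholders.

```python
import numpy as np
from keras.applications import VGG16
from keras.applications.vgg16 import preprocess_input
from tf_keras_vis.gradcam import Gradcam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Stack several images along the batch axis (here: three random RGB images).
images = np.random.uniform(0, 255, (3, 224, 224, 3)).astype(np.float32)
X = preprocess_input(images)

# One target class index per image in the batch (arbitrary ImageNet classes).
score = CategoricalScore([281, 235, 8])

# A single call processes the whole batch at once.
gradcam = Gradcam(VGG16(), model_modifier=ReplaceToLinear(), clone=True)
cams = gradcam(score, X)
print(cams.shape)  # (3, 224, 224) -- one heatmap per input image
```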

Visualizations

Dense Unit

Convolutional Filter

Class Activation Map

The images above are generated by GradCAM++.

Saliency Map

The images above are generated by SmoothGrad.

Usage

ActivationMaximization (Visualizing Convolutional Filter)

```python
import tensorflow as tf
from keras.applications import VGG16
from matplotlib import pyplot as plt
from tf_keras_vis.activation_maximization import ActivationMaximization
from tf_keras_vis.activation_maximization.callbacks import Progress
from tf_keras_vis.activation_maximization.input_modifiers import Jitter, Rotate2D
from tf_keras_vis.activation_maximization.regularizers import TotalVariation2D, Norm
from tf_keras_vis.utils.model_modifiers import ExtractIntermediateLayer, ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create the visualization instance.
# All visualization classes accept a model and model-modifiers in the constructor,
# which, for example, replace the activation of the last layer with a linear function.
activation_maximization = \
    ActivationMaximization(VGG16(),
                           model_modifier=[ExtractIntermediateLayer('block5_conv3'),
                                           ReplaceToLinear()],
                           clone=False)

# You can use a Score class to specify the visualization target you want,
# and add regularizers or input-modifiers as needed.
activations = \
    activation_maximization(CategoricalScore(FILTER_INDEX),
                            steps=200,
                            input_modifiers=[Jitter(jitter=16), Rotate2D(degree=1)],
                            regularizers=[TotalVariation2D(weight=1.0),
                                          Norm(weight=0.3, p=1)],
                            optimizer=tf.keras.optimizers.RMSprop(1.0, 0.999),
                            callbacks=[Progress()])

# Since v0.6.0, calling astype() is NOT necessary.
# activations = activations[0].astype(np.uint8)

# Render
plt.imshow(activations[0])
```

Gradcam++

```python
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from tf_keras_vis.gradcam_plus_plus import GradcamPlusPlus
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create GradCAM++ object
gradcam = GradcamPlusPlus(YOUR_MODEL_INSTANCE,
                          model_modifier=ReplaceToLinear(),
                          clone=True)

# Generate cam with GradCAM++
cam = gradcam(CategoricalScore(CATEGORICAL_INDEX), SEED_INPUT)

# Since v0.6.0, calling normalize() is NOT necessary.
# cam = normalize(cam)

# Render
plt.imshow(SEED_INPUT_IMAGE)
heatmap = np.uint8(cm.jet(cam[0])[..., :3] * 255)
plt.imshow(heatmap, cmap='jet', alpha=0.5)  # overlay
```
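
The Saliency Map images in the Visualizations section are generated with SmoothGrad. A minimal sketch of that workflow follows, reusing the same placeholder names as above; the `smooth_samples` and `smooth_noise` arguments are taken from the tf-keras-vis Saliency documentation, so double-check them against the version you install.

```python
import numpy as np
from matplotlib import pyplot as plt
from tf_keras_vis.saliency import Saliency
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create Saliency object
saliency = Saliency(YOUR_MODEL_INSTANCE, model_modifier=ReplaceToLinear(), clone=True)

# Generate a saliency map; smooth_samples/smooth_noise enable SmoothGrad
saliency_map = saliency(CategoricalScore(CATEGORICAL_INDEX),
                        SEED_INPUT,
                        smooth_samples=20,   # number of noisy samples to average
                        smooth_noise=0.20)   # noise spread

plt.imshow(saliency_map[0], cmap='jet')
```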

Please see the guides below for more details:

Getting Started Guides

[NOTES] If you have ever used keras-vis, tf-keras-vis may feel familiar. tf-keras-vis was in fact derived from keras-vis, and the visualization methods the two provide are almost the same. Please note, however, that the tf-keras-vis APIs are NOT compatible with keras-vis.

Requirements

  • Python 3.7+
  • Tensorflow 2.0+

Installation

  • PyPI

```bash
$ pip install tf-keras-vis tensorflow
```

  • Source (for development)

```bash
$ git clone https://github.com/keisen/tf-keras-vis.git
$ cd tf-keras-vis
$ pip install -e .[develop] tensorflow
```
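
To confirm that the install succeeded, a quick import check such as the one below should suffice (assuming the package exposes `__version__`, which recent releases do):

```bash
$ python -c "import tf_keras_vis; print(tf_keras_vis.__version__)"
```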

Use Cases

  • chitra
    • A Deep Learning Computer Vision library for easy data loading, model building and model interpretation with GradCAM/GradCAM++.

Known Issues

  • With InceptionV3, ActivationMaximization doesn't work well; it may generate meaninglessly blurred images.
  • With cascading models, Gradcam and Gradcam++ don't work well and may raise errors, so we recommend using Faster Score-CAM in that case (see the sketch after this list).
  • channels-first models and data are unsupported.
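
For reference, Faster Score-CAM is Score-CAM run with the `max_N` option enabled. The sketch below is illustrative only; it assumes the Scorecam class and reuses the placeholder names from the Usage examples.

```python
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import cm
from tf_keras_vis.scorecam import Scorecam
from tf_keras_vis.utils.model_modifiers import ReplaceToLinear
from tf_keras_vis.utils.scores import CategoricalScore

# Create ScoreCAM object
scorecam = Scorecam(YOUR_MODEL_INSTANCE, model_modifier=ReplaceToLinear())

# max_N limits the number of activation-map channels used, i.e. Faster Score-CAM
cam = scorecam(CategoricalScore(CATEGORICAL_INDEX), SEED_INPUT, max_N=10)

# Overlay the heatmap on the input image
plt.imshow(SEED_INPUT_IMAGE)
heatmap = np.uint8(cm.jet(cam[0])[..., :3] * 255)
plt.imshow(heatmap, cmap='jet', alpha=0.5)  # overlay
```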

ToDo

  • Guides
    • Visualizing multiple attention or activation images at once utilizing batch-system of model
    • Define various score functions
    • Visualizing attentions with multiple inputs models
    • Visualizing attentions with multiple outputs models
    • Advanced score functions
    • Tuning Activation Maximization
    • Visualizing attentions for N-dim image inputs
  • We plan to add methods such as:
    • Deep Dream
    • Style transfer

Owner

  • Name: Yasuhiro Kubota
  • Login: keisen
  • Kind: user
  • Location: Tokyo, Japan

Software engineering, Program language, WebRTC, Machine Learning

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: Kubota
    given-names: Yasuhiro
    email: "k.keisen@gmail.com"
title: "tf-keras-vis"
repository: "https://github.com/keisen/tf-keras-vis"
url: "https://keisen.github.io/tf-keras-vis-docs/"
type: software
version: 0.8.8
date-released: "2024-04-17"
license-url: "https://github.com/keisen/tf-keras-vis/blob/master/LICENSE"
references:
  - authors:
      - family-names: Kotikalapudi
        given-names: Raghavendra
    title: "keras-vis"
    repository: "https://github.com/raghakot/keras-vis"
    url: "https://raghakot.github.io/keras-vis/"
    type: software
    version: 0.4.1

GitHub Events

Total
  • Watch event: 14
  • Push event: 4
  • Fork event: 2
Last Year
  • Watch event: 14
  • Push event: 4
  • Fork event: 2

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 375
  • Total Committers: 3
  • Avg Commits per committer: 125.0
  • Development Distribution Score (DDS): 0.005
Past Year
  • Commits: 6
  • Committers: 1
  • Avg Commits per committer: 6.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
keisen k****n@g****m 373
dohyoung rim d****m@h****t 1
NuM314 a****i@g****m 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 74
  • Total pull requests: 28
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 16 days
  • Total issue authors: 55
  • Total pull request authors: 5
  • Average comments per issue: 2.96
  • Average comments per pull request: 1.5
  • Merged pull requests: 25
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • keisen (9)
  • bersbersbers (6)
  • omerbrandis (3)
  • miguelCalado (2)
  • eyaler (2)
  • ianbgroves (2)
  • xBorja042 (2)
  • fabianostermann (1)
  • phitoduck (1)
  • SergioG-M (1)
  • IsmailAlaouiAbdellaoui (1)
  • Mah-SP (1)
  • albths (1)
  • marieff587 (1)
  • roma-glushko (1)
Pull Request Authors
  • keisen (24)
  • Lakshay-13 (1)
  • dhrim (1)
  • NuM314 (1)
  • srwi (1)
Top Labels
Issue Labels
bug (9) enhancement (8) help wanted (4) question (2) documentation (1)
Pull Request Labels
bug (11) enhancement (8) documentation (3) hotfix (1)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 7,935 last-month
  • Total docker downloads: 393
  • Total dependent packages: 0
  • Total dependent repositories: 18
  • Total versions: 30
  • Total maintainers: 1
pypi.org: tf-keras-vis

Neural network visualization toolkit for tf.keras

  • Versions: 30
  • Dependent Packages: 0
  • Dependent Repositories: 18
  • Downloads: 7,935 Last month
  • Docker Downloads: 393
Rankings
Docker downloads count: 2.4%
Dependent repos count: 3.4%
Stargazers count: 3.7%
Downloads: 4.7%
Average: 5.1%
Forks count: 6.0%
Dependent packages count: 10.1%
Maintainers (1)
Last synced: 5 months ago

Dependencies

setup.py pypi
  • deprecated *
  • imageio *
  • importlib-metadata *
  • packaging *
  • pillow *
  • scipy *
.github/workflows/python-package-cron.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/python-package-up-to-TF2.5.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/python-package.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/python-publish-conda.yml actions
  • actions/checkout v3 composite
  • fcakyon/conda-publish-action v1.3 composite
.github/workflows/python-publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/sphinx-publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite