https://github.com/bmi-labmedinfo/signal_grad_cam

SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.4%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: bmi-labmedinfo
  • License: MIT
  • Language: Python
  • Default Branch: main
  • Size: 10.2 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 12 months ago · Last pushed 10 months ago
Metadata Files
Readme License

README.md



SignalGrad-CAM

SignalGrad-CAM aims at generalising Grad-CAM to one-dimensional applications, while enhancing usability and efficiency.

Explore the docs

Report Bug or Request Feature

Table of Contents
  1. About The Project
  2. Installation
  3. Usage
  4. Publications
  5. Contacts And Useful Links
  6. License

About The Project

Deep learning models have demonstrated remarkable performance across various domains; however, their black-box nature hinders interpretability and trust. As a result, the demand for explanation algorithms has grown, driving advancements in the field of eXplainable AI (XAI). Still, relatively few efforts have been dedicated to developing interpretability methods for signal-based models. We introduce SignalGrad-CAM (SGrad-CAM), a versatile and efficient interpretability tool that extends the principles of Grad-CAM to both 1D- and 2D-convolutional neural networks for signal processing. SGrad-CAM is designed to interpret models for either image or signal processing, supports both PyTorch and TensorFlow/Keras frameworks, and provides diagnostic and visualization tools to enhance model transparency. The package is also designed for batch processing, ensuring efficiency even for large-scale applications, while maintaining a simple and user-friendly structure.
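For intuition, the core Grad-CAM computation that SGrad-CAM generalises to one dimension can be sketched in a few lines of NumPy. This is a toy illustration with synthetic arrays, not the package's implementation:

```python
import numpy as np

def grad_cam_1d(activations, gradients):
    """Core Grad-CAM computation for a 1D convolutional layer.

    activations: (channels, time) feature maps from the target layer.
    gradients:   (channels, time) gradients of the class score w.r.t. them.
    Returns a (time,) importance map.
    """
    # Channel weights: global-average-pool the gradients over time.
    weights = gradients.mean(axis=1)                               # (channels,)
    # Weighted sum of the feature maps across channels, then ReLU.
    cam = np.maximum((weights[:, None] * activations).sum(axis=0), 0.0)
    return cam

# Toy example: 4 channels, 8 time steps.
rng = np.random.default_rng(0)
acts = rng.standard_normal((4, 8))
grads = rng.standard_normal((4, 8))
cam = grad_cam_1d(acts, grads)
print(cam.shape)  # one importance score per time step
```

In real use the activations and gradients come from a forward and backward pass through the network; SGrad-CAM automates that bookkeeping for both PyTorch and TensorFlow/Keras models.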

Keywords: eXplainable AI, explanations, local explanation, fidelity, interpretability, transparency, trustworthy AI, feature importance, saliency maps, CAM, Grad-CAM, black-box, deep learning, CNN, signals, time series

Back To Top

Installation

  1. Make sure you have the latest version of pip installed:
     ```sh
     pip install --upgrade pip
     ```
  2. Install SignalGrad-CAM through pip:
     ```sh
     pip install signal-grad-cam
     ```

Back To Top

Usage

Here's a basic example that illustrates SignalGrad-CAM's common usage. First, train a classifier on your data or select an already trained model, then instantiate `TorchCamBuilder` (if you are working with a PyTorch model) or `TfCamBuilder` (if the model is built in TensorFlow/Keras). Besides the model, `TorchCamBuilder` requires additional information to function effectively. For example, you may provide a list of class labels, a preprocessing function, or an index indicating which dimension corresponds to time. These attributes allow SignalGrad-CAM to be applied to a wide range of models. The constructor displays a list of available Grad-CAM algorithms for explanation, as well as a list of layers that can be used as targets for the algorithm. It also identifies any Sigmoid/Softmax layer, since its presence or absence slightly changes the algorithm's workflow.

```python
import numpy as np
import torch

from signal_grad_cam import TorchCamBuilder

# Load model
model = YourTorchModelConstructor()
model.load_state_dict(torch.load("path_to_your_stored_model.pt"))
model.eval()

# Introduce useful information
def preprocess_fn(signal):
    signal = torch.from_numpy(signal).float()
    # Extra preprocessing: data resizing, reshaping, normalization...
    return signal

class_labels = ["Class 1", "Class 2", "Class 3"]

# Define the CAM builder
cam_builder = TorchCamBuilder(model=model, transform_fc=preprocess_fn, class_names=class_labels, time_axs=1)
```

Now, you can use the `cam_builder` object to generate class activation maps from a list of input data using the `get_cam` method. You can specify multiple algorithm names, target layers, or target classes as needed. The function's attributes allow users to customize the visualization (e.g., setting axis ticks or labels). If a result directory path is provided, the output is stored as a `.png` file; otherwise, it is displayed. In all cases, the function returns a dictionary containing the requested CAMs, along with the model's predictions and importance score ranges.

Finally, several visualization tools are available to gain deeper insights into the model's behavior. The display can be customized by adjusting line width, point extension, aspect ratio, and more:
  • `single_channel_output_display` plots the selected channels using a color scheme that reflects the importance of each input feature.
  • `overlapped_output_display` superimposes CAMs onto the corresponding input in an image-like format, allowing users to capture the overall distribution of input importance.

```python
# Prepare data
data_list = [x for x in your_numpy_data_x[:2]]
data_labels_list = [1, 0]
item_names = ["Item 1", "Item 2"]
target_classes = [0, 1]

# Create CAMs
cam_dict, predicted_probs_dict, score_ranges_dict = cam_builder.get_cam(
    data_list=data_list, data_labels=data_labels_list, target_classes=target_classes,
    explainer_types="Grad-CAM", target_layer="conv1d_layer1", softmax_final=True,
    data_sampling_freq=25, dt=1, axes_names=("Time (s)", "Channels"))

# Visualize single channel importance
selected_channels_indices = [0, 2, 10]
cam_builder.single_channel_output_display(
    data_list=data_list, data_labels=data_labels_list,
    predicted_probs_dict=predicted_probs_dict, cams_dict=cam_dict,
    explainer_types="Grad-CAM", target_classes=target_classes,
    target_layers="target_layer_name", desired_channels=selected_channels_indices,
    grid_instructions=(1, len(selected_channels_indices)), bar_ranges=score_ranges_dict,
    results_dir="path_to_your_result_directory", data_sampling_freq=25, dt=1,
    linewidth=0.5, axes_names=("Time (s)", "Amplitude (mV)"))

# Visualize overall importance
cam_builder.overlapped_output_display(
    data_list=data_list, data_labels=data_labels_list,
    predicted_probs_dict=predicted_probs_dict, cams_dict=cam_dict,
    explainer_types="Grad-CAM", target_classes=target_classes,
    target_layers="target_layer_name", figsize=(20 * len(your_data_x), 20),
    grid_instructions=(len(your_data_x), 1), bar_ranges=score_ranges_dict,
    data_names=item_names, results_dir="path_to_your_result_directory",
    data_sampling_freq=25, dt=1)
```
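Note that a CAM is computed at the resolution of the target layer's feature maps, which is typically coarser than the input signal, so it must be stretched to the input's length before being superimposed on it. A minimal sketch of such upsampling via linear interpolation (illustrative only, not the package's internal routine):

```python
import numpy as np

def upsample_cam(cam, target_len):
    """Linearly interpolate a coarse 1D CAM to the input signal's length."""
    src = np.linspace(0.0, 1.0, num=len(cam))   # knot positions of the coarse map
    dst = np.linspace(0.0, 1.0, num=target_len) # sample positions of the input
    return np.interp(dst, src, cam)

coarse = np.array([0.0, 1.0, 0.5])   # CAM at feature-map resolution (3 steps)
fine = upsample_cam(coarse, 9)       # stretched to 9 input samples
print(fine.round(2))
```

The same idea underlies 2D Grad-CAM overlays, where the coarse map is resized to the image resolution before blending.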

You can also check the example Python scripts in the repository.

See the open issues for a full list of proposed features (and known issues).

Back To Top

If you use the SignalGrad-CAM software for your projects, please cite it as:

```bibtex
@software{Pe_SignalGrad_CAM_2025,
  author = {Pe, Samuele and Buonocore, Tommaso Mario and Nicora, Giovanna and Parimbelli, Enea},
  title = {{SignalGrad-CAM}},
  url = {https://github.com/bmi-labmedinfo/signal_grad_cam},
  version = {0.0.1},
  year = {2025}
}
```

Back To Top

Contacts and Useful Links

Back To Top

License

Distributed under MIT License. See LICENSE for more information.

Back To Top

Owner

  • Name: BMI "Mario Stefanelli" Lab - UNIPV
  • Login: bmi-labmedinfo
  • Kind: organization
  • Email: labmedinfo@unipv.it
  • Location: Italy

Repository for BMI lab code and sw products

GitHub Events

Total
  • Watch event: 1
  • Push event: 16
Last Year
  • Watch event: 1
  • Push event: 16

Dependencies

pyproject.toml pypi
requirements.txt pypi
  • keras *
  • matplotlib *
  • numpy *
  • opencv-python *
  • tensorflow *
  • torch *
setup.py pypi
  • keras *
  • matplotlib *
  • numpy *
  • opencv-python *
  • tensorflow *
  • torch *