relative-absolute-magnitude-propagation

Explain the outputs of your Vision Transformers, Residual Networks and classic CNNs with absLRP and evaluate the explanations over multiple criteria using Global Attribution Evaluation.

https://github.com/davor10105/relative-absolute-magnitude-propagation

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 5 DOI reference(s) in README
  • Academic publication links
    Links to: acm.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (8.8%) to scientific vocabulary
Last synced: 6 months ago

Repository


Basic Info
  • Host: GitHub
  • Owner: davor10105
  • Language: Python
  • Default Branch: main
  • Size: 13.8 MB
Statistics
  • Stars: 4
  • Watchers: 1
  • Forks: 1
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed about 1 year ago
Metadata Files
Readme Citation

README.md

abslrp_logo

Advancing Attribution-Based Explainability through Multi-Component Evaluation and Relative Absolute Magnitude Propagation

🤖 Visualize Vision Transformer attribution maps and more!

This repository contains the source code for the new Absolute Magnitude Layer-Wise Relevance Propagation attribution method and the Global Attribution Evaluation metric described in the paper: https://dl.acm.org/doi/10.1145/3649458.

🔎 Absolute Magnitude Layer-Wise Relevance Propagation

Absolute Magnitude Layer-Wise Relevance Propagation (absLRP) is a novel layer-wise relevance propagation rule. It addresses the problem of incorrect relative attribution between neurons within the same layer whose activations differ in absolute magnitude. We apply this rule to three different architectures, including the very recent Vision Transformer.

Figure 1. absLRP visualizations for the Vision Transformer architecture - PascalVOC

Figure 2. absLRP visualizations for the VGG architecture - ImageNet
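As a rough intuition for the magnitude-based redistribution, the minimal sketch below shares upstream relevance among neurons in proportion to their relative absolute activation magnitude, so that mixed-sign activations of equal magnitude receive equal credit. This is an illustrative toy, not the exact absLRP rule from the paper; the function name is ours.

```python
import torch

def redistribute_by_abs_magnitude(activations, upstream_relevance):
    """Share upstream relevance among neurons in proportion to their
    relative absolute activation magnitude."""
    abs_act = activations.abs()
    weights = abs_act / abs_act.sum(dim=-1, keepdim=True).clamp_min(1e-12)
    return weights * upstream_relevance

# Three neurons with mixed signs; the -6.0 neuron has the largest magnitude.
acts = torch.tensor([[2.0, -6.0, 2.0]])
rel = redistribute_by_abs_magnitude(acts, torch.tensor(1.0))
print(rel)  # tensor([[0.2000, 0.6000, 0.2000]]) — relevance is conserved
```

Note that the -6.0 neuron receives the largest share of relevance despite its negative sign, which is the behavior a pure-magnitude redistribution aims for.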

🔬 Usage

Import the required modules:

```python
import torch
from abslrp_gae.abslrp.rules.models import VGGAbsLRPRule, ResNetAbsLRPRule, VisionTransformerAbsLRPRule
from abslrp_gae.abslrp.relevancy_methods import AbsLRPRelevancyMethod
from abslrp_gae.utils import preprocess_image, visualize_batch
import timm
from timm.models.vision_transformer import VisionTransformer
from PIL import Image
```

Load a model from timm and apply the absLRP rule:

```python
# load the model
device = "cuda"
model = timm.create_model("vit_base_patch16_224", pretrained=True)
# model = timm.create_model("vgg16", pretrained=True)
# model = timm.create_model("resnet50", pretrained=True)
model.eval()
model.to(device)

# apply the absLRP rule to the model
VisionTransformerAbsLRPRule().apply(model)
# VGGAbsLRPRule().apply(model)
# ResNetAbsLRPRule().apply(model)
```

Load inference images and preprocess:

```python
is_vit = isinstance(model, VisionTransformer)
x = torch.stack(
    [
        preprocess_image(Image.open("images/dog_cat.jpeg"), is_vit),
        preprocess_image(Image.open("images/hedgehog.jpg"), is_vit),
    ]
)
```

Calculate contrastive relevance using absLRP and visualize:

```python
relevancy_method = AbsLRPRelevancyMethod(model, device)
relevance = relevancy_method.relevancy(x)
visualize_batch(x, relevance, is_vit)
```

Example absLRP output

📊 Global Evaluation Metric

Global Attribution Evaluation (GAE) is a new evaluation method that offers a novel perspective on the faithfulness and robustness of an attribution method. It uses gradient-based masking and combines those results with a localization method to capture explanation quality comprehensively in a single score.

Figure 3. Top and bottom 5 scoring images on the GAE metric out of 1024 randomly sampled images - absLRP VGG ImageNet
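The masking side of such an evaluation can be illustrated with a hedged sketch: remove the input features an attribution map ranks highest and measure how far the model's confidence in its original prediction falls. A faithful attribution should produce a large drop. All names here (`confidence_drop`, the toy model) are illustrative assumptions, not the repository's GAE API, which also folds in robustness and localization.

```python
import torch

@torch.no_grad()
def confidence_drop(model, x, attribution, top_fraction=0.25):
    """Mask the top-attributed input features and return how much the
    model's confidence in its original prediction falls."""
    logits = model(x)
    pred = logits.argmax(dim=-1, keepdim=True)
    base_conf = logits.softmax(-1).gather(-1, pred)

    # zero out the top-attributed fraction of the input features
    flat = attribution.flatten(1)
    k = max(1, int(top_fraction * flat.shape[1]))
    top_idx = flat.topk(k, dim=1).indices
    mask = torch.ones_like(flat).scatter_(1, top_idx, 0.0)

    masked_conf = model(x * mask.view_as(x)).softmax(-1).gather(-1, pred)
    return (base_conf - masked_conf).squeeze(-1)

# Toy model whose prediction depends almost entirely on feature 0.
model = torch.nn.Linear(4, 2, bias=False)
with torch.no_grad():
    model.weight.copy_(torch.tensor([[5.0, 0.0, 0.0, 0.0],
                                     [0.0, 1.0, 0.0, 0.0]]))
x = torch.tensor([[1.0, 0.2, 0.0, 0.0]])
attribution = (x * model.weight[0]).abs()  # contribution-style attribution
drop = confidence_drop(model, x, attribution)
print(drop)  # masking feature 0 sharply reduces confidence in class 0
```

Because the toy attribution correctly singles out feature 0, masking it collapses the class-0 logit and the confidence drop is large; a random attribution would score far lower on this check.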

🔬 Usage

Import the required libraries:

```python
from abslrp_gae.gae.gae import GlobalEvaluationMetric
```

Define a dictionary of relevancy methods:

```python
relevancy_methods = {
    "abslrp": relevancy_method,
}
```

Run the metric:

```python
metric = GlobalEvaluationMetric()
metric.run(
    relevancy_methods=relevancy_methods,
    model=base_model,  # original model
    dataset=dataset,
    batch_size=16,
)
```

Plot the results:

```python
metric.plot()
```

📞 Contact

LinkedIn

🔗 Citation

Please use the following BibTeX entry to cite our work:

```bibtex
@article{10.1145/3649458,
  author = {Vukadin, Davor and Afri\'{c}, Petar and \v{S}ili\'{c}, Marin and Dela\v{c}, Goran},
  title = {Advancing Attribution-Based Neural Network Explainability through Relative Absolute Magnitude Layer-Wise Relevance Propagation and Multi-Component Evaluation},
  year = {2024},
  issue_date = {June 2024},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  volume = {15},
  number = {3},
  issn = {2157-6904},
  url = {https://doi.org/10.1145/3649458},
  doi = {10.1145/3649458},
  journal = {ACM Trans. Intell. Syst. Technol.},
  month = {apr},
  articleno = {47},
  numpages = {30},
  keywords = {Explainable artificial intelligence, Vision Transformer, layer-wise relevance propagation, attribution-based evaluation}
}
```

Owner

  • Login: davor10105
  • Kind: user

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Advancing Attribution-Based Explainability through
  Multi-Component Evaluation and Relative Absolute Magnitude
  Propagation
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Davor Vukadin
    email: davor.vukadin@fer.hr
    affiliation: >-
      Faculty of Electrical Engineering and Computing,
      University of Zagreb
    orcid: 'https://orcid.org/0000-0003-3309-6718'
identifiers:
  - type: doi
    value: 10.1145/3649458
repository-code: >-
  https://github.com/davor10105/relative-absolute-magnitude-propagation
abstract: >-
  A novel Layer-Wise Propagation rule, referred to as
  Relative Absolute Magnitude Propagation (RAMP). This rule
  effectively addresses the issue of incorrect relative
  attribution between neurons within the same layer that
  exhibit varying absolute magnitude activations. We apply
  this rule to three different architectures, including the very recent
  Vision Transformer.

  A new evaluation method, Global Attribution Evaluation
  (GAE), which offers a novel perspective on evaluating
  faithfulness and robustness of an attribution method by
  utilizing gradient-based masking, while combining those
  results with a localization method to achieve a
  comprehensive evaluation of explanation quality in a
  single score.
keywords:
  - xai
  - transformer
  - evaluation metric
license: Apache-2.0
date-released: '2024-04-15'

GitHub Events

Total
  • Watch event: 3
  • Push event: 23
  • Create event: 1
Last Year
  • Watch event: 3
  • Push event: 23
  • Create event: 1