Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.3%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: davor10105
  • Language: Python
  • Default Branch: master
  • Size: 16.5 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed 8 months ago
Metadata Files
Readme Citation

README.md

harmony logo

Evaluating Harmony: Neural Network Explanation Metrics and Human Perception

This repository contains the source code for a new method of evaluating the alignment between human perception and evaluation metrics for attribution-based methods.

Model fine-tuning and data preparation

The first step in evaluating an evaluation metric is to fine-tune a model on that metric. Define your backprop-enabled metric by extending the metric.LearnableMetric class and supply the appropriate trainer from the trainer module, or write your own by extending the trainer.LearnableMetricTrainer class. After fine-tuning, pass your model to the create_harmony_dataset method in the harmony_dataset module to automatically create a dataset that can be used with the web application supplied in this repository.
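The extension pattern described above can be sketched as follows. This is an illustrative stand-in only: the real LearnableMetric class lives in this repository's metric module, and its actual signature is an assumption here; the toy RMA-style metric simply measures how much attribution mass falls inside a target mask.

```python
# Illustrative sketch only: the real base class is metric.LearnableMetric in
# this repository; the __call__ signature below is an assumption.

class LearnableMetric:
    """Stand-in for metric.LearnableMetric: scores an attribution map."""

    def __call__(self, attribution, target_mask):
        raise NotImplementedError


class RelevanceMassMetric(LearnableMetric):
    """Toy localization metric in the spirit of RMA (relevance mass accuracy):
    the fraction of total attribution that lies inside the target mask."""

    def __call__(self, attribution, target_mask):
        total = sum(abs(a) for a in attribution)
        inside = sum(abs(a) for a, m in zip(attribution, target_mask) if m)
        return inside / total if total else 0.0


# Usage: attribution values for four pixels, mask marking the first two as the object.
metric = RelevanceMassMetric()
score = metric([0.5, 0.3, 0.1, 0.1], [1, 1, 0, 0])  # 0.8 of the mass is inside the mask
```

A fine-tuning loop (via trainer.LearnableMetricTrainer in the repository) would then optimize the model so that this score improves, before the model is handed to create_harmony_dataset.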

GradCAM RMA Fine-tuned | Guided-GradCAM RMA Fine-tuned
:-------------------------:|:-------------------------:
alt text | alt text
Figure 1. GradCAM attributions from localization experiments. The first row corresponds to the Focus experiment, while the second row pertains to the RMA experiment. Optimizing localization metrics improves the quality of the attribution maps. | Figure 2. GuidedBackprop attributions from localization experiments. The first row corresponds to the Focus experiment, while the second row pertains to the RMA experiment. Optimizing localization metrics further sparsifies the resulting attributions, rendering them less interpretable.

HARMONY web application

Once you have created the dataset, you can load it into the web application using Django fixtures. Then simply run the application, add the annotators to the user database, and label the previously created examples. The labels are saved to a database, which can subsequently be analyzed to report how well an evaluation metric aligns with human perception.
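The loading and launch steps above can be sketched with standard Django management commands. The fixture filename is hypothetical (the actual file is whatever create_harmony_dataset produced); the commands themselves are standard Django.

```shell
# Load the generated dataset into the app's database (fixture name is an assumption)
python manage.py loaddata harmony_dataset.json

# Create an account for managing the annotator user database
python manage.py createsuperuser

# Run the annotation web application locally
python manage.py runserver
```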

alt text Figure 3. An example from our user study. The original image and the model's prediction are presented in the first row. In the following row, we visualize the original model's attribution map for the target class, the fine-tuned model's map, and the option "Neither".

Owner

  • Login: davor10105
  • Kind: user

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Evaluating Harmony: Neural Network Explanation Metrics and
  Human Perception
message: >-
  If you use this software, please cite it using the
  metadata from this file.
type: software
authors:
  - given-names: Davor Vukadin
    email: davor.vukadin@fer.hr
    affiliation: >-
      Faculty of Electrical Engineering and Computing,
      University of Zagreb
    orcid: 'https://orcid.org/0000-0003-3309-6718'
repository-code: 'https://github.com/davor10105/harmony_app'
abstract: >-
  This repository contains the source code for a new method
  of evaluating the alignment between human perception and
  evaluation metrics for attribution-based methods. It also
  provides expert annotations for assessing the quality of
  attribution methods.
keywords:
  - xai
  - evaluation metric
  - alignment
  - dataset
  - human perception
license: Apache-2.0
date-released: '2024-05-23'

GitHub Events

Total
  • Push event: 2
Last Year
  • Push event: 2

Dependencies

requirements_finetune.txt pypi
  • captum *
  • matplotlib *
  • numpy *
  • opencv-python *
  • scipy *
  • torch *
  • torchaudio *
  • torchvision *
  • tqdm *
  • zennit *
requirements_webapp.txt pypi
  • Pillow ==10.1.0
  • django ==5.0
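The two requirements files listed above correspond to the two workflows in the README; each can be installed with pip into its own environment. The virtualenv names are hypothetical; the requirements filenames come from this repository.

```shell
# Environment for metric fine-tuning and dataset creation
pip install -r requirements_finetune.txt

# Separate environment for the Django annotation web application
pip install -r requirements_webapp.txt
```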