ExplainableAI

Explainable AI in Julia.

https://github.com/julia-xai/explainableai.jl

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary

Keywords

attribution-methods explainable-ai feature-attribution interpretability interpretable-ai julia lrp xai

Keywords from Contributors

interactive numerics matrix-exponential projections meshing pde network-simulation hacking embedded optim
Last synced: 6 months ago

Repository

Explainable AI in Julia.

Basic Info
  • Host: GitHub
  • Owner: Julia-XAI
  • License: MIT
  • Language: Julia
  • Default Branch: main
  • Homepage:
  • Size: 41.2 MB
Statistics
  • Stars: 111
  • Watchers: 4
  • Forks: 3
  • Open Issues: 2
  • Releases: 29
Topics
attribution-methods explainable-ai feature-attribution interpretability interpretable-ai julia lrp xai
Created about 5 years ago · Last pushed 6 months ago
Metadata Files
Readme Changelog License Citation

README.md

ExplainableAI.jl


(Badge table omitted: Documentation, Build Status, Testing (Aqua, JET), Code Style (Blue, ColPrac), Citation.)

Explainable AI in Julia.

This package implements interpretability methods for black-box classifiers, with an emphasis on local explanations and attribution maps in input space. The only requirement for the model is that it is differentiable[^1]. It is similar to Captum and Zennit for PyTorch and iNNvestigate for Keras models.

[^1]: The automatic differentiation backend can be selected using ADTypes.jl.

Installation

This package supports Julia ≥1.10. To install it, open the Julia REPL and run

```julia-repl
julia> ]add ExplainableAI
```
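
A note beyond the installation command above: the example below also imports several companion packages (listed in its `using` statements), which can be installed the same way, e.g.

```julia-repl
julia> ]add VisionHeatmaps Zygote Flux Metalhead DataAugmentation HTTP FileIO ImageIO ImageInTerminal
```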

Example

Let's explain why an image of a castle is classified as such by a vision model:

```julia
using ExplainableAI
using VisionHeatmaps        # visualization of explanations as heatmaps
using Zygote                # load autodiff backend for gradient-based methods
using Flux, Metalhead       # pre-trained vision models in Flux
using DataAugmentation      # input preprocessing
using HTTP, FileIO, ImageIO # load image from URL
using ImageInTerminal       # show heatmap in terminal

# Load & prepare model
model = VGG(16, pretrain=true)

# Load input
url = HTTP.URI("https://raw.githubusercontent.com/Julia-XAI/ExplainableAI.jl/gh-pages/assets/heatmaps/castle.jpg")
img = load(url)

# Preprocess input
mean = (0.485f0, 0.456f0, 0.406f0)
std  = (0.229f0, 0.224f0, 0.225f0)
tfm = CenterResizeCrop((224, 224)) |> ImageToTensor() |> Normalize(mean, std)
input = apply(tfm, Image(img))               # apply DataAugmentation transform
input = reshape(input.data, 224, 224, 3, :)  # unpack data and add batch dimension

# Run XAI method
analyzer = SmoothGrad(model)
expl = analyze(input, analyzer)  # or: expl = analyzer(input)
heatmap(expl)                    # show heatmap using VisionHeatmaps.jl
```

By default, explanations are computed for the class with the highest activation. We can also compute explanations for a specific class, e.g. the one at output index 5:

```julia
analyze(input, analyzer, 5)  # for explanation
heatmap(input, analyzer, 5)  # for heatmap
```

(Table of heatmaps for classes "castle" and "street sign" produced by Gradient, SmoothGrad, IntegratedGradients, and InputTimesGradient; images omitted.)

> [!TIP]
> The heatmaps shown above were created using a VGG-16 vision model from Metalhead.jl that was pre-trained on the ImageNet dataset.

Since ExplainableAI.jl can be used outside of Deep Learning models and Flux.jl, we have omitted specific models and inputs from the code snippet above. The full code used to generate the heatmaps can be found here.

Depending on the method, the applied heatmapping defaults differ: sensitivity-based methods (e.g. Gradient) default to a grayscale color scheme, whereas attribution-based methods (e.g. InputTimesGradient) default to a red-white-blue color scheme. Red color indicates regions of positive relevance towards the selected class, whereas regions in blue are of negative relevance. More information on heatmapping presets can be found in the Julia-XAI documentation.
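
As a rough illustration of overriding these defaults, something like the following could be used; the keyword names here (`colorscheme`, `rangescale`) are assumptions, not taken from VisionHeatmaps.jl, so check its documentation for the actual options:

```julia
# Keyword names below are hypothetical; consult the VisionHeatmaps.jl docs.
heatmap(expl; colorscheme=:seismic, rangescale=:centered)  # red-white-blue, centered at zero
heatmap(expl; colorscheme=:grays)                          # grayscale sensitivity-style map
```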

> [!WARNING]
> ExplainableAI.jl used to contain Layer-wise Relevance Propagation (LRP). Since version v0.7.0, LRP is available as part of a separate package in the Julia-XAI ecosystem, called RelevancePropagation.jl.
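
A minimal sketch of the migrated workflow, assuming RelevancePropagation.jl kept the LRP constructor and composite names that previously lived in this package (e.g. `LRP` and `EpsilonPlusFlat`):

```julia
# Sketch under the assumption that the LRP API moved unchanged to RelevancePropagation.jl.
using RelevancePropagation
using VisionHeatmaps

composite = EpsilonPlusFlat()      # one of the composites listed below
analyzer  = LRP(model, composite)  # `model` as prepared in the example above
expl      = analyze(input, analyzer)
heatmap(expl)
```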

(Table of heatmaps for classes "castle" and "street sign" produced by LRP with the EpsilonPlus, EpsilonPlusFlat, EpsilonAlpha2Beta1, EpsilonAlpha2Beta1Flat, and EpsilonGammaBox composites, and with ZeroRule (discouraged); images omitted.)

Video Demonstration

Check out our talk at JuliaCon 2022 for a demonstration of the package.

Methods

Currently, the following analyzers are implemented (a brief usage sketch follows the list):

  • Gradient
  • InputTimesGradient
  • SmoothGrad
  • IntegratedGradients
  • GradCAM
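
As a rough usage sketch (not taken from the docs), assuming `model` and `input` are prepared as in the example above, the gradient-based analyzers can be constructed and swapped freely:

```julia
using ExplainableAI
using Zygote  # autodiff backend for the gradient-based methods

# Each gradient-based analyzer wraps the model; GradCAM additionally needs a
# specific feature layer, so it is left out of this sketch.
analyzers = [
    "Gradient"            => Gradient(model),
    "InputTimesGradient"  => InputTimesGradient(model),
    "SmoothGrad"          => SmoothGrad(model),
    "IntegratedGradients" => IntegratedGradients(model),
]

# Compute an explanation for the top predicted class with each analyzer.
explanations = [name => analyze(input, analyzer) for (name, analyzer) in analyzers]
```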

One of the design goals of the Julia-XAI ecosystem is extensibility. To implement an XAI method, take a look at the common interface defined in XAIBase.jl.

Roadmap

In the future, we would like to include:

  • PatternNet
  • DeepLift
  • LIME
  • Shapley values via ShapML.jl

Contributions are welcome!

Acknowledgements

Adrian Hill acknowledges support by the Federal Ministry of Education and Research (BMBF) for the Berlin Institute for the Foundations of Learning and Data (BIFOLD) (01IS18037A).

Owner

  • Name: Julia Explainable AI
  • Login: Julia-XAI
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Hill"
    given-names: "Adrian"
    orcid: "https://orcid.org/0009-0009-5977-301X"
title: "ExplainableAI.jl"
version: 0.1.0
date-released: 2021-10-30
url: "https://github.com/Julia-XAI/ExplainableAI.jl"

GitHub Events

Total
  • Create event: 9
  • Commit comment event: 8
  • Release event: 3
  • Watch event: 8
  • Delete event: 6
  • Issue comment event: 7
  • Push event: 39
  • Pull request event: 14
  • Fork event: 1
Last Year
  • Create event: 9
  • Commit comment event: 8
  • Release event: 3
  • Watch event: 8
  • Delete event: 6
  • Issue comment event: 7
  • Push event: 39
  • Pull request event: 14
  • Fork event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 241
  • Total Committers: 4
  • Avg Commits per committer: 60.25
  • Development Distribution Score (DDS): 0.037
Past Year
  • Commits: 27
  • Committers: 2
  • Avg Commits per committer: 13.5
  • Development Distribution Score (DDS): 0.037
Top Committers
Name Email Commits
Adrian Hill a****l@m****g 232
dependabot[bot] 4****] 6
CompatHelper Julia c****y@j****g 2
Janes Sanne 5****s 1
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 39
  • Total pull requests: 69
  • Average time to close issues: 5 months
  • Average time to close pull requests: about 11 hours
  • Total issue authors: 3
  • Total pull request authors: 3
  • Average comments per issue: 1.51
  • Average comments per pull request: 1.09
  • Merged pull requests: 65
  • Bot issues: 0
  • Bot pull requests: 4
Past Year
  • Issues: 1
  • Pull requests: 7
  • Average time to close issues: N/A
  • Average time to close pull requests: about 1 hour
  • Issue authors: 1
  • Pull request authors: 2
  • Average comments per issue: 12.0
  • Average comments per pull request: 0.43
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 1
Top Authors
Issue Authors
  • adrhill (14)
  • ceferisbarov (2)
  • JuliaTagBot (1)
Pull Request Authors
  • adrhill (28)
  • dependabot[bot] (5)
  • JeanAnNess (1)
Top Labels
Issue Labels
documentation (4) planning (2) tests (2) enhancement (2) bug (1) package (1)
Pull Request Labels
run benchmark (5) dependencies (5) BREAKING (3) enhancement (2) bug (2) github_actions (1)

Packages

  • Total packages: 1
  • Total downloads:
    • julia 5 total
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 25
juliahub.com: ExplainableAI

Explainable AI in Julia.

  • Versions: 25
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 5 Total
Rankings
Stargazers count: 8.9%
Dependent repos count: 9.9%
Average: 27.8%
Dependent packages count: 38.9%
Forks count: 53.5%
Last synced: 6 months ago

Dependencies

.github/workflows/Benchmark.yml actions
  • actions/cache v1 composite
  • actions/checkout v2 composite
  • julia-actions/setup-julia latest composite
.github/workflows/TagBot.yml actions
  • JuliaRegistries/TagBot v1 composite
.github/workflows/ci.yml actions
  • actions/cache v1 composite
  • actions/checkout v2 composite
  • codecov/codecov-action v1 composite
  • julia-actions/julia-buildpkg v1 composite
  • julia-actions/julia-processcoverage v1 composite
  • julia-actions/julia-runtest v1 composite
  • julia-actions/setup-julia v1 composite