alibi

Algorithms for explaining machine learning models

https://github.com/seldonio/alibi

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, springer.com, nature.com
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (14.5%) to scientific vocabulary

Keywords

counterfactual explanations interpretability machine-learning xai

Keywords from Contributors

data-drift adversarial anomaly concept-drift drift-detection images outlier semi-supervised-learning tabular-data unsupervised-learning
Last synced: 6 months ago

Repository

Algorithms for explaining machine learning models

Basic Info
Statistics
  • Stars: 2,551
  • Watchers: 53
  • Forks: 260
  • Open Issues: 155
  • Releases: 33
Topics
counterfactual explanations interpretability machine-learning xai
Created almost 7 years ago · Last pushed 6 months ago
Metadata Files
Readme · Changelog · Contributing · License · Citation

README.md

Alibi Logo



Alibi is a source-available Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.

  • [Documentation](https://docs.seldon.io/projects/alibi/en/latest/)

If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.


Example figures: Anchor explanations for images · Integrated Gradients for text · Counterfactual examples · Accumulated Local Effects


Installation and Usage

Alibi can be installed from:

  • PyPI or GitHub source (with pip)
  • Anaconda (with conda/mamba)

With pip

  • Alibi can be installed from PyPI:

    ```bash
    pip install alibi
    ```

  • Alternatively, the development version can be installed:

    ```bash
    pip install git+https://github.com/SeldonIO/alibi.git
    ```

  • To take advantage of distributed computation of explanations, install alibi with ray:

    ```bash
    pip install alibi[ray]
    ```

  • For SHAP support, install alibi as follows:

    ```bash
    pip install alibi[shap]
    ```

With conda

To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:

```bash
conda install mamba -n base -c conda-forge
```

  • For the standard Alibi install:

    ```bash
    mamba install -c conda-forge alibi
    ```

  • For distributed computing support:

    ```bash
    mamba install -c conda-forge alibi ray
    ```

  • For SHAP support:

    ```bash
    mamba install -c conda-forge alibi shap
    ```
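
Whichever install route you use, a quick import check confirms the package is available (a minimal sketch; it only assumes the install succeeded):

```python
# sanity check: the import should succeed and report a version
import alibi
import alibi.explainers

print(alibi.__version__)
# list the public names exposed by the explainers subpackage
print([name for name in dir(alibi.explainers) if not name.startswith("_")])
```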

Usage

The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize, fit and explain steps. We will use the AnchorTabular explainer to illustrate the API:

```python
from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function
# and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)
```

The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary containing the explainer metadata and any hyperparameters, and data is a dictionary containing everything related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed via explanation.data['anchor'] (or explanation.anchor). The exact details of the available fields vary from method to method, so we encourage the reader to become familiar with the types of methods supported.
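
For a concrete picture of these steps, here is a minimal end-to-end sketch. The dataset, classifier and printed fields are illustrative choices, not part of the README, and it assumes scikit-learn is installed alongside alibi:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

# illustrative data and model (any classifier exposing predict_proba works)
data = load_iris()
X_train, y_train = data.data, data.target
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# the prediction function is the only model access the explainer needs (black-box)
explainer = AnchorTabular(clf.predict_proba, feature_names=data.feature_names)
explainer.fit(X_train)

explanation = explainer.explain(X_train[0])
print(explanation.meta['name'])       # explainer metadata, e.g. 'AnchorTabular'
print(explanation.anchor)             # human-readable anchor rules
print(explanation.data['precision'])  # precision of the anchor on sampled data
```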

Supported Methods

The following tables summarize the possible use cases for each method.

Model Explanations

| Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
|:-------|:------:|:------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|
| ALE | BB | global | ✔ | ✔ | ✔ | | | | | |
| Partial Dependence | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| PD Variance | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Permutation Importance | BB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Anchors | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔ | For Tabular | |
| CEM | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | Optional | |
| Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | No | |
| Prototype Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔ | Optional | |
| Counterfactuals with RL | BB | local | ✔ | | ✔ | | ✔ | ✔ | ✔ | |
| Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional | |
| Kernel SHAP | BB | local, global | ✔ | ✔ | ✔ | | | ✔ | ✔ | ✔ |
| Tree SHAP | WB | local, global | ✔ | ✔ | ✔ | | | ✔ | Optional | |
| Similarity explanations | WB | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
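
By way of contrast with the local Anchor example above, global methods from the table follow the same initialize/explain pattern. A sketch using ALE, reusing the illustrative `clf`, `X_train` and `data` from the earlier example:

```python
from alibi.explainers import ALE

# ALE is black-box and global: it only needs the prediction function
# and a dataset over which to accumulate local effects
ale = ALE(clf.predict_proba,
          feature_names=data.feature_names,
          target_names=list(data.target_names))
ale_exp = ale.explain(X_train)

# per-feature ALE curves live in the explanation's data dictionary
print(ale_exp.meta['name'])                 # 'ALE'
print(ale_exp.data['ale_values'][0].shape)  # effects for the first feature
```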

Model Confidence

These algorithms provide instance-specific scores measuring the model confidence for making a particular prediction.

| Method | Models | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set required |
|:-------|:-------|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|
| Trust Scores | BB | ✔ | | ✔ | ✔(1) | ✔(2) | | Yes |
| Linearity Measure | BB | ✔ | ✔ | ✔ | | ✔ | | Optional |

Key:
  • BB - black-box (only require a prediction function)
  • BB* - black-box but assumes the model is differentiable
  • WB - requires white-box model access; there may be limitations on models supported
  • TF/Keras - TensorFlow models via the Keras API
  • Local - instance-specific explanation: why was this prediction made?
  • Global - explains the model with respect to a set of instances
  • (1) - depending on the model
  • (2) - may require dimensionality reduction
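
A sketch of the Trust Scores API under the same illustrative setup as before (the kNN-based score compares the distance to the predicted class against the distance to the other classes; `classes=3` matches the iris example):

```python
from alibi.confidence import TrustScore

# fit the trust scorer on the training data and labels
ts = TrustScore()
ts.fit(X_train, y_train, classes=3)

# score a batch of predictions made by the classifier
y_pred = clf.predict(X_train[:5])
score, closest_class = ts.score(X_train[:5], y_pred)
print(score)          # higher = more agreement with training-data neighbours
print(closest_class)  # nearest class other than the predicted one
```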

Prototypes

These algorithms provide a distilled view of the dataset and help construct a 1-KNN interpretable classifier.

| Method | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set labels |
|:-------|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:----------------:|
| ProtoSelect | ✔ | | ✔ | ✔ | ✔ | ✔ | Optional |


Citations

If you use alibi in your research, please consider citing it.

BibTeX entry:

@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}

Owner

  • Name: Seldon
  • Login: SeldonIO
  • Kind: organization
  • Email: hello@seldon.io
  • Location: London / Cambridge

Organization description: Machine Learning Deployment for Kubernetes

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Klaise"
  given-names: "Janis"
  orcid: "https://orcid.org/0000-0002-7774-8047"
- family-names: "Van Looveren"
  given-names: "Arnaud"
  orcid: "https://orcid.org/0000-0002-8347-5305"
- family-names: "Vacanti"
  given-names: "Giovanni"
- family-names: "Coca"
  given-names: "Alexandru"
- family-names: "Samoilescu"
  given-names: "Robert"
- family-names: "Scillitoe"
  given-names: "Ashley"
  orcid: "https://orcid.org/0000-0001-8971-7224"
- family-names: "Athorne"
  given-names: "Alex"
title: "Alibi Explain: Algorithms for Explaining Machine Learning Models"
version: 0.9.6
date-released: 2024-04-18
url: "https://github.com/SeldonIO/alibi"
preferred-citation:
  type: article
  authors:
  - family-names: "Klaise"
    given-names: "Janis"
    orcid: "https://orcid.org/0000-0002-7774-8047"
  - family-names: "Van Looveren"
    given-names: "Arnaud"
    orcid: "https://orcid.org/0000-0002-8347-5305"
  - family-names: "Vacanti"
    given-names: "Giovanni"
  - family-names: "Coca"
    given-names: "Alexandru"
  journal: "Journal of Machine Learning Research"
  month: 6
  start: 1 # First page number
  end: 7 # Last page number
  title: "Alibi Explain: Algorithms for Explaining Machine Learning Models"
  issue: 181
  volume: 22
  year: 2021
  url: http://jmlr.org/papers/v22/21-0017.html

GitHub Events

Total
  • Issues event: 5
  • Watch event: 154
  • Delete event: 16
  • Issue comment event: 45
  • Push event: 19
  • Pull request review event: 12
  • Pull request review comment event: 11
  • Pull request event: 40
  • Fork event: 11
  • Create event: 14
Last Year
  • Issues event: 5
  • Watch event: 154
  • Delete event: 16
  • Issue comment event: 45
  • Push event: 19
  • Pull request review event: 12
  • Pull request review comment event: 11
  • Pull request event: 40
  • Fork event: 11
  • Create event: 14

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 581
  • Total Committers: 22
  • Avg Commits per committer: 26.409
  • Development Distribution Score (DDS): 0.532
Past Year
  • Commits: 191
  • Committers: 8
  • Avg Commits per committer: 23.875
  • Development Distribution Score (DDS): 0.749
Top Committers
| Name | Email | Commits |
|:-----|:------|--------:|
| Janis Klaise | jk@s****o | 272 |
| Janis Klaise | j****e@g****m | 61 |
| RobertSamoilescu | r****u@g****m | 55 |
| dependabot[bot] | 4****]@u****m | 46 |
| Ashley Scillitoe | a****e@s****o | 39 |
| mauicv | a****e@s****o | 36 |
| alexcoca | a****3@y****k | 18 |
| arnaudvl | a****l@s****o | 16 |
| giovac73 | g****s@g****m | 14 |
| Ashley Scillitoe | a****e@g****m | 10 |
| Marco Gorelli | 3****i@u****m | 3 |
| Alex Housley | ah@s****o | 1 |
| Christopher Samiullah | C****S@u****m | 1 |
| Adrian Gonzalez-Martin | a****m@s****o | 1 |
| Sanja Simonovikj | s****s@y****m | 1 |
| James Budarz | j****z@g****m | 1 |
| abs428 | 2****8@u****m | 1 |
| Vincent Xie | v****h@g****m | 1 |
| oscarfco | 5****o@u****m | 1 |
| mauicv | a****e@g****m | 1 |
| Marco Gorelli | m****i@g****m | 1 |
| David de la Iglesia Castro | d****o@g****m | 1 |

Issues and Pull Requests

Last synced: 9 months ago

All Time
  • Total issues: 51
  • Total pull requests: 179
  • Average time to close issues: 2 months
  • Average time to close pull requests: 2 months
  • Total issue authors: 25
  • Total pull request authors: 13
  • Average comments per issue: 1.53
  • Average comments per pull request: 2.03
  • Merged pull requests: 119
  • Bot issues: 0
  • Bot pull requests: 78
Past Year
  • Issues: 4
  • Pull requests: 30
  • Average time to close issues: N/A
  • Average time to close pull requests: 3 months
  • Issue authors: 4
  • Pull request authors: 3
  • Average comments per issue: 0.5
  • Average comments per pull request: 1.47
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 22
Top Authors
Issue Authors
  • jklaise (12)
  • RobertSamoilescu (7)
  • ascillitoe (6)
  • owl0695 (3)
  • mauicv (2)
  • Himanshu-1988 (2)
  • LakshmanKishore (1)
  • HevOHel (1)
  • zlds123 (1)
  • CodeSmileBot (1)
  • fraseralex96 (1)
  • hyejinhahihong (1)
  • shrija2901 (1)
  • PoplarTN (1)
  • pranavn91 (1)
Pull Request Authors
  • dependabot[bot] (112)
  • jklaise (49)
  • RobertSamoilescu (24)
  • mauicv (14)
  • jesse-c (7)
  • ascillitoe (6)
  • Rajakavitha1 (4)
  • majolo (2)
  • tanaysd (1)
  • LakshmanKishore (1)
  • paulb-seldon (1)
  • KGKallasmaa (1)
  • badcount (1)
Top Labels
Issue Labels
Type: Bug (7) · TreeShap (4) · Type: Question (3) · Priority: High (3) · KernelShap (2) · Type: Maintenance (2) · Priority: Low (2) · Type: Docs (2) · Type: Enhancement (2) · Good first issue (1) · Type: Serialization (1) · upstream (1) · Effort: S (1) · Type: Testing (1) · Type: Method extension (1) · Engineering (1) · Priority: Medium (1) · internal-mle (1)
Pull Request Labels
dependencies (111) · python (26) · WIP (7) · DO NOT MERGE (5) · Type: Bug (1) · hacktoberfest-accepted (1) · github_actions (1)

Packages

  • Total packages: 2
  • Total downloads:
    • pypi: 13,482 last month
  • Total docker downloads: 6,197
  • Total dependent packages: 8
    (may contain duplicates)
  • Total dependent repositories: 116
    (may contain duplicates)
  • Total versions: 38
  • Total maintainers: 4
pypi.org: alibi

Algorithms for monitoring and explaining machine learning models

  • Versions: 34
  • Dependent Packages: 8
  • Dependent Repositories: 116
  • Downloads: 13,482 last month
  • Docker Downloads: 6,197
Rankings
Dependent packages count: 1.1%
Docker downloads count: 1.4%
Dependent repos count: 1.4%
Stargazers count: 1.5%
Average: 1.8%
Downloads: 2.2%
Forks count: 3.2%
Last synced: 6 months ago
conda-forge.org: alibi

[Alibi](https://docs.seldon.io/projects/alibi) is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models ([Documentation](https://docs.seldon.io/projects/alibi/en/latest/)). If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https://github.com/SeldonIO/alibi-detect). PyPI: [https://pypi.org/project/alibi/](https://pypi.org/project/alibi/)

  • Versions: 4
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Stargazers count: 8.4%
Forks count: 11.3%
Average: 26.2%
Dependent repos count: 34.0%
Dependent packages count: 51.2%
Last synced: 6 months ago

Dependencies

.github/workflows/ci.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • codecov/codecov-action v3 composite
  • mxschmitt/action-tmate v3 composite
.github/workflows/test_all_notebooks.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
.github/workflows/test_changed_notebooks.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • tj-actions/changed-files v1.1.2 composite
requirements/dev.txt pypi
  • catboost >=1.0.0,<2.0.0 development
  • flake8 >=3.7.7,<7.0.0 development
  • ipykernel >=5.1.0,<7.0.0 development
  • jupytext >=1.12.0,<2.0.0 development
  • mypy >=1.0,<2.0 development
  • nbconvert >=6.0.7,<8.0.0 development
  • pre-commit >=1.20.0,<4.0.0 development
  • pytest >=5.3.5,<8.0.0 development
  • pytest-cov >=2.6.1,<5.0.0 development
  • pytest-custom_exit_code >=0.3.0 development
  • pytest-lazy-fixture >=0.6.3,<0.7.0 development
  • pytest-mock >=3.10.0,<4.0.0 development
  • pytest-timeout >=1.4.2,<3.0.0 development
  • pytest-xdist >=1.28.0,<4.0.0 development
  • torch >=1.9.0,<3.0.0 development
  • tox >=3.21.0,<5.0.0 development
  • twine >3.2.0,<5.0.0 development
  • types-requests >=2.25.0,<3.0.0 development
requirements/docs.txt pypi
  • ipykernel >=5.1.0,<7.0.0
  • ipython >=7.2.0,<9.0.0
  • myst-parser >=1.0,<3.0
  • nbsphinx >=0.8.5,<0.10.0
  • sphinx >=4.2.0,<8.0.0
  • sphinx-rtd-theme >=1.0.0,<2.0.0
  • sphinx_design ==0.5.0
  • sphinxcontrib-apidoc >=0.3.0,<0.5.0
  • typing-extensions >=3.7.4.3
setup.py pypi
  • numpy >=1.16.2,
  • pandas >=1.0.0,
  • scikit-learn >=1.0.0,
  • spacy *
testing/requirements.txt pypi
  • ipywidgets >=7.6 test
  • seaborn >=0.9.0 test
  • xgboost >=0.90 test
.github/workflows/security.yaml actions
  • actions/checkout v4 composite
  • actions/checkout v3 composite
  • actions/setup-python v5 composite
  • snyk/actions/python-3.10 master composite