Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 2 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, springer.com, nature.com
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (14.5%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
Algorithms for explaining machine learning models
Basic Info
- Host: GitHub
- Owner: SeldonIO
- License: other
- Language: Python
- Default Branch: master
- Homepage: https://docs.seldon.io/projects/alibi/en/stable/
- Size: 38.5 MB
Statistics
- Stars: 2,551
- Watchers: 53
- Forks: 260
- Open Issues: 155
- Releases: 33
Topics
Metadata Files
README.md
Alibi is a source-available Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
- Documentation
If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project alibi-detect.
[Image grid: Anchor explanations for images, Integrated Gradients for text, Counterfactual examples, Accumulated Local Effects]
Installation and Usage
Alibi can be installed from:
- PyPI or GitHub source (with pip)
- Anaconda (with conda/mamba)
With pip
Alibi can be installed from PyPI:
```bash
pip install alibi
```
Alternatively, the development version can be installed:
```bash
pip install git+https://github.com/SeldonIO/alibi.git
```
To take advantage of distributed computation of explanations, install alibi with ray:
```bash
pip install alibi[ray]
```
For SHAP support, install alibi as follows:
```bash
pip install alibi[shap]
```
With conda
To install from conda-forge it is recommended to use mamba, which can be installed into the base conda environment with:
```bash
conda install mamba -n base -c conda-forge
```
For the standard Alibi install:
```bash
mamba install -c conda-forge alibi
```
For distributed computing support:
```bash
mamba install -c conda-forge alibi ray
```
For SHAP support:
```bash
mamba install -c conda-forge alibi shap
```
Usage
The alibi explanation API takes inspiration from scikit-learn, consisting of distinct initialize,
fit and explain steps. We will use the AnchorTabular
explainer to illustrate the API:
```python
from alibi.explainers import AnchorTabular

# initialize and fit explainer by passing a prediction function and any other required arguments
explainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)
explainer.fit(X_train)

# explain an instance
explanation = explainer.explain(x)
```
The explanation returned is an Explanation object with attributes meta and data. meta is a dictionary
containing the explainer metadata and any hyperparameters and data is a dictionary containing everything
related to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed
via explanation.data['anchor'] (or explanation.anchor). The exact details of the available fields vary
from method to method, so we encourage the reader to become familiar with the
types of methods supported.
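The meta/data split described above can be sketched with a minimal stand-in container (an illustration of the access pattern only, not alibi's actual Explanation class; the field values below are hypothetical):

```python
class Explanation:
    """Minimal stand-in for alibi's Explanation container (sketch only)."""

    def __init__(self, meta, data):
        self.meta = meta  # explainer metadata and hyperparameters
        self.data = data  # everything related to the computed explanation

    def __getattr__(self, name):
        # Convenience access: explanation.anchor == explanation.data['anchor'].
        # __getattr__ is only called when normal attribute lookup fails,
        # so the meta and data attributes themselves are unaffected.
        try:
            return self.data[name]
        except KeyError:
            raise AttributeError(name) from None


# Hypothetical values, shaped like an AnchorTabular result
explanation = Explanation(
    meta={"name": "AnchorTabular", "params": {"threshold": 0.95}},
    data={"anchor": ["Age > 30", "Education = Bachelors"], "precision": 0.97},
)
assert explanation.anchor == explanation.data["anchor"]
```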
Supported Methods
The following tables summarize the possible use cases for each method.
Model Explanations
| Method | Models | Explanations | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |
|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|
| ALE | BB | global | ✔ | ✔ | ✔ | | | | | |
| Partial Dependence | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| PD Variance | BB WB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Permutation Importance | BB | global | ✔ | ✔ | ✔ | | | ✔ | | |
| Anchors | BB | local | ✔ | | ✔ | ✔ | ✔ | ✔ | For Tabular | |
| CEM | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | Optional | |
| Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | | No | |
| Prototype Counterfactuals | BB* TF/Keras | local | ✔ | | ✔ | | ✔ | ✔ | Optional | |
| Counterfactuals with RL | BB | local | ✔ | | ✔ | | ✔ | ✔ | ✔ | |
| Integrated Gradients | TF/Keras | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | Optional | |
| Kernel SHAP | BB | local and global | ✔ | ✔ | ✔ | | | ✔ | ✔ | ✔ |
| Tree SHAP | WB | local and global | ✔ | ✔ | ✔ | | | ✔ | Optional | |
| Similarity explanations | WB | local | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
Model Confidence
These algorithms provide instance-specific scores measuring the model confidence for making a particular prediction.
| Method | Models | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set required |
|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|
| Trust Scores | BB | ✔ | | ✔ | ✔(1) | ✔(2) | | Yes |
| Linearity Measure | BB | ✔ | ✔ | ✔ | | ✔ | | Optional |
Key:
- BB - black-box (only requires a prediction function)
- BB* - black-box, but assumes the model is differentiable
- WB - requires white-box model access; there may be limitations on the models supported
- TF/Keras - TensorFlow models via the Keras API
- Local - instance-specific explanation: why was this prediction made?
- Global - explains the model with respect to a set of instances
- (1) - depending on the model
- (2) - may require dimensionality reduction
Prototypes
These algorithms provide a distilled view of the dataset and help construct a 1-KNN interpretable classifier.
| Method | Classification | Regression | Tabular | Text | Images | Categorical Features | Train set labels |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|
| ProtoSelect | ✔ | | ✔ | ✔ | ✔ | ✔ | Optional |
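The 1-KNN classifier built on prototypes can be sketched as follows, with hand-picked prototypes standing in for the output of a selection method (a toy illustration of the idea, not ProtoSelect's selection procedure):

```python
import numpy as np


# Toy 1-nearest-prototype classifier: each query point receives the label
# of its closest prototype, so every prediction is explained by a single
# representative training example.
def predict_1nn(prototypes, proto_labels, X):
    # Squared Euclidean distance from each query point to each prototype
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return proto_labels[d2.argmin(axis=1)]


prototypes = np.array([[0.0, 0.0], [5.0, 5.0]])  # one prototype per class
proto_labels = np.array([0, 1])
X_query = np.array([[0.5, -0.2], [4.8, 5.1]])
preds = predict_1nn(prototypes, proto_labels, X_query)  # → [0, 1]
```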
References and Examples
Accumulated Local Effects (ALE, Apley and Zhu, 2016)
- Documentation
- Examples: California housing dataset, Iris dataset
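A back-of-the-envelope first-order ALE computation conveys the idea behind the method: bin a feature, average the prediction change across each bin, and accumulate (an illustrative sketch, not alibi's implementation; the model and bin count are made up):

```python
import numpy as np


def ale_1d(predict, X, feature, n_bins=10):
    # Bin edges from the empirical quantiles of the chosen feature
    edges = np.quantile(X[:, feature], np.linspace(0, 1, n_bins + 1))
    effects = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (X[:, feature] >= lo) & (X[:, feature] <= hi)
        X_lo, X_hi = X[mask].copy(), X[mask].copy()
        X_lo[:, feature], X_hi[:, feature] = lo, hi
        # Local effect: prediction change across the bin, averaged over
        # the instances that fall inside it
        effects.append(np.mean(predict(X_hi) - predict(X_lo)))
    ale = np.cumsum(effects)
    return edges[1:], ale - ale.mean()  # centered ALE at bin upper edges


def predict(X):  # toy linear model: the ALE of feature 0 should have slope 3
    return 3 * X[:, 0] + X[:, 1]


rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
z, ale = ale_1d(predict, X, feature=0)
```

For this linear model the recovered ALE curve is linear in the feature with slope 3, as expected.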
Partial Dependence (J.H. Friedman, 2001)
- Documentation
- Examples: Bike rental
Partial Dependence Variance (Greenwell et al., 2018)
- Documentation
- Examples: Friedman’s regression problem
Permutation Importance (Breiman, 2001; Fisher et al., 2018)
- Documentation
- Examples: Who's Going to Leave Next?
Anchor explanations (Ribeiro et al., 2018)
Contrastive Explanation Method (CEM, Dhurandhar et al., 2018)
- Documentation
- Examples: MNIST, Iris dataset
Counterfactual Explanations (extension of Wachter et al., 2017)
- Documentation
- Examples: MNIST
Counterfactual Explanations Guided by Prototypes (Van Looveren and Klaise, 2019)
Model-agnostic Counterfactual Explanations via RL (Samoilescu et al., 2021)
- Documentation
- Examples: MNIST, Adult income
Integrated Gradients (Sundararajan et al., 2017)
- Documentation
- Examples: MNIST example, Imagenet example, IMDB example.
Kernel Shapley Additive Explanations (Lundberg et al., 2017)
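Kernel SHAP approximates Shapley values, which can be computed exactly for a small number of features by enumerating all coalitions. The brute-force version below conveys the quantity being approximated (the model, instance, and baseline are illustrative stand-ins):

```python
from itertools import combinations
from math import factorial

import numpy as np


# Exact Shapley values by enumerating every coalition of the other features.
# Kernel SHAP approximates these values with a weighted linear regression
# instead of full enumeration, which is exponential in the feature count.
def shapley_values(f, x, baseline):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley weight for a coalition of size k
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                z_without = baseline.copy()  # coalition members take x values
                for j in S:
                    z_without[j] = x[j]
                z_with = z_without.copy()
                z_with[i] = x[i]
                # Weighted marginal contribution of feature i
                phi[i] += w * (f(z_with) - f(z_without))
    return phi


def f(z):  # toy linear model
    return 2 * z[0] + 3 * z[1] - z[2]


x = np.array([1.0, 1.0, 1.0])
baseline = np.zeros(3)
phi = shapley_values(f, x, baseline)  # → approximately [2, 3, -1]
```

For a linear model each Shapley value reduces to the coefficient times the deviation from the baseline, and the values sum to f(x) - f(baseline) (the efficiency property).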
Tree Shapley Additive Explanations (Lundberg et al., 2020)
Trust Scores (Jiang et al., 2018)
- Documentation
- Examples: MNIST, Iris dataset
Linearity Measure
- Documentation
- Examples: Iris dataset, fashion MNIST
ProtoSelect
- Documentation
- Examples: Adult Census & CIFAR10
Similarity explanations
Citations
If you use alibi in your research, please consider citing it.
BibTeX entry:
```bibtex
@article{JMLR:v22:21-0017,
  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},
  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},
  journal = {Journal of Machine Learning Research},
  year    = {2021},
  volume  = {22},
  number  = {181},
  pages   = {1-7},
  url     = {http://jmlr.org/papers/v22/21-0017.html}
}
```
Owner
- Name: Seldon
- Login: SeldonIO
- Kind: organization
- Email: hello@seldon.io
- Location: London / Cambridge
- Website: https://seldon.io
- Repositories: 40
- Profile: https://github.com/SeldonIO
- Description: Machine Learning Deployment for Kubernetes
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Klaise"
    given-names: "Janis"
    orcid: "https://orcid.org/0000-0002-7774-8047"
  - family-names: "Van Looveren"
    given-names: "Arnaud"
    orcid: "https://orcid.org/0000-0002-8347-5305"
  - family-names: "Vacanti"
    given-names: "Giovanni"
  - family-names: "Coca"
    given-names: "Alexandru"
  - family-names: "Samoilescu"
    given-names: "Robert"
  - family-names: "Scillitoe"
    given-names: "Ashley"
    orcid: "https://orcid.org/0000-0001-8971-7224"
  - family-names: "Athorne"
    given-names: "Alex"
title: "Alibi Explain: Algorithms for Explaining Machine Learning Models"
version: 0.9.6
date-released: 2024-04-18
url: "https://github.com/SeldonIO/alibi"
preferred-citation:
  type: article
  authors:
    - family-names: "Klaise"
      given-names: "Janis"
      orcid: "https://orcid.org/0000-0002-7774-8047"
    - family-names: "Van Looveren"
      given-names: "Arnaud"
      orcid: "https://orcid.org/0000-0002-8347-5305"
    - family-names: "Vacanti"
      given-names: "Giovanni"
    - family-names: "Coca"
      given-names: "Alexandru"
  journal: "Journal of Machine Learning Research"
  month: 6
  start: 1 # First page number
  end: 7 # Last page number
  title: "Alibi Explain: Algorithms for Explaining Machine Learning Models"
  issue: 181
  volume: 22
  year: 2021
  url: http://jmlr.org/papers/v22/21-0017.html
```
GitHub Events
Total
- Issues event: 5
- Watch event: 154
- Delete event: 16
- Issue comment event: 45
- Push event: 19
- Pull request review event: 12
- Pull request review comment event: 11
- Pull request event: 40
- Fork event: 11
- Create event: 14
Last Year
- Issues event: 5
- Watch event: 154
- Delete event: 16
- Issue comment event: 45
- Push event: 19
- Pull request review event: 12
- Pull request review comment event: 11
- Pull request event: 40
- Fork event: 11
- Create event: 14
Committers
Last synced: almost 3 years ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Janis Klaise | jk@s****o | 272 |
| Janis Klaise | j****e@g****m | 61 |
| RobertSamoilescu | r****u@g****m | 55 |
| dependabot[bot] | 4****]@u****m | 46 |
| Ashley Scillitoe | a****e@s****o | 39 |
| mauicv | a****e@s****o | 36 |
| alexcoca | a****3@y****k | 18 |
| arnaudvl | a****l@s****o | 16 |
| giovac73 | g****s@g****m | 14 |
| Ashley Scillitoe | a****e@g****m | 10 |
| Marco Gorelli | 3****i@u****m | 3 |
| Alex Housley | ah@s****o | 1 |
| Christopher Samiullah | C****S@u****m | 1 |
| Adrian Gonzalez-Martin | a****m@s****o | 1 |
| Sanja Simonovikj | s****s@y****m | 1 |
| James Budarz | j****z@g****m | 1 |
| abs428 | 2****8@u****m | 1 |
| Vincent Xie | v****h@g****m | 1 |
| oscarfco | 5****o@u****m | 1 |
| mauicv | a****e@g****m | 1 |
| Marco Gorelli | m****i@g****m | 1 |
| David de la Iglesia Castro | d****o@g****m | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 9 months ago
All Time
- Total issues: 51
- Total pull requests: 179
- Average time to close issues: 2 months
- Average time to close pull requests: 2 months
- Total issue authors: 25
- Total pull request authors: 13
- Average comments per issue: 1.53
- Average comments per pull request: 2.03
- Merged pull requests: 119
- Bot issues: 0
- Bot pull requests: 78
Past Year
- Issues: 4
- Pull requests: 30
- Average time to close issues: N/A
- Average time to close pull requests: 3 months
- Issue authors: 4
- Pull request authors: 3
- Average comments per issue: 0.5
- Average comments per pull request: 1.47
- Merged pull requests: 7
- Bot issues: 0
- Bot pull requests: 22
Top Authors
Issue Authors
- jklaise (12)
- RobertSamoilescu (7)
- ascillitoe (6)
- owl0695 (3)
- mauicv (2)
- Himanshu-1988 (2)
- LakshmanKishore (1)
- HevOHel (1)
- zlds123 (1)
- CodeSmileBot (1)
- fraseralex96 (1)
- hyejinhahihong (1)
- shrija2901 (1)
- PoplarTN (1)
- pranavn91 (1)
Pull Request Authors
- dependabot[bot] (112)
- jklaise (49)
- RobertSamoilescu (24)
- mauicv (14)
- jesse-c (7)
- ascillitoe (6)
- Rajakavitha1 (4)
- majolo (2)
- tanaysd (1)
- LakshmanKishore (1)
- paulb-seldon (1)
- KGKallasmaa (1)
- badcount (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: 13,482 last month (pypi)
- Total docker downloads: 6,197
- Total dependent packages: 8 (may contain duplicates)
- Total dependent repositories: 116 (may contain duplicates)
- Total versions: 38
- Total maintainers: 4
pypi.org: alibi
Algorithms for monitoring and explaining machine learning models
- Homepage: https://github.com/SeldonIO/alibi
- Documentation: https://alibi.readthedocs.io/
- License: Business Source License 1.1
- Latest release: 0.9.6 (published almost 2 years ago)
Rankings
Maintainers (4)
conda-forge.org: alibi
[Alibi](https://docs.seldon.io/projects/alibi) is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models. - [Documentation](https://docs.seldon.io/projects/alibi/en/latest/) If you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https://github.com/SeldonIO/alibi-detect). PyPI: [https://pypi.org/project/alibi/](https://pypi.org/project/alibi/)
- Homepage: https://github.com/SeldonIO/alibi
- License: Apache-2.0
- Latest release: 0.7.0 (published almost 4 years ago)
Rankings
Dependencies
- actions/checkout v3 composite
- actions/setup-python v4 composite
- codecov/codecov-action v3 composite
- mxschmitt/action-tmate v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- tj-actions/changed-files v1.1.2 composite
- catboost >=1.0.0,<2.0.0 development
- flake8 >=3.7.7,<7.0.0 development
- ipykernel >=5.1.0,<7.0.0 development
- jupytext >=1.12.0,<2.0.0 development
- mypy >=1.0,<2.0 development
- nbconvert >=6.0.7,<8.0.0 development
- pre-commit >=1.20.0,<4.0.0 development
- pytest >=5.3.5,<8.0.0 development
- pytest-cov >=2.6.1,<5.0.0 development
- pytest-custom_exit_code >=0.3.0 development
- pytest-lazy-fixture >=0.6.3,<0.7.0 development
- pytest-mock >=3.10.0,<4.0.0 development
- pytest-timeout >=1.4.2,<3.0.0 development
- pytest-xdist >=1.28.0,<4.0.0 development
- torch >=1.9.0,<3.0.0 development
- tox >=3.21.0,<5.0.0 development
- twine >3.2.0,<5.0.0 development
- types-requests >=2.25.0,<3.0.0 development
- ipykernel >=5.1.0,<7.0.0
- ipython >=7.2.0,<9.0.0
- myst-parser >=1.0,<3.0
- nbsphinx >=0.8.5,<0.10.0
- sphinx >=4.2.0,<8.0.0
- sphinx-rtd-theme >=1.0.0,<2.0.0
- sphinx_design ==0.5.0
- sphinxcontrib-apidoc >=0.3.0,<0.5.0
- typing-extensions >=3.7.4.3
- numpy >=1.16.2,
- pandas >=1.0.0,
- scikit-learn >=1.0.0,
- spacy *
- ipywidgets >=7.6 test
- seaborn >=0.9.0 test
- xgboost >=0.90 test
- actions/checkout v4 composite
- actions/checkout v3 composite
- actions/setup-python v5 composite
- snyk/actions/python-3.10 master composite