alibi-detect
Algorithms for outlier, adversarial and drift detection
Science Score: 64.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: Links to arxiv.org, ieee.org, zenodo.org
- ✓ Committers with academic emails: 1 of 26 committers (3.8%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (14.5%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
Algorithms for outlier, adversarial and drift detection
Basic Info
- Host: GitHub
- Owner: SeldonIO
- License: other
- Language: Jupyter Notebook
- Default Branch: master
- Homepage: https://docs.seldon.io/projects/alibi-detect/en/stable/
- Size: 35.3 MB
Statistics
- Stars: 2,417
- Watchers: 39
- Forks: 235
- Open Issues: 141
- Releases: 35
Topics
Metadata Files
README.md
Alibi Detect is a source-available Python library focused on outlier, adversarial and drift detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both TensorFlow and PyTorch backends are supported for drift detection.

- Documentation
For more background on the importance of monitoring outliers and distributions in a production setting, check out this talk from the Challenges in Deploying and Monitoring Machine Learning Systems ICML 2020 workshop, based on the paper Monitoring and explainability of models in production and referencing Alibi Detect.
For a thorough introduction to drift detection, check out Protecting Your Machine Learning Against Drift: An Introduction. The talk covers what drift is and why it pays to detect it, the different types of drift, how it can be detected in a principled manner and also describes the anatomy of a drift detector.
Installation and Usage
The package, alibi-detect, can be installed from:
- PyPI or GitHub source (with pip)
- Anaconda (with conda/mamba)

With pip

alibi-detect can be installed from PyPI:

```bash
pip install alibi-detect
```

Alternatively, the development version can be installed:

```bash
pip install git+https://github.com/SeldonIO/alibi-detect.git
```

To install with the TensorFlow backend:

```bash
pip install alibi-detect[tensorflow]
```

To install with the PyTorch backend:

```bash
pip install alibi-detect[torch]
```

To install with the KeOps backend:

```bash
pip install alibi-detect[keops]
```

To use the Prophet time series outlier detector:

```bash
pip install alibi-detect[prophet]
```
With conda
To install from conda-forge it is recommended to use mamba, which can be installed to the base conda environment with:
bash
conda install mamba -n base -c conda-forge
To install alibi-detect:
bash
mamba install -c conda-forge alibi-detect
Usage
We will use the VAE outlier detector to illustrate the API.
```python
from alibi_detect.od import OutlierVAE
from alibi_detect.saving import save_detector, load_detector

# initialize and fit detector
od = OutlierVAE(threshold=0.1, encoder_net=encoder_net, decoder_net=decoder_net, latent_dim=1024)
od.fit(x_train)

# make predictions
preds = od.predict(x_test)

# save and load detectors
filepath = './my_detector/'
save_detector(od, filepath)
od = load_detector(filepath)
```
The predictions are returned in a dictionary with keys meta and data. meta contains the detector's metadata, while data is itself a dictionary with the actual predictions: the outlier, adversarial or drift scores and thresholds, as well as the predictions of whether instances are, e.g., outliers or not. The exact details can vary slightly from method to method, so we encourage the reader to become familiar with the types of algorithms supported.
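As a rough illustration of this meta/data convention, the shape of a detector's output can be mocked up in plain Python. The values and some field names below are hard-coded stand-ins for this sketch, not real library output; exact keys vary per detector:

```python
# Illustrative sketch of the dictionary structure returned by a
# detector's predict() call. Values are hard-coded stand-ins.
preds = {
    'meta': {
        'name': 'OutlierVAE',       # detector class name
        'detector_type': 'outlier',
        'data_type': 'image',
    },
    'data': {
        'instance_score': [0.02, 0.35],  # per-instance outlier scores
        'is_outlier': [0, 1],            # 1 if score exceeds the threshold
        'threshold': 0.1,
    },
}

# Typical downstream use: flag which instances crossed the threshold.
flagged = [i for i, o in enumerate(preds['data']['is_outlier']) if o == 1]
print(flagged)  # [1]
```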
Supported Algorithms
The following tables show the advised use cases for each algorithm. The column Feature Level indicates whether the detection can be done at the feature level, e.g. per pixel for an image. Check the algorithm reference list for more information with links to the documentation and original papers as well as examples for each of the detectors.
Outlier Detection
| Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------|:-------:|:-----:|:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Isolation Forest | ✔ | | | | ✔ | | |
| Mahalanobis Distance | ✔ | | | | ✔ | ✔ | |
| AE | ✔ | ✔ | | | | | ✔ |
| VAE | ✔ | ✔ | | | | | ✔ |
| AEGMM | ✔ | ✔ | | | | | |
| VAEGMM | ✔ | ✔ | | | | | |
| Likelihood Ratios | ✔ | ✔ | ✔ | | ✔ | | ✔ |
| Prophet | | | ✔ | | | | |
| Spectral Residual | | | ✔ | | | ✔ | ✔ |
| Seq2Seq | | | ✔ | | | | ✔ |
Adversarial Detection
| Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
| :--- | :---: | :---: |:-----------:|:----:|:--------------------:|:------:|:-------------:|
| Adversarial AE | ✔ | ✔ | | | | | |
| Model distillation | ✔ | ✔ | ✔ | ✔ | ✔ | | |
Drift Detection
| Detector | Tabular | Image | Time Series | Text | Categorical Features | Online | Feature Level |
|:---------------------------------| :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Kolmogorov-Smirnov | ✔ | ✔ | | ✔ | ✔ | | ✔ |
| Cramér-von Mises | ✔ | ✔ | | | | ✔ | ✔ |
| Fisher's Exact Test | ✔ | | | | ✔ | ✔ | ✔ |
| Maximum Mean Discrepancy (MMD) | ✔ | ✔ | | ✔ | ✔ | ✔ | |
| Learned Kernel MMD | ✔ | ✔ | | ✔ | ✔ | | |
| Context-aware MMD | ✔ | ✔ | ✔ | ✔ | ✔ | | |
| Least-Squares Density Difference | ✔ | ✔ | | ✔ | ✔ | ✔ | |
| Chi-Squared | ✔ | | | | ✔ | | ✔ |
| Mixed-type tabular data | ✔ | | | | ✔ | | ✔ |
| Classifier | ✔ | ✔ | ✔ | ✔ | ✔ | | |
| Spot-the-diff | ✔ | ✔ | ✔ | ✔ | ✔ | | ✔ |
| Classifier Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ | | |
| Regressor Uncertainty | ✔ | ✔ | ✔ | ✔ | ✔ | | |
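To make the idea behind feature-wise drift detectors such as Kolmogorov-Smirnov concrete, the two-sample KS statistic (the maximum distance between the empirical CDFs of a reference and a test sample) can be sketched in a few lines of plain Python. This is an illustration of the underlying test, not Alibi Detect's implementation:

```python
def ks_statistic(ref, test):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    all_vals = sorted(set(ref) | set(test))

    def ecdf(sample, x):
        # fraction of points in `sample` that are <= x
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(ref, x) - ecdf(test, x)) for x in all_vals)

# Identical samples -> statistic 0; fully separated samples -> statistic 1.
same = ks_statistic([1, 2, 3, 4], [1, 2, 3, 4])
shifted = ks_statistic([1, 2, 3, 4], [11, 12, 13, 14])
print(same, shifted)  # 0.0 1.0
```

A drift detector built on this statistic would compare it (or the associated p-value) against a threshold per feature, then aggregate across features with a multiple-testing correction.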
TensorFlow and PyTorch support
The drift detectors support TensorFlow, PyTorch and (where applicable) KeOps backends. However, Alibi Detect does not install these by default. See the installation options for more details.
```python
from alibi_detect.cd import MMDDrift

cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05)
preds = cd.predict(x)
```
The same detector in PyTorch:
```python
cd = MMDDrift(x_ref, backend='pytorch', p_val=.05)
preds = cd.predict(x)
```
Or in KeOps:
```python
cd = MMDDrift(x_ref, backend='keops', p_val=.05)
preds = cd.predict(x)
```
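How an MMD-based detector arrives at a drift decision can be sketched in pure Python: estimate the squared Maximum Mean Discrepancy between the reference and test samples under an RBF kernel, then obtain a p-value by permuting the pooled data. This is a minimal scalar-data sketch of the general technique, not the library's (kernel-matrix, backend-accelerated) implementation, and the sample values are made up:

```python
import math
import random

def rbf(a, b, gamma=1.0):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (a - b) ** 2)

def mmd2(x, y, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy."""
    kxx = sum(rbf(a, b, gamma) for a in x for b in x) / len(x) ** 2
    kyy = sum(rbf(a, b, gamma) for a in y for b in y) / len(y) ** 2
    kxy = sum(rbf(a, b, gamma) for a in x for b in y) / (len(x) * len(y))
    return kxx + kyy - 2 * kxy

def permutation_p_value(x_ref, x, n_perm=200, seed=0):
    """p-value: how often a random re-split of the pooled data yields an
    MMD^2 at least as large as the observed one."""
    rng = random.Random(seed)
    observed = mmd2(x_ref, x)
    pooled = list(x_ref) + list(x)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mmd2(pooled[:len(x_ref)], pooled[len(x_ref):]) >= observed:
            count += 1
    return count / n_perm

x_ref = [0.0, 0.1, -0.2, 0.05, -0.1, 0.15]    # reference window
x_drift = [3.0, 3.1, 2.8, 3.05, 2.9, 3.15]    # clearly shifted window
p = permutation_p_value(x_ref, x_drift)
print(p < 0.05)  # drift is flagged when the p-value falls below p_val
```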
Built-in preprocessing steps
Alibi Detect also comes with various preprocessing steps, such as randomly initialized encoders, pretrained text embeddings (via the transformers library) to detect drift on, and extraction of hidden layers from machine learning models. This allows detecting different types of drift, such as covariate and predicted distribution shift. The preprocessing steps are again supported in TensorFlow and PyTorch.
```python
from functools import partial

from alibi_detect.cd import MMDDrift
from alibi_detect.cd.tensorflow import HiddenOutput, preprocess_drift

model = ...  # TensorFlow model; tf.keras.Model or tf.keras.Sequential
preprocess_fn = partial(preprocess_drift, model=HiddenOutput(model, layer=-1), batch_size=128)
cd = MMDDrift(x_ref, backend='tensorflow', p_val=.05, preprocess_fn=preprocess_fn)
preds = cd.predict(x)
```
Check the example notebooks (e.g. CIFAR10, movie reviews) for more details.
Reference List
Outlier Detection
Isolation Forest (FT Liu et al., 2008)
- Example: Network Intrusion
Mahalanobis Distance (Mahalanobis, 1936)
- Example: Network Intrusion
Auto-Encoder (AE)
- Example: CIFAR10
Variational Auto-Encoder (VAE) (Kingma et al., 2013)
- Examples: Network Intrusion, CIFAR10
Auto-Encoding Gaussian Mixture Model (AEGMM) (Zong et al., 2018)
- Example: Network Intrusion
Variational Auto-Encoding Gaussian Mixture Model (VAEGMM)
- Example: Network Intrusion
Likelihood Ratios (Ren et al., 2019)
- Examples: Genome, Fashion-MNIST vs. MNIST
Prophet Time Series Outlier Detector (Taylor et al., 2018)
- Example: Weather Forecast
Spectral Residual Time Series Outlier Detector (Ren et al., 2019)
- Example: Synthetic Dataset
Sequence-to-Sequence (Seq2Seq) Outlier Detector (Sutskever et al., 2014; Park et al., 2017)
- Examples: ECG, Synthetic Dataset
Adversarial Detection
Adversarial Auto-Encoder (Vacanti and Van Looveren, 2020)
- Example: CIFAR10
Model distillation
- Example: CIFAR10
Drift Detection
Kolmogorov-Smirnov
- Example: CIFAR10, molecular graphs, movie reviews
Cramér-von Mises
- Example: Penguins
Fisher's Exact Test
- Example: Penguins
Maximum Mean Discrepancy (Gretton et al, 2012)
- Example: CIFAR10, molecular graphs, movie reviews, Amazon reviews
Learned Kernel MMD (Liu et al, 2020)
- Example: CIFAR10
Context-aware MMD (Cobb and Van Looveren, 2022)
- Example: ECG, news topics
Chi-Squared
- Example: Income Prediction
Mixed-type tabular data
- Example: Income Prediction
Classifier (Lopez-Paz and Oquab, 2017)
- Example: CIFAR10, Amazon reviews
Spot-the-diff (adaptation of Jitkrittum et al, 2016)
- Example: MNIST and Wine quality
Classifier and Regressor Uncertainty
- Example: CIFAR10 and Wine, molecular graphs
Online Maximum Mean Discrepancy
- Example: Wine Quality, Camelyon medical imaging
Online Least-Squares Density Difference (Bu et al, 2017)
- Example: Wine Quality
Datasets
The package also contains functionality in alibi_detect.datasets to easily fetch a number of datasets for different modalities. For each dataset either the data and labels or a Bunch object with the data, labels and optional metadata are returned. Example:
```python
from alibi_detect.datasets import fetch_ecg

(X_train, y_train), (X_test, y_test) = fetch_ecg(return_X_y=True)
```
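The Bunch convention mentioned above (a dictionary whose keys are also accessible as attributes, as in scikit-learn) can be sketched in plain Python. This is an illustrative stand-in with made-up field values, not Alibi Detect's actual class:

```python
class Bunch(dict):
    """Dictionary whose keys are also accessible as attributes,
    mirroring the scikit-learn Bunch convention for datasets."""
    def __getattr__(self, key):
        try:
            return self[key]
        except KeyError:
            raise AttributeError(key)

# A dataset loader can then return data, labels and optional metadata
# together; the field names and values here are illustrative.
dataset = Bunch(data=[[0.1, 0.2], [0.3, 0.4]], target=[0, 1],
                target_names=['normal', 'outlier'])
print(dataset.target_names[dataset.target[1]])  # outlier
```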
Sequential Data and Time Series
- Genome Dataset: fetch_genome - Bacteria genomics dataset for out-of-distribution detection, released as part of Likelihood Ratios for Out-of-Distribution Detection. From the original TL;DR: The dataset contains genomic sequences of 250 base pairs from 10 in-distribution bacteria classes for training, 60 OOD bacteria classes for validation, and another 60 different OOD bacteria classes for test. There are respectively 1, 7 and again 7 million sequences in the training, validation and test sets. For detailed info on the dataset check the README.
```python
from alibi_detect.datasets import fetch_genome

(X_train, y_train), (X_val, y_val), (X_test, y_test) = fetch_genome(return_X_y=True)
```
- ECG 5000: fetch_ecg - 5000 ECGs, originally obtained from Physionet.
- NAB: fetch_nab - Any univariate time series in a DataFrame from the Numenta Anomaly Benchmark. A list with the available time series can be retrieved using alibi_detect.datasets.get_list_nab().
Images
- CIFAR-10-C: fetch_cifar10c - CIFAR-10-C (Hendrycks & Dietterich, 2019) contains the test set of CIFAR-10, but corrupted and perturbed by various types of noise, blur, brightness etc. at different levels of severity, leading to a gradual decline in a classification model's performance trained on CIFAR-10. fetch_cifar10c allows you to pick any severity level or corruption type. The list with available corruption types can be retrieved with alibi_detect.datasets.corruption_types_cifar10c(). The dataset can be used in research on robustness and drift. The original data can be found here. Example:
```python
from alibi_detect.datasets import fetch_cifar10c

corruption = ['gaussian_noise', 'motion_blur', 'brightness', 'pixelate']
X, y = fetch_cifar10c(corruption=corruption, severity=5, return_X_y=True)
```
- Adversarial CIFAR-10: fetch_attack - Load adversarial instances on a ResNet-56 classifier trained on CIFAR-10. Available attacks: Carlini-Wagner ('cw') and SLIDE ('slide'). Example:
```python
from alibi_detect.datasets import fetch_attack

(X_train, y_train), (X_test, y_test) = fetch_attack('cifar10', 'resnet56', 'cw', return_X_y=True)
```
Tabular
- KDD Cup '99: fetch_kdd - Dataset with different types of computer network intrusions. fetch_kdd allows you to select a subset of network intrusions as targets or pick only specified features. The original data can be found here.
Models
Models and/or building blocks that can be useful outside of outlier, adversarial or drift detection can be found under alibi_detect.models. Main implementations:
- PixelCNN++: alibi_detect.models.pixelcnn.PixelCNN
- Variational Autoencoder: alibi_detect.models.autoencoder.VAE
- Sequence-to-sequence model: alibi_detect.models.autoencoder.Seq2Seq
- ResNet: alibi_detect.models.resnet

Pre-trained ResNet-20/32/44 models on CIFAR-10 can be found on our Google Cloud Bucket and can be fetched as follows:
```python
from alibi_detect.utils.fetching import fetch_tf_model

model = fetch_tf_model('cifar10', 'resnet32')
```
Integrations
Alibi Detect is integrated into the machine learning model deployment platform Seldon Core and the model serving framework KFServing.
Citations
If you use alibi-detect in your research, please consider citing it.
BibTeX entry:
```bibtex
@software{alibi-detect,
  title = {Alibi Detect: Algorithms for outlier, adversarial and drift detection},
  author = {Van Looveren, Arnaud and Klaise, Janis and Vacanti, Giovanni and Cobb, Oliver and Scillitoe, Ashley and Samoilescu, Robert and Athorne, Alex},
  url = {https://github.com/SeldonIO/alibi-detect},
  version = {0.12.1.dev0},
  date = {2024-04-17},
  year = {2019}
}
```
Owner
- Name: Seldon
- Login: SeldonIO
- Kind: organization
- Email: hello@seldon.io
- Location: London / Cambridge
- Website: https://seldon.io
- Repositories: 40
- Profile: https://github.com/SeldonIO
Machine Learning Deployment for Kubernetes
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Van Looveren"
    given-names: "Arnaud"
    orcid: "https://orcid.org/0000-0002-8347-5305"
  - family-names: "Klaise"
    given-names: "Janis"
    orcid: "https://orcid.org/0000-0002-7774-8047"
  - family-names: "Vacanti"
    given-names: "Giovanni"
  - family-names: "Cobb"
    given-names: "Oliver"
  - family-names: "Scillitoe"
    given-names: "Ashley"
    orcid: "https://orcid.org/0000-0001-8971-7224"
  - family-names: "Samoilescu"
    given-names: "Robert"
  - family-names: "Athorne"
    given-names: "Alex"
title: "Alibi Detect: Algorithms for outlier, adversarial and drift detection"
version: 0.12.0
date-released: 2024-04-17
url: "https://github.com/SeldonIO/alibi-detect"
```
GitHub Events
Total
- Create event: 15
- Issues event: 2
- Watch event: 186
- Delete event: 14
- Member event: 3
- Issue comment event: 40
- Push event: 28
- Pull request review comment event: 10
- Pull request event: 50
- Pull request review event: 16
- Fork event: 11
Last Year
- Create event: 15
- Issues event: 2
- Watch event: 186
- Delete event: 14
- Member event: 3
- Issue comment event: 40
- Push event: 28
- Pull request review comment event: 10
- Pull request event: 50
- Pull request review event: 16
- Fork event: 11
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Arnaud Van Looveren | a****l@s****o | 221 |
| Ashley Scillitoe | a****e@s****o | 158 |
| Janis Klaise | jk@s****o | 109 |
| dependabot[bot] | 4****] | 68 |
| mauicv | a****e@s****o | 34 |
| cliveseldon | cc@s****o | 26 |
| Oliver Cobb | 5****b | 17 |
| RobertSamoilescu | r****u@g****m | 12 |
| Rajie Kodhandapani | r****i@s****o | 8 |
| giovac73 | g****s@g****m | 7 |
| Jesse Claven | j****n@m****m | 6 |
| Mikhail Mishin | m****x@g****m | 2 |
| Lakshman | 6****e | 2 |
| N1m6 | a****b@g****m | 2 |
| Kumar Utsav | k****v@g****m | 1 |
| Max Lowther | ml@s****o | 1 |
| Ryan Dawson | r****n@c****t | 1 |
| SangamSwadik | 3****K | 1 |
| Sherif Akoush | sa@s****o | 1 |
| Tom | 5****k | 1 |
| earthgecko | 9****o | 1 |
| kuromt | n****u@g****m | 1 |
| mbrner | m****r@t****e | 1 |
| paulb-seldon | 1****n | 1 |
| signupatgmx | 5****x | 1 |
| tmisirpash | 6****h | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 74
- Total pull requests: 185
- Average time to close issues: 2 months
- Average time to close pull requests: 3 months
- Total issue authors: 33
- Total pull request authors: 19
- Average comments per issue: 1.39
- Average comments per pull request: 2.62
- Merged pull requests: 115
- Bot issues: 1
- Bot pull requests: 78
Past Year
- Issues: 4
- Pull requests: 38
- Average time to close issues: about 1 month
- Average time to close pull requests: 24 days
- Issue authors: 3
- Pull request authors: 7
- Average comments per issue: 0.75
- Average comments per pull request: 1.32
- Merged pull requests: 14
- Bot issues: 1
- Bot pull requests: 21
Top Authors
Issue Authors
- mauicv (18)
- ascillitoe (10)
- jklaise (5)
- arnaudvl (4)
- KevinRyu (4)
- righelcpm (2)
- dependabot[bot] (2)
- sfo (2)
- Sandy4321 (1)
- gjy688 (1)
- nys3015 (1)
- slowdive42 (1)
- amrit110 (1)
- nathan-vo810 (1)
- Gaurav-Sahu-TA (1)
Pull Request Authors
- dependabot[bot] (97)
- ascillitoe (34)
- mauicv (29)
- jklaise (18)
- RobertSamoilescu (12)
- Rajakavitha1 (7)
- jesse-c (6)
- paulb-seldon (2)
- Srceh (2)
- LakshmanKishore (2)
- majolo (2)
- earthgecko (2)
- ramonpzg (2)
- sakoush (2)
- tomglk (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 3
- Total downloads: pypi 127,992 last-month
- Total docker downloads: 1,838
- Total dependent packages: 4 (may contain duplicates)
- Total dependent repositories: 81 (may contain duplicates)
- Total versions: 80
- Total maintainers: 5
pypi.org: alibi-detect
Algorithms for outlier detection, concept drift and metrics.
- Homepage: https://github.com/SeldonIO/alibi-detect
- Documentation: https://alibi-detect.readthedocs.io/
- License: Business Source License 1.1
- Latest release: 0.12.0 (published almost 2 years ago)
Rankings
Maintainers (5)
proxy.golang.org: github.com/seldonio/alibi-detect
- Documentation: https://pkg.go.dev/github.com/seldonio/alibi-detect#section-documentation
- License: other
- Latest release: v0.12.0 (published almost 2 years ago)
Rankings
conda-forge.org: alibi-detect
[Alibi Detect](https://github.com/SeldonIO/alibi-detect) is an open source Python library focused on **outlier**, **adversarial** and **drift** detection. The package aims to cover both online and offline detectors for tabular data, text, images and time series. Both **TensorFlow** and **PyTorch** backends are supported for drift detection. - [Documentation](https://docs.seldon.io/projects/alibi-detect/en/latest/) For more background on the importance of monitoring outliers and distributions in a production setting, check out [this talk](https://slideslive.com/38931758/monitoring-and-explainability-of-models-in-production?ref=speaker-37384-latest) from the *Challenges in Deploying and Monitoring Machine Learning Systems* ICML 2020 workshop, based on the paper [Monitoring and explainability of models in production](https://arxiv.org/abs/2007.06299) and referencing Alibi Detect. For a thorough introduction to drift detection, check out [Protecting Your Machine Learning Against Drift: An Introduction](https://youtu.be/tL5sEaQha5o). The talk covers what drift is and why it pays to detect it, the different types of drift, how it can be detected in a principled manner and also describes the anatomy of a drift detector. PyPI: [https://pypi.org/project/alibi-detect/](https://pypi.org/project/alibi-detect/)
- Homepage: https://github.com/SeldonIO/alibi-detect
- License: Apache-2.0
- Latest release: 0.10.4 (published over 3 years ago)
Rankings
Dependencies
- flake8 >=3.7.7,<6.0.0 development
- ipykernel >=5.1.0,<7.0.0 development
- ipywidgets >=7.6.5,<8.0.0 development
- jupytext >=1.12.0,<2.0.0 development
- mypy * development
- nbconvert >=6.0.7,<7.0.0 development
- packaging >=19.0,<22.0 development
- pre-commit >=1.20.0,<3.0.0 development
- pytest >=5.3.5,<8.0.0 development
- pytest-cov >=2.6.1,<4.0.0 development
- pytest-custom_exit_code >=0.3.0 development
- pytest-randomly >=3.5.0,<4.0.0 development
- pytest-timeout >=1.4.2,<3.0.0 development
- pytest-xdist >=1.28.0,<3.0.0 development
- pytest_cases >=3.6.8,<4.0.0 development
- tox >=3.21.0,<4.0.0 development
- twine >3.2.0,<4.0.0 development
- types-requests >=2.25,<3.0 development
- types-toml >=0.10,<1.0 development
- ipykernel >=5.1.0,<7.0.0
- ipython >=7.2.0,<9.0.0
- matplotlib >=3.0.0,<4.0.0
- myst-parser >=0.14,<0.19
- nbsphinx >=0.8.5,<0.9.0
- numpy >=1.16.2,<2.0.0
- pandas >=0.23.3,<2.0.0
- sphinx >=4.2.0,<5.1.0
- sphinx-autodoc-typehints >=1.12.0,<2.0.0
- sphinx-rtd-theme >=1.0.0,<2.0.0
- sphinx_design ==0.2.0
- sphinxcontrib-apidoc >=0.3.0,<0.4.0
- sphinxcontrib-bibtex >=2.1.0,<3.0.0
- Pillow >=5.4.1,
- catalogue >=2.0.0,
- dill >=0.3.0,
- matplotlib >=3.0.0,
- numba >=0.50.0,
- numpy >=1.16.2,
- opencv-python >=3.2.0,
- pandas >=0.23.3,
- pydantic >=1.8.0,
- requests >=2.21.0,
- scikit-image >=0.14.2,
- scikit-learn >=0.20.2,
- scipy >=1.3.0,
- toml >=0.10.1,
- tqdm >=4.28.1,
- transformers >=4.0.0,
- typing-extensions >=3.7.4.3
- nlp >=0.3.0 test
- palmerpenguins >=0.1.1 test
- seaborn >=0.9.0 test
- torchvision >=0.8.0 test
- actions/checkout v3 composite
- actions/setup-python v4 composite
- codecov/codecov-action v3 composite
- mxschmitt/action-tmate v3 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- actions/checkout v3 composite
- actions/setup-python v4 composite
- tj-actions/changed-files v1.1.2 composite