surrogates-tutorial
What and How of Machine Learning Transparency – ECML-PKDD 2020 Hands-on Tutorial
Science Score: 67.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ✓ DOI references: found 8 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org, zenodo.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (13.0%) to scientific vocabulary
Repository
What and How of Machine Learning Transparency – ECML-PKDD 2020 Hands-on Tutorial
Basic Info
- Host: GitHub
- Owner: fat-forensics
- License: other
- Language: Jupyter Notebook
- Default Branch: master
- Homepage: https://events.fat-forensics.org/2020_ecml-pkdd
- Size: 26.7 MB
Statistics
- Stars: 0
- Watchers: 4
- Forks: 0
- Open Issues: 0
- Releases: 2
Metadata Files
README.md
What and How of Machine Learning Transparency
Building Bespoke Explainability Tools with Interoperable Algorithmic Components
Explainability techniques for data-driven predictive models based on artificial intelligence and machine learning algorithms allow us to better understand the operation of such systems and hold them accountable[^1]. New transparency approaches are therefore developed at breakneck speed to peek inside these black boxes and interpret their decisions. Many of these techniques are introduced as monolithic tools, giving the impression of one-size-fits-all and end-to-end algorithms with limited customisability. However, such approaches are often composed of multiple interchangeable modules that need to be tuned to the problem at hand to produce meaningful explanations[^2].

This repository holds a collection of interactive, hands-on training materials (offered as Jupyter Notebooks) that guide you through the process of building and evaluating bespoke modular surrogate explainers for black-box predictions of tabular data. These resources cover the three core building blocks of this technique introduced by the bLIMEy meta-algorithm: interpretable representation composition, data sampling and explanation generation[^2].
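The three bLIMEy building blocks can be illustrated with a minimal, self-contained sketch. Note that this is a simplified LIME-style example built on scikit-learn, not the fat-forensics API used in the notebooks; the interpretable representation, sampler and surrogate model chosen below are illustrative assumptions, each of which the tutorial shows how to swap out.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Black box to be explained and the instance of interest.
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]

# (1) Interpretable representation composition: binary indicators marking
#     whether a sampled point stays within one standard deviation of the
#     explained instance in each feature.
std = X.std(axis=0)

def to_interpretable(samples):
    return (np.abs(samples - instance) < std).astype(int)

# (2) Data sampling: Gaussian perturbations around the explained instance.
rng = np.random.default_rng(42)
samples = instance + rng.normal(scale=std, size=(1000, X.shape[1]))

# (3) Explanation generation: a distance-weighted linear surrogate fitted to
#     the black-box probability of the instance's predicted class.
predicted_class = black_box.predict(instance.reshape(1, -1))[0]
target = black_box.predict_proba(samples)[:, predicted_class]
weights = np.exp(-np.linalg.norm(samples - instance, axis=1))
surrogate = Ridge().fit(to_interpretable(samples), target,
                        sample_weight=weights)

print(surrogate.coef_)  # local importance of each interpretable feature
```

Because each module is independent, the same skeleton accepts a different representation (e.g. quartile-based discretisation), sampler or surrogate family without touching the other two steps.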
The following materials are available (follow the links to see their respective descriptions):
- Hands-on Resources (Jupyter Notebooks) – notebooks directory.
- Presentation Slides – slides directory.
- Video Recordings – YouTube playlist.
These resources were used to deliver a hands-on tutorial at ECML-PKDD 2020; see https://events.fat-forensics.org/2020_ecml-pkdd for more details. Alternatively, see the paper published in the Journal of Open Source Education. The notebooks can either be launched online (via MyBinder or Google Colab) or on a personal machine (Python 3.7 or higher is required). The latter can be achieved with the following steps:
- Clone this repository:
  ```bash
  git clone --depth 1 https://github.com/fat-forensics/Surrogates-Tutorial.git
  ```
- Install the Python dependencies:
  ```bash
  pip install -r notebooks/requirements.txt
  ```
- Launch Jupyter Lab:
  ```bash
  jupyter lab
  ```
- Navigate to the notebooks directory and open the desired notebook.
Note that the code is licensed under BSD 3-Clause, and the text is covered by CC BY-NC-SA 4.0.
The CONTRIBUTING.md file provides contribution guidelines.
To reference this repository and the training materials it provides, please use:
```bibtex
@article{sokol2022what,
  title={What and How of Machine Learning Transparency:
         {Building} Bespoke Explainability Tools with Interoperable
         Algorithmic Components},
  author={Sokol, Kacper and Hepburn, Alexander and
          Santos-Rodriguez, Raul and Flach, Peter},
  journal={Journal of Open Source Education},
  volume={5},
  number={58},
  pages={175},
  publisher={The Open Journal},
  year={2022},
  doi={10.21105/jose.00175},
  url={https://events.fat-forensics.org/2020_ecml-pkdd}
}
```
or refer to the CITATION.cff file.
[^1]: Sokol, K., & Flach, P. (2021). Explainability is in the mind of the beholder: Establishing the foundations of explainable artificial intelligence. arXiv Preprint arXiv:2112.14466. https://doi.org/10.48550/arXiv.2112.14466

[^2]: Sokol, K., Hepburn, A., Santos-Rodriguez, R., & Flach, P. (2019). bLIMEy: Surrogate prediction explanations beyond LIME. Workshop on Human-Centric Machine Learning (HCML 2019) at the 33rd Conference on Neural Information Processing Systems (NeurIPS). https://doi.org/10.48550/arXiv.1910.13016
Owner
- Name: FAT Forensics
- Login: fat-forensics
- Kind: organization
- Repositories: 7
- Profile: https://github.com/fat-forensics
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use these training materials, please cite it as below."
authors:
  - family-names: "Sokol"
    given-names: "Kacper"
    orcid: "https://orcid.org/0000-0002-9869-5896"
  - family-names: "Hepburn"
    given-names: "Alexander"
    orcid: "https://orcid.org/0000-0002-2674-1478"
  - family-names: "Santos-Rodriguez"
    given-names: "Raul"
    orcid: "https://orcid.org/0000-0001-9576-3905"
  - family-names: "Flach"
    given-names: "Peter"
    orcid: "https://orcid.org/0000-0001-6857-5810"
title: "What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components"
version: 1.0
doi: 10.21105/jose.00175
date-released: 2022-12-06
url: "https://github.com/fat-forensics/Surrogates-Tutorial"
preferred-citation:
  type: article
  title: "What and How of Machine Learning Transparency: Building Bespoke Explainability Tools with Interoperable Algorithmic Components"
  journal: "Journal of Open Source Education"
  volume: 5
  number: "58"
  pages: 175
  authors:
    - family-names: "Sokol"
      given-names: "Kacper"
      orcid: "https://orcid.org/0000-0002-9869-5896"
    - family-names: "Hepburn"
      given-names: "Alexander"
      orcid: "https://orcid.org/0000-0002-2674-1478"
    - family-names: "Santos-Rodriguez"
      given-names: "Raul"
      orcid: "https://orcid.org/0000-0001-9576-3905"
    - family-names: "Flach"
      given-names: "Peter"
      orcid: "https://orcid.org/0000-0001-6857-5810"
  year: 2022
  doi: 10.21105/jose.00175
  url: "https://events.fat-forensics.org/2020_ecml-pkdd"
```
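Since CFF metadata is structured, the preferred citation above can be rendered programmatically. The stdlib-only helper below is a hypothetical sketch (the `format_preferred_citation` name and the hand-copied `example` dictionary are not part of the repository); it assumes the fields have already been parsed, e.g. by a YAML loader, into a plain dictionary following the CFF 1.2.0 schema.

```python
def format_preferred_citation(cff: dict) -> str:
    """Build an 'Authors (Year). Title. Journal vol(num), pages. DOI.' string
    from a parsed CITATION.cff dictionary (CFF 1.2.0 field names)."""
    # Fall back to the top-level metadata if no preferred-citation is given.
    pref = cff.get("preferred-citation", cff)
    authors = ", ".join(
        f"{a['family-names']}, {a['given-names'][0]}." for a in pref["authors"]
    )
    return (
        f"{authors} ({pref['year']}). {pref['title']}. "
        f"{pref['journal']} {pref['volume']}({pref['number']}), "
        f"{pref['pages']}. doi:{pref['doi']}."
    )

# Abbreviated hand-copied example mirroring the file above.
example = {
    "preferred-citation": {
        "type": "article",
        "title": "What and How of Machine Learning Transparency: Building "
                 "Bespoke Explainability Tools with Interoperable "
                 "Algorithmic Components",
        "journal": "Journal of Open Source Education",
        "volume": 5,
        "number": "58",
        "pages": 175,
        "authors": [{"family-names": "Sokol", "given-names": "Kacper"}],
        "year": 2022,
        "doi": "10.21105/jose.00175",
    }
}

print(format_preferred_citation(example))
```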
Dependencies
- actions/checkout v2 composite
- actions/upload-artifact v1 composite
- openjournals/openjournals-draft-action master composite
- actions/checkout v2.3.1 composite
- actions/setup-python v1 composite
- fat-forensics ==0.1.0
- jupyterlab *
- matplotlib ==3.3.0
- numpy ==1.21.5
- requests *
- scikit-learn ==1.0.2
- scipy ==1.7.3
- ipython >=7.23.1
- nbval ==0.9.1
- pytest ==3.9.1