aif360

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

https://github.com/Trusted-AI/AIF360

Science Score: 46.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
    Found 9 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, springer.com
  • Committers with academic emails
    3 of 72 committers (4.2%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.8%) to scientific vocabulary

Keywords

ai artificial-intelligence bias bias-correction bias-detection bias-finder bias-reduction codait deep-learning discrimination fairness fairness-ai fairness-awareness-model fairness-testing ibm-research ibm-research-ai machine-learning python r trusted-ai

Keywords from Contributors

distributed deep-neural-networks jax transformer genome threading bioinformatics cryptocurrency cryptography audio
Last synced: 6 months ago

Repository

A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.

Basic Info
  • Host: GitHub
  • Owner: Trusted-AI
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage: https://aif360.res.ibm.com/
  • Size: 6.52 MB
Statistics
  • Stars: 2,641
  • Watchers: 90
  • Forks: 884
  • Open Issues: 210
  • Releases: 0
Topics
ai artificial-intelligence bias bias-correction bias-detection bias-finder bias-reduction codait deep-learning discrimination fairness fairness-ai fairness-awareness-model fairness-testing ibm-research ibm-research-ai machine-learning python r trusted-ai
Created over 7 years ago · Last pushed about 1 year ago
Metadata Files
Readme Contributing License

README.md

AI Fairness 360 (AIF360)

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle. The AI Fairness 360 package is available in both Python and R.

The AI Fairness 360 package includes 1) a comprehensive set of metrics for datasets and models to test for biases, 2) explanations for these metrics, and 3) algorithms to mitigate bias in datasets and models. It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. We invite you to use it and improve it.

The AI Fairness 360 interactive experience provides a gentle introduction to the concepts and capabilities. The tutorials and other notebooks offer a deeper, data scientist-oriented introduction. The complete API is also available.
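
As a quick taste of that API, here is a minimal sketch of computing a dataset metric and printing a plain-text explanation of it. It assumes AIF360 and pandas are installed; the toy data and group definitions below are purely illustrative.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.explainers import MetricTextExplainer

    # Toy data: 'sex' is the protected attribute (1 = privileged group),
    # 'label' the binary outcome (1 = favorable).
    df = pd.DataFrame({'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
                       'score': [4, 3, 5, 1, 2, 5, 3, 1],
                       'label': [1, 1, 1, 0, 0, 1, 0, 0]})
    dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                 protected_attribute_names=['sex'],
                                 favorable_label=1, unfavorable_label=0)

    # Group fairness metrics on the raw dataset.
    metric = BinaryLabelDatasetMetric(dataset,
                                      privileged_groups=[{'sex': 1}],
                                      unprivileged_groups=[{'sex': 0}])
    print(metric.disparate_impact())               # ratio of favorable-outcome rates
    print(metric.statistical_parity_difference())  # difference of those rates

    # Plain-text explanation of a metric value.
    print(MetricTextExplainer(metric).disparate_impact())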

Because the toolkit offers such a comprehensive set of capabilities, it may be confusing to figure out which metrics and algorithms are most appropriate for a given use case. To help, we have created some guidance material that can be consulted.

We have developed the package with extensibility in mind. This library is still in development. We encourage the contribution of your metrics, explainers, and debiasing algorithms.

Get in touch with us on Slack (invitation here)!

Supported bias mitigation algorithms

Supported fairness metrics

  • Comprehensive set of group fairness metrics derived from selection rates and error rates including rich subgroup fairness
  • Comprehensive set of sample distortion metrics
  • Generalized Entropy Index (Speicher et al., 2018)
  • Differential Fairness and Bias Amplification (Foulds et al., 2018)
  • Bias Scan with Multi-Dimensional Subset Scan (Zhang, Neill, 2017)
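
To give a rough sense of how several of the metrics above are accessed, here is a hedged sketch comparing a dataset of true labels against one carrying a toy classifier's predictions; the data, predictions, and group definitions are illustrative only.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import ClassificationMetric

    # True labels and hypothetical predictions for the same individuals.
    df_true = pd.DataFrame({'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
                            'label': [1, 0, 1, 0, 1, 0, 1, 0]})
    df_pred = df_true.copy()
    df_pred['label'] = [1, 0, 1, 1, 0, 0, 1, 0]

    kwargs = dict(label_names=['label'], protected_attribute_names=['sex'],
                  favorable_label=1, unfavorable_label=0)
    dataset_true = BinaryLabelDataset(df=df_true, **kwargs)
    dataset_pred = BinaryLabelDataset(df=df_pred, **kwargs)

    # Compare true vs. predicted labels, split by protected group.
    cm = ClassificationMetric(dataset_true, dataset_pred,
                              privileged_groups=[{'sex': 1}],
                              unprivileged_groups=[{'sex': 0}])
    print(cm.statistical_parity_difference())  # selection-rate-based
    print(cm.average_odds_difference())        # error-rate-based
    print(cm.generalized_entropy_index())      # Speicher et al., 2018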

Setup

R

    install.packages("aif360")

For more details regarding the R setup, please refer to instructions here.

Python

Supported Python Configurations:

| OS      | Python version |
| ------- | -------------- |
| macOS   | 3.8 – 3.11     |
| Ubuntu  | 3.8 – 3.11     |
| Windows | 3.8 – 3.11     |

(Optional) Create a virtual environment

AIF360 requires specific versions of many Python packages which may conflict with other projects on your system. A virtual environment manager is strongly recommended so that dependencies can be installed safely. If you have trouble installing AIF360, try this first.

Conda

Conda is recommended for all configurations, though Virtualenv is generally interchangeable for our purposes. If you do not already have conda installed, Miniconda is sufficient (see the difference between Anaconda and Miniconda if you are curious).

Then, to create a new Python 3.11 environment, run:

    conda create --name aif360 python=3.11
    conda activate aif360

The shell should now look like (aif360) $. To deactivate the environment, run:

    (aif360)$ conda deactivate

The prompt will return to $.

Install with pip

To install the latest stable version from PyPI, run:

    pip install aif360

Note: Some algorithms require additional dependencies (although the metrics will all work out-of-the-box). To install with certain algorithm dependencies included, run, e.g.:

    pip install 'aif360[LFR,OptimPreproc]'

or, for complete functionality, run:

    pip install 'aif360[all]'

The options for available extras are: OptimPreproc, LFR, AdversarialDebiasing, DisparateImpactRemover, LIME, ART, Reductions, FairAdapt, inFairness, LawSchoolGPA, notebooks, tests, docs, all

If you encounter any errors, try the Troubleshooting steps.

Manual installation

Clone the latest version of this repository:

    git clone https://github.com/Trusted-AI/AIF360

If you'd like to run the examples, download the datasets now and place them in their respective folders as described in aif360/data/README.md.

Then, navigate to the root directory of the project and run:

    pip install --editable '.[all]'

Run the Examples

To run the example notebooks, complete the manual installation steps above. Then, if you did not use the [all] option, install the additional requirements as follows:

    pip install -e '.[notebooks]'

Finally, if you did not already, download the datasets as described in aif360/data/README.md.

Troubleshooting

If you encounter any errors during the installation process, look for your issue here and try the solutions.

TensorFlow

See the Install TensorFlow with pip page for detailed instructions.

Note: we require 'tensorflow >= 1.13.1'.

Once tensorflow is installed, try re-running:

    pip install 'aif360[AdversarialDebiasing]'

TensorFlow is only required for use with the aif360.algorithms.inprocessing.AdversarialDebiasing class.
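
For orientation only, here is a rough sketch of how this class is typically constructed and used. It assumes TensorFlow is installed and uses the TF1-style session API via tf.compat.v1; dataset_train and dataset_test are placeholders for BinaryLabelDataset objects you have prepared (for example, as in the earlier snippets), and the group definitions are illustrative.

    import tensorflow.compat.v1 as tf
    from aif360.algorithms.inprocessing import AdversarialDebiasing

    tf.disable_eager_execution()   # the class expects a TF1-style session
    sess = tf.Session()

    debiaser = AdversarialDebiasing(privileged_groups=[{'sex': 1}],
                                    unprivileged_groups=[{'sex': 0}],
                                    scope_name='adv_debiasing',
                                    sess=sess,
                                    debias=True)

    # dataset_train / dataset_test: BinaryLabelDataset placeholders.
    debiaser.fit(dataset_train)                       # trains classifier + adversary
    dataset_test_pred = debiaser.predict(dataset_test)

    sess.close()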

CVXPY

On macOS, you may first have to install the Xcode Command Line Tools if you have not done so previously:

    xcode-select --install

On Windows, you may need to download the Microsoft C++ Build Tools for Visual Studio 2019. See the CVXPY Install page for up-to-date instructions.

Then, try reinstalling via:

    pip install 'aif360[OptimPreproc]'

CVXPY is only required for use with the aif360.algorithms.preprocessing.OptimPreproc class.

Using AIF360

The examples directory contains a diverse collection of Jupyter notebooks that use AI Fairness 360 in various ways. Both tutorials and demos illustrate working code using AIF360. Tutorials provide additional discussion that walks the user through the various steps of the notebook. See the details about tutorials and demos here.
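
To convey the before/after workflow those notebooks typically follow, here is a hedged sketch that applies one preprocessing algorithm, Reweighing, to the toy dataset from the earlier snippet and re-checks a group metric; the data and group definitions remain illustrative.

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    privileged_groups = [{'sex': 1}]
    unprivileged_groups = [{'sex': 0}]

    df = pd.DataFrame({'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
                       'score': [4, 3, 5, 1, 2, 5, 3, 1],
                       'label': [1, 1, 1, 0, 0, 1, 0, 0]})
    dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                 protected_attribute_names=['sex'],
                                 favorable_label=1, unfavorable_label=0)

    def spd(ds):
        """Statistical parity difference; 0.0 means equal favorable-outcome rates."""
        return BinaryLabelDatasetMetric(
            ds, privileged_groups=privileged_groups,
            unprivileged_groups=unprivileged_groups).statistical_parity_difference()

    print('before reweighing:', spd(dataset))

    # Reweighing learns instance weights that balance outcomes across groups.
    rw = Reweighing(unprivileged_groups=unprivileged_groups,
                    privileged_groups=privileged_groups)
    dataset_transf = rw.fit_transform(dataset)
    print('after reweighing: ', spd(dataset_transf))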

Citing AIF360

A technical description of AI Fairness 360 is available in this paper. Below is the BibTeX entry for this paper.

@misc{aif360-oct-2018,
  title = "{AI Fairness} 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias",
  author = {Rachel K. E. Bellamy and Kuntal Dey and Michael Hind and Samuel C. Hoffman and Stephanie Houde and Kalapriya Kannan and Pranay Lohia and Jacquelyn Martino and Sameep Mehta and Aleksandra Mojsilovic and Seema Nagar and Karthikeyan Natesan Ramamurthy and John Richards and Diptikalyan Saha and Prasanna Sattigeri and Moninder Singh and Kush R. Varshney and Yunfeng Zhang},
  month = oct,
  year = {2018},
  url = {https://arxiv.org/abs/1810.01943}
}

AIF360 Videos

  • Introductory video to AI Fairness 360 by Kush Varshney, September 20, 2018 (32 mins)

Contributing

The development fork for Rich Subgroup Fairness (inprocessing/gerryfair_classifier.py) is here. Contributions are welcome and a list of potential contributions from the authors can be found here.

Owner

  • Name: Trusted-AI
  • Login: Trusted-AI
  • Kind: organization
  • Email: info@lfai.foundation
  • Location: IBM

This GitHub org hosts LF AI Foundation projects in the category of Trusted and Responsible AI.

GitHub Events

Total
  • Issues event: 5
  • Watch event: 209
  • Delete event: 1
  • Issue comment event: 13
  • Push event: 1
  • Pull request event: 11
  • Pull request review event: 20
  • Pull request review comment event: 27
  • Fork event: 52
  • Create event: 1
Last Year
  • Issues event: 5
  • Watch event: 209
  • Delete event: 1
  • Issue comment event: 13
  • Push event: 1
  • Pull request event: 11
  • Pull request review event: 20
  • Pull request review comment event: 27
  • Fork event: 52
  • Create event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 358
  • Total Committers: 72
  • Avg Commits per committer: 4.972
  • Development Distribution Score (DDS): 0.609
Past Year
  • Commits: 2
  • Committers: 2
  • Avg Commits per committer: 1.0
  • Development Distribution Score (DDS): 0.5
Top Committers
Name Email Commits
Samuel Hoffman h****c@g****m 140
Samuel Hoffman s****n@i****m 20
Animesh Singh s****n@u****m 20
PRASANNA SATTIGERI p****g@u****m 14
U-AzureAD\MichaelHind h****m@u****m 12
Victor Akinwande v****1@i****m 12
Prasanna Sattigeri p****g@P****l 12
milevavantuyl m****l@w****u 10
saishruthi s****n@i****m 8
nrkarthikeyan k****n@a****u 8
Michael Hind h****d@a****g 7
sohiniu s****y@g****m 6
Josue Rodriguez j****4@g****m 6
Gabriela de Queiroz g****z@g****m 4
prcvih 4****h 4
Ketan Barve b****k@u****m 3
dependabot[bot] 4****] 3
Moninder Singh m****r@u****m 3
mfeffer f****8@g****m 2
krvarshney k****h@g****m 2
Sreeja Gaddamidi 3****g 2
Romeo Kienzler r****r@g****m 2
James Budarz j****z@g****m 2
Ivette Sulca 5****a 2
Christian Kadner c****r@u****m 2
imgbot[bot] 3****] 2
Adebayo-Oshingbesan 1****n 2
Karthi Ramamurthy k****a@u****m 2
phantom-duck 3****k 2
Adrin Jalali a****i@g****m 2
and 42 more...
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 121
  • Total pull requests: 115
  • Average time to close issues: 5 months
  • Average time to close pull requests: about 2 months
  • Total issue authors: 54
  • Total pull request authors: 65
  • Average comments per issue: 1.74
  • Average comments per pull request: 1.25
  • Merged pull requests: 42
  • Bot issues: 0
  • Bot pull requests: 4
Past Year
  • Issues: 4
  • Pull requests: 9
  • Average time to close issues: N/A
  • Average time to close pull requests: about 6 hours
  • Issue authors: 4
  • Pull request authors: 5
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.11
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • anupamamurthi (29)
  • hoffmansc (10)
  • pradeepdev-1995 (9)
  • nrkarthikeyan (7)
  • mnagired (4)
  • SSaishruthi (4)
  • phantom-duck (3)
  • makoeppel (3)
  • MrMadium (2)
  • DataAnalysisA (2)
  • wangzh1998 (2)
  • jmarecek (2)
  • aroma123 (2)
  • HuangruiChu (2)
  • haas-christian (1)
Pull Request Authors
  • hoffmansc (36)
  • codell2 (6)
  • milevavantuyl (5)
  • dependabot[bot] (5)
  • anupamamurthi (4)
  • SurekhaSuresh (4)
  • phantom-duck (4)
  • asmitahajra (3)
  • ivesulca (2)
  • giordanoDaloisio (2)
  • ShorthillsAI (2)
  • sanspareilsmyn (2)
  • EktaBhaskar (2)
  • dharmod (2)
  • 13Shailja (2)
Top Labels
Issue Labels
good first issue (55) easy (24) infra (22) medium (18) datasets (16) alreadyassigned (11) epic (8) R (5) metrics (5) advanced (5) mitigation (5) help wanted (4) contribution welcome (4) general (3) bug (2)
Pull Request Labels
R (6) dependencies (5) python (5)

Packages

  • Total packages: 5
  • Total downloads:
    • pypi: 21,698 last month
  • Total dependent packages: 14
    (may contain duplicates)
  • Total dependent repositories: 100
    (may contain duplicates)
  • Total versions: 38
  • Total maintainers: 4
pypi.org: aif360

IBM AI Fairness 360

  • Versions: 12
  • Dependent Packages: 12
  • Dependent Repositories: 99
  • Downloads: 21,647 Last month
  • Docker Downloads: 0
Rankings
Dependent packages count: 1.3%
Dependent repos count: 1.5%
Stargazers count: 1.5%
Forks count: 1.6%
Average: 2.2%
Downloads: 3.5%
Docker downloads count: 4.1%
Maintainers (3)
Last synced: 6 months ago
proxy.golang.org: github.com/trusted-ai/aif360
  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.7%
Dependent repos count: 5.8%
Last synced: 6 months ago
proxy.golang.org: github.com/Trusted-AI/AIF360
  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 0
Rankings
Dependent packages count: 5.5%
Average: 5.7%
Dependent repos count: 5.8%
Last synced: 6 months ago
pypi.org: aif360-fork2

IBM AI Fairness 360

  • Versions: 1
  • Dependent Packages: 2
  • Dependent Repositories: 0
  • Downloads: 51 Last month
Rankings
Stargazers count: 1.5%
Forks count: 1.7%
Dependent packages count: 7.0%
Average: 10.2%
Dependent repos count: 30.5%
Maintainers (1)
Last synced: 6 months ago
conda-forge.org: aif360

The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.

  • Versions: 3
  • Dependent Packages: 0
  • Dependent Repositories: 1
Rankings
Forks count: 6.2%
Stargazers count: 8.9%
Average: 22.6%
Dependent repos count: 24.0%
Dependent packages count: 51.4%
Last synced: 6 months ago