https://github.com/aida-ugent/fipr

The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer (ECML-PKDD 2021).

Science Score: 13.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.4%) to scientific vocabulary

Keywords

fairness graphs link-prediction projection
Last synced: 6 months ago

Repository

The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer (ECML-PKDD 2021).

Basic Info
  • Host: GitHub
  • Owner: aida-ugent
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 219 KB
Statistics
  • Stars: 1
  • Watchers: 3
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
fairness graphs link-prediction projection
Created over 4 years ago · Last pushed about 4 years ago
Metadata Files
Readme License

README.md

Fair I-Projection Regularizer

Experiment code for the paper "The KL-Divergence between a Graph Model and its Fair I-Projection as a Fairness Regularizer", published at ECML-PKDD 2021.

Running the code

Running main_all.py executes the pipeline as configured by config.py. NOTE: this will overwrite any results in the results/ folder.

Alternatively, run main_simple.py for a simple usage example.

Setup

1) Download the desired datasets. See the data/ folder for their links.

2) Install the required packages as indicated by requirements.txt.

Fair I-Projection as a regularizer in your projects

The expected interface of each (link) Predictor is documented in predictor.py.

The distance from a model to its I-projection can be computed using the FairnessLoss PyTorch module. A forward call on this module consists of two steps. First, the fair I-projection (code in fip.py) is fit to the given model values h and data points x. Second, the gradient of the fair I-projection's loss is computed with respect to the model values h.
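As a toy illustration of the KL quantity involved (this is not the fip.py implementation, whose actual interface is documented in the repository), the KL divergence between a model's independent Bernoulli link probabilities and a hypothetical fair projection of them could be computed element-wise like this:

```python
import numpy as np

def bernoulli_kl(p, q, eps=1e-12):
    """Mean KL divergence between element-wise Bernoulli distributions p and q."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1 - eps)
    q = np.clip(np.asarray(q, dtype=float), eps, 1 - eps)
    return np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q)))

# Model's link probabilities h vs. a made-up "fair projection" of them.
h = np.array([0.9, 0.2, 0.7, 0.4])
h_fair = np.array([0.8, 0.3, 0.6, 0.5])
print(bernoulli_kl(h, h_fair))  # non-negative; zero only when h == h_fair
```

The regularizer idea is that this divergence shrinks as the model's own probabilities move toward their fair I-projection.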

Two fairness notions are implemented in fairness_notions.py: 'DP' for Demographic Parity and 'EO' for Equalized Opportunity.
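To give a feel for the notions themselves (as a naive stand-in, not the I-projection-based penalty in fairness_notions.py): demographic parity asks that the expected prediction be equal across sensitive groups, so its violation can be measured as the gap between group means. The scores and group labels below are made up:

```python
import numpy as np

def dp_gap(scores, groups):
    """Absolute gap in mean predicted score between two sensitive groups (0 and 1)."""
    scores = np.asarray(scores, dtype=float)
    groups = np.asarray(groups)
    return abs(scores[groups == 0].mean() - scores[groups == 1].mean())

scores = np.array([0.9, 0.8, 0.3, 0.2])  # hypothetical link scores
groups = np.array([0, 0, 1, 1])          # hypothetical sensitive attribute
print(dp_gap(scores, groups))  # ≈ 0.6: group means 0.85 vs. 0.25
```

Equalized opportunity applies the same idea, but restricted to the positive (truly linked) examples.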

Citation

If you find our work useful in your own project, please cite our paper:

@inproceedings{buyl2021kl,
    author={Buyl, Maarten and De Bie, Tijl},
    title={The KL-Divergence Between a Graph Model and its Fair I-Projection as a Fairness Regularizer},
    booktitle={Machine Learning and Knowledge Discovery in Databases},
    year={2021},
    publisher={Springer International Publishing},
    pages={351--366}
}

Maintenance

Further development may happen in the future, and bugs will be fixed. If you have any questions or concerns, feel free to report them here or send an email to 'maarten.buyl@ugent.be'.

Owner

  • Name: Ghent University Artificial Intelligence & Data Analytics Group
  • Login: aida-ugent
  • Kind: organization
  • Email: tijl.debie@ugent.be
  • Location: Ghent


Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

requirements.txt (pypi)
  • networkx ==2.5
  • numpy ==1.19.2
  • pandas ==1.1.5
  • scikit_learn ==0.24.1
  • scipy ==1.5.2
  • torch ==1.7.1
  • tqdm ==4.54.1