https://github.com/alan-turing-institute/fairness-monitoring
The project aims to enable a proactive fairness review approach in the early stages of AI development. It provides developer-oriented methods and tools to self-assess and monitor fairness.
Science Score: 23.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ○ codemeta.json file
- ○ .zenodo.json file
- ✓ DOI references: found 2 DOI reference(s) in README
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (12.8%) to scientific vocabulary
Repository
Basic Info
- Host: GitHub
- Owner: alan-turing-institute
- License: other
- Default Branch: main
- Size: 762 KB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
This repository has been archived and is no longer maintained. Feel free to fork the project if you'd like to continue development. Thank you to everyone who contributed and supported the project.
If you found this research useful, please consider citing:
Alpay Sabuncuoglu, Christopher Burr, and Carsten Maple. 2025. Justified Evidence Collection for Argument-based AI Fairness Assurance. In ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT ’25), June 23–26, 2025, Athens, Greece. ACM, New York, NY, USA, 16 pages. https://doi.org/10.1145/3715275.3732003
Our work on developing trustworthy and safe AI continues in this repository: alan-turing-institute/AssurancePlatform
Proactive Monitoring of AI Fairness
The project aims to enable a proactive fairness review approach in the early stages of AI development. It provides developer-oriented methods and tools to self-assess and monitor fairness. In this repository, we present a high-level overview and a list of project outputs.
Monitoring Fairness Metadata
Visit the tool's development page: https://github.com/asabuncuoglu13/faid
Throughout the development of ML models, developers and other stakeholders use various documentation formats to enhance reproducibility and communicate the details of artefacts to both internal and external stakeholders. Organizations use metadata recording formats such as model cards, data cards, and algorithmic transparency frameworks to improve transparency across development and deployment workflows. We refer to these documentation tools as "transparency artefacts": they are designed to enhance clarity, accountability, and trust in ML systems. We are developing a tool for effectively using these transparency artefacts as justified evidence, verifying that the team has taken the required actions and produced sufficient evidence for the claimed fairness arguments.
This "justified evidence" approach can enhance overall system fairness throughout the ML lifecycle by (1) providing insights into how features influence outcomes, (2) making model decisions understandable, (3) ensuring models meet fairness criteria, and (4) supporting informed decision-making. However, it's essential to recognise the limitations of current methods and to use them alongside other fairness-enhancing strategies, rather than as standalone solutions.
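As a concrete illustration of point (3), checking that a model meets a fairness criterion: the sketch below computes the demographic parity difference, a standard group-fairness metric. The data and function here are illustrative and not part of the project's tooling.

```python
# Minimal sketch: checking model predictions against a fairness
# criterion (demographic parity difference). All data is illustrative.

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between groups.

    preds  : list of 0/1 model predictions
    groups : list of group labels of the same length, e.g. "A"/"B"
    """
    rates = {}
    for g in set(groups):
        members = [p for p, m in zip(preds, groups) if m == g]
        rates[g] = sum(members) / len(members)
    values = sorted(rates.values())
    return values[-1] - values[0]

preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A gap near zero suggests the two groups receive positive predictions at similar rates; in practice this check would run alongside other metrics rather than stand alone, as the paragraph above cautions.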
An overview of the ML project lifecycle with CI/CD (figure omitted).
An overview of metadata flow throughout this lifecycle (figure omitted).
The tool, FAID (Fair AI Development), can support developers in documenting fairness-related information, which is crucial to ensure transparency and accountability in the development process. FAID’s logging framework supports comprehensive documentation by categorizing information into four key entities:
- Experiment-Level Fairness Log: Tracks high-level fairness considerations across the entire system, ensuring that all components align with the overarching fairness objectives.
- Model Log: Captures detailed information about model performance, including fairness metrics, bias assessments, and adjustments made during the development process. FAID's model metadata logging capability follows Google's model metadata format (See an example model card).
- Data Log: Documents the data sources, preprocessing steps, and any biases identified in the data, ensuring that data integrity is maintained throughout the lifecycle. (See an example datasheet).
- Risk Log: Records potential risks related to fairness and how they are mitigated, including any ethical concerns, compliance issues, and the steps taken to address them.
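To make the four-entity structure above concrete, here is a hypothetical sketch of what a single fairness metadata record could look like when serialised for version control. The field names and values are illustrative assumptions, not FAID's actual schema.

```python
import json

# Hypothetical sketch of a fairness metadata record organised around the
# four log entities described above (experiment, model, data, risk).
# Field names and values are illustrative only, not FAID's schema.
fairness_log = {
    "experiment": {
        "objective": "comparable sentiment accuracy across demographic groups",
        "status": "in-progress",
    },
    "model": {
        "name": "finbert-sentiment",
        "fairness_metrics": {"demographic_parity_gap": 0.08},
    },
    "data": {
        "sources": ["financial-news-corpus"],
        "known_biases": ["under-representation of small-cap firms"],
    },
    "risk": [
        {
            "description": "disparate error rates on non-US entities",
            "mitigation": "augment training data; monitor per-group recall",
        }
    ],
}

# Serialise so the record can be versioned alongside the code.
serialized = json.dumps(fairness_log, indent=2)
print(serialized)
```

Keeping such a record in the repository lets a reviewer trace each fairness claim back to the experiment, model, data, or risk entry that supports it.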
Fairness Evaluation of a Finance Use Case
We selected a set of evaluation and mitigation strategies that can inform the design of the tool. Data- and algorithm-related techniques were selected from Gallegos et al.'s comprehensive survey [^1].
Visit FinBERT Sentiment Analysis Fairness Evaluation repository: https://github.com/asabuncuoglu13/faid-test-financial-sentiment-analysis
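One common data-level evaluation technique of the kind surveyed by Gallegos et al. is counterfactual template probing: the same sentence template is instantiated with different group terms and the model's outputs are compared. A minimal sketch follows; the template, groups, and stub scorer are all illustrative, and a real evaluation would query the sentiment model under test (e.g. FinBERT) instead.

```python
# Minimal sketch of counterfactual template probing, a data-level bias
# evaluation technique: instantiate one template with different group
# terms and compare the model's scores. `score_sentiment` is a stub
# standing in for the actual sentiment model under test.

TEMPLATE = "The {group}-owned business reported strong quarterly earnings."
GROUPS = ["family", "immigrant", "veteran"]

def score_sentiment(text: str) -> float:
    # Stub scorer: a fixed positive score for clearly positive wording.
    return 0.9 if "strong" in text else 0.1

scores = {g: score_sentiment(TEMPLATE.format(group=g)) for g in GROUPS}
spread = max(scores.values()) - min(scores.values())

# A large spread would indicate the model treats groups differently.
print(scores)
print(f"score spread across groups: {spread:.2f}")
```

With the constant stub the spread is zero by construction; against a real model, a non-trivial spread across group terms would flag a potential bias for the data or risk log.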
Understanding equitable interaction
We utilised ISO 9241 (Ergonomics of human-system interaction) and Microsoft's Guidelines for Human-AI Interaction to assess how existing interactions can impact the overall fairness of the system.
See an example analysis here: https://asabuncuoglu13.github.io/equitable-ai-cookbook/usecases/finance/interaction.html
Tutorials and Learning Resources
- Equitable AI Cookbook: We started a new open-source community around the Equitable AI Cookbook, where you can find background information, experiment results, findings, and discussion, along with suggested reading to improve fairness in financial LLMs.
- Using LLMs on Local, HPC and Cloud: Developed in collaboration with the REG team, this resource provides useful scripts and guidelines for setting up LLMs.
[^1]: I. O. Gallegos et al., ‘Bias and Fairness in Large Language Models: A Survey’. arXiv, Sep. 01, 2023. Accessed: Oct. 17, 2023. [Online]. Available: http://arxiv.org/abs/2309.00770
Funding Information
This project is one of four projects funded in the Fairness Innovation Challenge, delivered by the Department for Science, Innovation and Technology (DSIT) and Innovate UK, in partnership with the EHRC and the ICO.
Owner
- Name: The Alan Turing Institute
- Login: alan-turing-institute
- Kind: organization
- Email: info@turing.ac.uk
- Website: https://turing.ac.uk
- Repositories: 477
- Profile: https://github.com/alan-turing-institute
The UK's national institute for data science and artificial intelligence.
GitHub Events
Total
- Watch event: 2
- Push event: 5
Last Year
- Watch event: 2
- Push event: 5
Issues and Pull Requests
Last synced: about 1 year ago
All Time
- Total issues: 0
- Total pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 0
- Total pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0