Scientific Software
Updated 6 months ago

imodels — Peer-reviewed • Rank 21.0 • Science 100%

imodels: a python package for fitting interpretable models - Published in JOSS (2021)
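The kind of model imodels targets can be illustrated with the simplest possible case, a one-rule decision stump fit by exhaustive search. This is a generic hand-rolled sketch, not the package's API:

```python
def fit_stump(X, y):
    """Fit a one-feature, one-threshold rule (a decision stump) by
    exhaustive search -- about the simplest 'interpretable model'."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] >= t else 0 for row in X]
            acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    acc, f, t = best
    return f, t, acc

# Toy training data: label flips once the single feature reaches 3.0.
f, t, acc = fit_stump([[1.0], [2.0], [3.0], [4.0]], [0, 0, 1, 1])
print(f"if x[{f}] >= {t} then 1 else 0  (train acc {acc})")
# → if x[0] >= 3.0 then 1 else 0  (train acc 1.0)
```

The fitted model is its own explanation, which is the point of this family of methods.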

modelStudio — Peer-reviewed • Rank 13.5 • Science 95%

modelStudio: Interactive Studio with Explanations for ML Predictive Models - Published in JOSS (2019)

shapr — Peer-reviewed • Rank 15.0 • Science 93%

shapr: An R-package for explaining machine learning models with dependence-aware Shapley values - Published in JOSS (2019)

Topics: Artificial Intelligence and Machine Learning · Earth and Environmental Sciences (40%) · Economics (40%)
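shapr's contribution is dependence-aware estimation of Shapley values; the classical formula it builds on can be computed exactly for tiny games. A pure-Python sketch (this toy enumerates all coalitions and assumes independence, so it is not the package's estimator):

```python
from itertools import combinations
from math import factorial

def shapley_values(value, n):
    """Exact Shapley values for an n-player cooperative game.
    `value` maps a frozenset of player indices to a payoff.
    Runs in O(2^n), so it is only feasible for small n."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                s = frozenset(coalition)
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += weight * (value(s | {i}) - value(s))
    return phi

# Toy additive payoff: Shapley values recover each player's own contribution.
contrib = [3.0, 1.0, 2.0]
v = lambda s: sum(contrib[j] for j in s)
print(shapley_values(v, 3))  # ≈ [3.0, 1.0, 2.0]
```

By the efficiency axiom the values always sum to the grand-coalition payoff, here 6.0.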

SIRUS.jl — Peer-reviewed • Rank 8.0 • Science 98%

SIRUS.jl: Interpretable Machine Learning via Rule Extraction - Published in JOSS (2023)

FAT Forensics — Peer-reviewed • Rank 9.9 • Science 95%

FAT Forensics: A Python Toolbox for Implementing and Deploying Fairness, Accountability and Transparency Algorithms in Predictive Systems - Published in JOSS (2020)

pyCeterisParibus — Peer-reviewed • Rank 9.4 • Science 95%

pyCeterisParibus: explaining Machine Learning models with Ceteris Paribus Profiles in Python - Published in JOSS (2019)

Topics: Computer Science
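The underlying idea, a ceteris paribus ("what-if") profile, is just the model's response as one feature varies with all others held fixed. A minimal hand-rolled sketch (hypothetical linear scorer, not the package's API):

```python
def ceteris_paribus(model, instance, feature, grid):
    """Trace model output as one feature varies, holding the rest fixed."""
    profile = []
    for v in grid:
        x = dict(instance)   # copy, so the original instance is untouched
        x[feature] = v
        profile.append((v, model(x)))
    return profile

# Hypothetical model: a hand-written linear score, not a fitted estimator.
model = lambda x: 2.0 * x["age"] + 0.5 * x["income"]
instance = {"age": 30, "income": 100}
print(ceteris_paribus(model, instance, "age", [20, 30, 40]))
# → [(20, 90.0), (30, 110.0), (40, 130.0)]
```

Plotting such a profile per feature gives the interactive views these packages render.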

TSInterpret — Peer-reviewed • Rank 7.2 • Science 93%

TSInterpret: A Python Package for the Interpretability of Time Series Classification - Published in JOSS (2023)

quantus • Rank 16.1 • Science 77%

Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations
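One family of metrics such evaluation toolkits implement is faithfulness via a deletion test: remove features from most- to least-attributed and watch the model output fall. A toy sketch with a hypothetical linear model (not Quantus's API):

```python
def deletion_curve(model, x, attribution):
    """Zero out features from most- to least-attributed, recording the
    model output after each step. A faithful attribution yields a
    sharply dropping curve."""
    order = sorted(range(len(x)), key=lambda i: -attribution[i])
    x = list(x)  # copy so the caller's input is untouched
    curve = [model(x)]
    for i in order:
        x[i] = 0.0
        curve.append(model(x))
    return curve

# Hypothetical linear model and its exact attributions (w * x with x = 1).
w = [4.0, 1.0, 2.0]
model = lambda x: sum(wi * xi for wi, xi in zip(w, x))
print(deletion_curve(model, [1.0, 1.0, 1.0], attribution=[4.0, 1.0, 2.0]))
# → [7.0, 3.0, 1.0, 0.0]
```

Here the attribution is exact, so the curve drops by the largest weights first.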

CounterfactualExplanations • Rank 9.3 • Science 77%

A package for Counterfactual Explanations and Algorithmic Recourse in Julia.
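The core idea, finding a small change to an input that flips the prediction, can be sketched in a few lines (shown in Python for brevity; the package itself is Julia, and real recourse methods search over all features under cost constraints):

```python
def counterfactual(predict, x, feature, step, max_steps=100):
    """Greedy one-feature counterfactual search: nudge `feature` until
    the predicted class flips; return the modified instance or None."""
    target = 1 - predict(x)  # binary classifier assumed
    cf = dict(x)
    for _ in range(max_steps):
        cf[feature] += step
        if predict(cf) == target:
            return cf
    return None

# Hypothetical loan scorer: approve (1) iff income >= 50.
predict = lambda x: int(x["income"] >= 50)
print(counterfactual(predict, {"income": 42}, "income", step=1))
# → {'income': 50}
```

The returned instance is the "algorithmic recourse": the smallest nudge (in this one-dimensional toy) that changes the outcome.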

sai • Rank 1.1 • Science 77%

Using explainable AI to identify regional climate signals of stratospheric aerosol injection

transformers-interpret • Rank 19.3 • Science 54%

Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.

mlm-bias • Rank 4.9 • Science 67%

Measuring Biases in Masked Language Models for PyTorch Transformers. Support for multiple social biases and evaluation measures.

grad-cam • Rank 23.6 • Science 46%

Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
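The Grad-CAM combination step itself is simple: weight each activation map by its average gradient, sum the weighted maps, and apply ReLU. A pure-Python sketch of just that step (real implementations such as this package operate on framework tensors):

```python
def grad_cam(activations, gradients):
    """Grad-CAM combination step.
    activations, gradients: nested lists indexed [channel][row][col].
    Each channel is weighted by its global-average-pooled gradient."""
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: mean gradient per channel (global average pooling)
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a_k, fmap in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a_k * fmap[i][j]
    # ReLU keeps only regions that positively support the target class
    return [[max(0.0, v) for v in row] for row in cam]

acts = [[[1, 0], [0, 1]], [[0, 2], [2, 0]]]      # two 2x2 feature maps
grads = [[[1, 1], [1, 1]], [[-1, -1], [-1, -1]]]  # toy gradients
print(grad_cam(acts, grads))  # → [[1.0, 0.0], [0.0, 1.0]]
```

The resulting map is then upsampled to the input resolution and overlaid as a heatmap.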

GSAreport — Peer-reviewed • Rank 3.8 • Science 59%

GSAreport: Easy to Use Global Sensitivity Reporting - Published in JOSS (2022)

Topics: Mathematics (40%)

predictgmstrate • Rank 2.1 • Science 59%

Using a neural network to predict changes in the rate of global mean surface temperature warming

modelbiasesann • Rank 2.1 • Science 59%

Investigation of model biases in historical internal variability using explainable AI

phepy • Rank 3.0 • Science 57%

Intuitive evaluation of out-of-distribution detectors using simple toy examples.

gam • Rank 11.8 • Science 36%

GAM (Global Attribution Mapping) explains the landscape of neural network predictions across subpopulations

ammarlodhi255/fine-grained-approach-to-wrist-pathology-recognition • Rank 0.0 • Science 36%

This repository contains the official code for the paper "Learning from the Few: Fine-grained Approach to Wrist Pathology Recognition on a Limited Dataset".

andreartelt/ceml • Rank 9.4 • Science 23%

CEML - Counterfactuals for Explaining Machine Learning models - A Python toolbox

cpath • Rank 5.7 • Science 23%

Explaining black-box models through counterfactual paths and conditional permutations

neurox • Rank 10.0 • Science 10%

A Python library that encapsulates various methods for neuron interpretation and analysis in Deep NLP models.

agamiko/gebi • Rank 2.9 • Science 10%

GEBI: Global Explanations for Bias Identification. Open-source code for discovering bias in data, demonstrated on a skin lesion dataset.

azimuth • Science 67%

Helping AI practitioners better understand their datasets and models in text classification. From ServiceNow.

crm • Science 49%

Compositional Relational Machines (CRMs): Constructing deep neural networks that are logically explainable by design

birkhoffg/rocoursenet • Science 36%

Official repository of the paper "RoCourseNet: Distributionally Robust Training of a Prediction Aware Recourse Model".

tabsplanation • Science 18%

Experiments on counterfactual explanations for neural networks, based on the latent shift method (arXiv:2102.09475)

araucana-xai • Science 67%

Tree-based local explanations of machine learning model predictions

explainable-crack-tip-detection • Science 67%

Implementation of explainable ML for fatigue crack tip detection

pietrobarbiero/logic_explained_networks • Science 49%

Logic Explained Networks is a Python repository implementing explainable-by-design deep learning models.

finer-cam • Science 62%

Official implementation of Finer-CAM: Spotting the Difference Reveals Finer Details for Visual Explanation (CVPR'25).

explainable-cell-graphs • Science 54%

Code and experiments of the Explainable Cell Graphs (xCG) paper

arrakis-mi • Science 44%

Arrakis is a library to conduct, track and visualize mechanistic interpretability experiments.

localice • Science 10%

Local Individual Conditional Expectation (localICE) is a local explanation approach from the field of eXplainable Artificial Intelligence (XAI).

explainpolysvm • Science 44%

ExplainPolySVM is a Python package providing interpretation and explainability for Support Vector Machine models trained with polynomial kernels. It can be used with any SVM model as long as the model's components can be extracted.

hcompnet • Science 36%

Code repository for HComP-Net (ICLR'25)

contrastiveexplanation • Science 54%

Contrastive Explanation (Foil Trees), developed at TNO/Utrecht University

fgclustering • Science 65%

Explainability for random forest models.

intr • Science 75%

Official implementation of INTR: Interpretable Transformer for Fine-grained Image Classification (ICLR'24).