https://github.com/berenslab/disentangling-retinal-images

This repository contains the code for the paper "Disentangling representations of retinal images with generative models".

Science Score: 49.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, sciencedirect.com, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.8%) to scientific vocabulary

Keywords

causality deep-learning disentanglement-learning generative-model retinal-fundus-images spurious-correlations
Last synced: 5 months ago

Repository

Basic Info
Statistics
  • Stars: 1
  • Watchers: 2
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
causality deep-learning disentanglement-learning generative-model retinal-fundus-images spurious-correlations
Created over 1 year ago · Last pushed 8 months ago
Metadata Files
Readme

README.md

Disentangling representations of retinal images with generative models

This repository contains the code to reproduce the results from the paper Disentangling representations of retinal images with generative models.

We present a novel population model for retinal fundus images that effectively disentangles patient attributes from camera effects with a disentanglement loss based on distance correlation. The resulting models enable controllable and highly realistic fundus image generation.
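
For intuition, here is a minimal sketch of the (biased) sample distance correlation in PyTorch. It illustrates the quantity the loss is based on; it is not the repository's exact implementation:

import torch

def distance_correlation(x, y, eps=1e-9):
    """Biased sample distance correlation between batches x: (n, p) and y: (n, q).

    Illustrative sketch only, not the repository's implementation.
    """
    a = torch.cdist(x, x)  # pairwise Euclidean distances within x
    b = torch.cdist(y, y)  # pairwise Euclidean distances within y
    # Double-center both distance matrices.
    A = a - a.mean(dim=0, keepdim=True) - a.mean(dim=1, keepdim=True) + a.mean()
    B = b - b.mean(dim=0, keepdim=True) - b.mean(dim=1, keepdim=True) + b.mean()
    dcov2 = (A * B).mean().clamp_min(0.0)    # squared distance covariance
    dvar2 = (A * A).mean() * (B * B).mean()  # product of squared distance variances
    return (dcov2 / (dvar2.sqrt() + eps)).sqrt()  # distance correlation in [0, 1]

A loss term penalizing the distance correlation between, for example, the patient and camera subspaces of the latent code pushes the two subspaces toward statistical independence.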

Installation

Set up a Python environment with Python version 3.9. Then download the repository, activate the environment, and install all other dependencies:

cd disentangling-retinal-images
pip install --editable .

This installs the code in src as an editable package and all the dependencies in requirements.txt.

Organization of the repo

  • configs: Configuration files for all experiments.
  • scripts: Bash scripts for model training, testing and evaluation.
  • src: Main source code to run the experiments.
    • dataset: PyTorch EyePACS dataset.
    • generative_model: PyTorch Lightning StyleGAN module.
    • evaluation: Model evaluation with kNN classifiers, image quality metrics (FID score), and swapped-subspace classification.
  • train.py: Model training script.
  • test.py: Model testing script.
  • predict.py: Image embedding prediction script (model inference).

EyePACS dataset

The EyePACS dataset can be accessed upon request: https://www.eyepacs.com/. Our dataset parser can be found in dataset/eyepacs_parsing.

For our EyePACS PyTorch dataset, you will need factorized metadata (with a categorical-columns mapping) and a directory containing your dataset splits. We therefore also share our scripts to factorize and split the dataset. In dataset/eyepacs_parsing we share our categorical-columns mapping as a reference.
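
For orientation, a minimal sketch of such a factorization with pandas; the file name and column names below are hypothetical, not taken from the repository:

import pandas as pd

# Hypothetical metadata file and categorical columns; adjust to your data.
meta = pd.read_csv("metadata.csv")

mapping = {}
for col in ["camera", "site"]:  # assumed categorical columns
    codes, categories = pd.factorize(meta[col])
    meta[col] = codes  # integer codes consumed by the dataset
    mapping[col] = dict(enumerate(categories))  # code -> original label

meta.to_csv("metadata_factorized.csv", index=False)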

Moreover, before factorizing and splitting the dataset, we pre-processed the retinal fundus images with: https://github.com/berenslab/funduscirclecropping.

Usage

Model training

For model training, run

python src/train.py -c ./configs/configs_train/test.yaml

Here we train the model with a test training configuration file. All model configuration files for reproducing the models of the paper can be found in the configs folder.
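
Since requirements.txt pins omegaconf, the configuration files are presumably loaded along these lines; the override key below is hypothetical:

from omegaconf import OmegaConf

cfg = OmegaConf.load("configs/configs_train/test.yaml")
print(OmegaConf.to_yaml(cfg))  # inspect the resolved configuration

# Hypothetical override, e.g. for a quick smoke test:
cfg.merge_with(OmegaConf.from_dotlist(["trainer.max_epochs=1"]))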

Model evaluation

To test the model, run

python src/test.py -d path/to/experiment/folder -c ./configs/configs_test/file.yaml

To predict the learned image embeddings for all dataset splits (train, val, test), execute the bash script

sh scripts/predict_embeddings.sh path/to/experiment/folder configs/configs_predict/default.yaml

with the arguments $1: path to the model experiment folder, and $2: configuration file for predict.py. Hint: you need to adjust PROJECT_DIR and python_path in the script.

kNN classifier performance

Evaluate the kNN classifier performance with the predicted embeddings for EyePACS:

sh scripts/knn_eval_embeddings.sh path/to/experiment/folder 4 12 16

with the arguments $1: path to the model experiment folder, and $2-$end: subspace dimensions. Here, we chose the subspace dimensions [age, camera, identity] = [4, 12, 16]. Hint: you need to adjust PROJECT_DIR and python_path in the script.
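
As a hedged illustration of what such an evaluation does, assuming the embedding matrix is laid out as consecutive [age | camera | identity] subspaces of sizes 4, 12, and 16, and that embeddings and labels were saved as .npy files (the file names are placeholders):

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

emb = np.load("train_embeddings.npy")        # shape (n_samples, 32), placeholder path
age_group = np.load("train_age_groups.npy")  # per-sample age labels, placeholder path

age_space = emb[:, :4]       # assumed age subspace
camera_space = emb[:, 4:16]  # assumed camera subspace

knn = KNeighborsClassifier(n_neighbors=15)
# Age should be predictable from its own subspace ...
print(cross_val_score(knn, age_space, age_group, cv=5).mean())
# ... and, if disentanglement worked, poorly from the camera subspace.
print(cross_val_score(knn, camera_space, age_group, cv=5).mean())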

Image quality

Compute image quality metrics (FID, KID):

python src/evaluation/eval_image_quality.py -d path/to/experiment/folder -c ./configs/configs_image_quality/default.yaml
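
requirements.txt pins clean-fid, so metrics of this kind can be computed as in the sketch below; the image folders are placeholders:

from cleanfid import fid

real_dir = "data/real_images"          # placeholder folder of real fundus images
fake_dir = "outputs/generated_images"  # placeholder folder of generated images

fid_score = fid.compute_fid(real_dir, fake_dir)  # Frechet inception distance
kid_score = fid.compute_kid(real_dir, fake_dir)  # kernel inception distance
print(f"FID: {fid_score:.2f}, KID: {kid_score:.4f}")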

Swapped subspaces

Train subspace classifiers on age subspaces:

python src/evaluation/swapped_subspaces/train_age_classification.py -d path/to/experiment/folder -c configs/configs_swapped_subspaces/train_age_classification.yaml

Test the trained classification model on swapped age subspaces:

python src/evaluation/swapped_subspaces/test_age_classification.py -d path/to/classification/model -c configs/configs_swapped_subspaces/test_age_classification.yaml
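
Conceptually, "swapping subspaces" exchanges one latent subspace between two embeddings before re-classifying. A toy sketch, assuming the [age | camera | identity] layout from above:

import numpy as np

def swap_age_subspace(emb_a, emb_b, age_dims=slice(0, 4)):
    """Exchange the (assumed) age subspace between two embedding vectors."""
    a, b = emb_a.copy(), emb_b.copy()
    a[age_dims], b[age_dims] = emb_b[age_dims], emb_a[age_dims]
    return a, b

# An age classifier trained on the original embeddings should now predict
# image B's age group from the swapped embedding of image A.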

Tips to train with a custom dataset

The StyleGAN model interface is dataset-agnostic. If you want to train our model on a different dataset, start by replacing our EyePACS dataset with your own and return an identically structured dictionary from the __getitem__ function, as in the sketch below.
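
As a rough template only: a minimal PyTorch dataset returning a dictionary per sample. The keys below ("image", "labels") are assumptions; mirror the dictionary actually returned by the repository's EyePACS dataset class.

from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class MyFundusDataset(Dataset):
    """Hypothetical drop-in replacement for the EyePACS dataset."""

    def __init__(self, image_dir, labels):
        self.paths = sorted(Path(image_dir).glob("*.png"))
        self.labels = labels  # e.g. a sequence of per-image attribute tensors
        self.transform = transforms.Compose(
            [transforms.Resize(256), transforms.CenterCrop(256), transforms.ToTensor()]
        )

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        image = self.transform(Image.open(self.paths[idx]).convert("RGB"))
        # Keys are assumptions -- match the structure the training code expects.
        return {"image": image, "labels": self.labels[idx]}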

Model weights

The model weights of our trained generative models from the paper can be found on Zenodo.

Credits

We used a StyleGAN2-ADA PyTorch Lightning implementation as a starting point for our experiments: https://github.com/nihalsid/stylegan2-ada-lightning. Starting from this repository, we extended the GAN architecture with GAN inversion and independent subspace learning (subspace classifiers and a distance correlation loss).

Cite

If you find our code or paper useful, please consider citing this work:

@article{mueller2025disentangling,
  title   = {Disentangling representations of retinal images with generative models},
  author  = {M\"uller, Sarah and Koch, Lisa M. and Lensch, Hendrik P. A. and Berens, Philipp},
  journal = {Medical Image Analysis},
  volume  = {105},
  pages   = {103628},
  year    = {2025},
  issn    = {1361-8415},
  doi     = {10.1016/j.media.2025.103628},
  url     = {https://www.sciencedirect.com/science/article/pii/S1361841525001756},
}

Owner

  • Name: Berens Lab @ University of Tübingen
  • Login: berenslab
  • Kind: organization
  • Email: philipp.berens@uni-tuebingen.de
  • Location: Tübingen, Germany

Department of Data Science at the Hertie Institute for AI in Brain Health, University of Tübingen

GitHub Events

Total
  • Watch event: 1
  • Push event: 6
  • Create event: 1
Last Year
  • Watch event: 1
  • Push event: 6
  • Create event: 1

Dependencies

requirements.txt pypi
  • Pillow ==10.0.0
  • PyYAML ==6.0.1
  • black ==23.7.0
  • clean-fid ==0.1.35
  • isort ==5.12.0
  • mypy ==1.5.0
  • numpy ==1.25.2
  • omegaconf ==2.3.0
  • openTSNE ==1.0.0
  • pandas ==2.0.3
  • pip ==24.2
  • pytorch-lightning ==2.0.6
  • scikit-learn ==1.3.0
  • scipy ==1.11.1
  • seaborn ==0.12.2
  • tensorboard ==2.14.0
  • torch ==2.0.1
  • torch-ema ==0.3
  • torchmetrics ==1.0.3
  • torchvision ==0.15.2
  • tqdm ==4.66.1
setup.py pypi