yaib

🧪Yet Another ICU Benchmark: a holistic framework for the standardization of clinical prediction model experiments. Provide custom datasets, cohorts, prediction tasks, endpoints, preprocessing, and models. Paper: https://arxiv.org/abs/2306.05109

https://github.com/rvandewater/yaib

Science Score: 77.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 2 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org, ieee.org
  • Committers with academic emails
    5 of 16 committers (31.3%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.2%) to scientific vocabulary

Keywords

amsterdamumcdb benchmark clinical-data clinical-ml deep-learning ehr eicu-crd framework hirid-dataset icu machine-learning mimic-iii mimic-iv patient-monitoring time-series

Keywords from Contributors

interactive mesh interpretability profiles sequences generic projection standardization optim embedded
Last synced: 4 months ago

Repository

🧪Yet Another ICU Benchmark: a holistic framework for the standardization of clinical prediction model experiments. Provide custom datasets, cohorts, prediction tasks, endpoints, preprocessing, and models. Paper: https://arxiv.org/abs/2306.05109

Basic Info
Statistics
  • Stars: 74
  • Watchers: 4
  • Forks: 21
  • Open Issues: 13
  • Releases: 4
Topics
amsterdamumcdb benchmark clinical-data clinical-ml deep-learning ehr eicu-crd framework hirid-dataset icu machine-learning mimic-iii mimic-iv patient-monitoring time-series
Created over 3 years ago · Last pushed 4 months ago
Metadata Files
Readme Contributing License Citation

README.md

YAIB logo

🧪 Yet Another ICU Benchmark

[Badges: CI, Black, Platform, arXiv, PyPI version, Python, PyTorch Lightning, License]

Yet another ICU benchmark (YAIB) provides a framework for doing clinical machine learning experiments on Intensive Care Unit (ICU) EHR data.

We support the following datasets out of the box:

| Dataset                  | MIMIC-III / IV | eICU-CRD  | HiRID         | AUMCdb         |
|--------------------------|----------------|-----------|---------------|----------------|
| Admissions               | 40k / 73k      | 200k      | 33k           | 23k            |
| Version                  | v1.4 / v2.2    | v2.0      | v1.1.1        | v1.0.2         |
| Frequency (time-series)  | 1 hour         | 5 minutes | 2 / 5 minutes | up to 1 minute |
| Originally published     | 2015 / 2020    | 2017      | 2020          | 2019           |
| Origin                   | USA            | USA       | Switzerland   | Netherlands    |

New datasets can also be added; we are currently working on a package to make this process as smooth as possible. The benchmark is designed to operate on preprocessed parquet files.
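
As an illustration only, the following minimal sketch inspects such a preprocessed parquet file with pandas; the file name dyn.parquet and the printed columns are hypothetical examples and are not prescribed by YAIB.

```
# Minimal sketch (hypothetical file name): peek at a preprocessed parquet cohort file.
import pandas as pd

# YAIB operates on preprocessed parquet files, e.g. a table of dynamic
# (time-series) features per cohort.
dyn = pd.read_parquet("demo_data/mortality24/mimic_demo/dyn.parquet")

print(dyn.shape)                 # rows are (stay, time step) observations
print(list(dyn.columns)[:10])    # clinical concepts extracted during preprocessing
print(dyn.head())
```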

We provide five common tasks for clinical prediction by default:

| No | Task                      | Frequency                 | Type                  |
|----|---------------------------|---------------------------|-----------------------|
| 1  | ICU Mortality             | Once per Stay (after 24H) | Binary Classification |
| 2  | Acute Kidney Injury (AKI) | Hourly (within 6H)        | Binary Classification |
| 3  | Sepsis                    | Hourly (within 6H)        | Binary Classification |
| 4  | Kidney Function (KF)      | Once per stay             | Regression            |
| 5  | Length of Stay (LoS)      | Hourly (within 7D)        | Regression            |

New tasks can be easily added. To get started right away, we include the eICU and MIMIC-III demo datasets in our repository.

Related repositories may be of interest as well. For all YAIB-related repositories, please see: https://github.com/stars/rvandewater/lists/yaib.

📄Paper

To reproduce the benchmarks in our paper, we refer to the ML reproducibility document. If you use this code in your research, please cite the following publication:

```
@inproceedings{vandewaterYetAnotherICUBenchmark2024,
  title = {Yet Another ICU Benchmark: A Flexible Multi-Center Framework for Clinical ML},
  shorttitle = {Yet Another ICU Benchmark},
  booktitle = {The Twelfth International Conference on Learning Representations},
  author = {van de Water, Robin and Schmidt, Hendrik Nils Aurel and Elbers, Paul and Thoral, Patrick and Arnrich, Bert and Rockenschaub, Patrick},
  year = {2024},
  month = oct,
  urldate = {2024-02-19},
  langid = {english},
}
```

The paper can also be found on arXiv: https://arxiv.org/abs/2306.05109.

💿Installation

YAIB is currently best installed from source; however, we also offer an early PyPI release.

Installation from source

First, we clone this repository using git:

git clone https://github.com/rvandewater/YAIB.git

Please note which branch you are on; the newest features and fixes are available on the development branch:

git checkout development

YAIB can be installed using a conda environment (preferred) or pip. Below are the three CLI commands to install YAIB using conda.

The first command will install an environment based on Python 3.10.

conda env update -f environment.yml

Use environment.yml on x86 hardware. Please note that this installs PyTorch as well.

For MPS (Apple Silicon), one needs to comment out pytorch-cuda; see the PyTorch install guide.

We then activate the environment and install a package called icu-benchmarks, after which YAIB should be operational.

conda activate yaib
pip install -e .


After installation, please check whether your PyTorch version works with CUDA (if available) to ensure the best performance. YAIB automatically lists the available processors in its log files at initialization.
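
As a quick sanity check, a short snippet like the following (a sketch, not part of YAIB) can confirm that the installed PyTorch build sees your accelerator:

```
# Sketch: verify that the installed PyTorch build can use an accelerator.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available: ", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:   ", torch.cuda.get_device_name(0))
# On Apple Silicon, Metal Performance Shaders (MPS) can be used instead of CUDA.
print("MPS available:  ", torch.backends.mps.is_available())
```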

👩‍💻Usage

Please refer to our wiki for detailed information on how to use YAIB.

Quickstart 🚀 (demo data)

The authors of MIMIC-III and eICU have made small demo datasets available to demonstrate their use. They can be found on PhysioNet: MIMIC-III Clinical Database Demo and eICU Collaborative Research Database Demo. These datasets are published under the Open Data Commons Open Database License v1.0 and can be used without a credentialing procedure. We have created demo cohorts, processed solely from these datasets, for each of our currently supported task endpoints. To the best of our knowledge, this complies with the license and the respective dataset authors' instructions. Use of the task cohorts and the datasets is permitted only under the above license. We strongly recommend completing human-subjects research training to ensure that you handle human-subjects research data properly.

In the folder demo_data, we provide processed, publicly available demo datasets from eICU and MIMIC with the necessary labels for Mortality at 24h, Sepsis, Acute Kidney Injury, Kidney Function, and Length of Stay.

If you do not yet have access to the ICU datasets, you can run the following command to train models for the included demo cohorts:

wandb sweep --verbose experiments/demo_benchmark_classification.yml
wandb sweep --verbose experiments/demo_benchmark_regression.yml

wandb agent <sweep_id>

Tip: You can run each of the configurations on a SLURM cluster instance with wandb agent --count 1 <sweep_id>.

Note: You will need to have a wandb account and be logged in to run the above commands.

Getting the datasets

HiRID, eICU, and MIMIC-IV can be accessed through PhysioNet; a guide to this process can be found here. AUMCdb can be accessed through a separate access procedure. We are not involved in these access procedures and cannot respond to requests for data access.

Cohort creation

Since the datasets were created independently of each other, they do not share the same data structure or data identifiers. To make them interoperable, use the preprocessing utilities provided by the ricu package. ricu predefines a large number of clinical concepts and specifies how to load them from a given dataset, providing the common interface to the data that is used in this benchmark. Please refer to our cohort definition code for generating the cohorts using our Python interface for ricu. Once you have gained access to the datasets, you can run the benchmark.

👟 Running YAIB

Preprocessing and Training

The following command will run training and evaluation on the MIMIC demo dataset for (binary) mortality prediction at 24h with the LGBMClassifier. The minimum number of child samples is reduced because of the small amount of training data. We generate a preprocessing cache and load existing cache files if available.

icu-benchmarks \
    -d demo_data/mortality24/mimic_demo \
    -n mimic_demo \
    -t BinaryClassification \
    -tn Mortality24 \
    -m LGBMClassifier \
    -hp LGBMClassifier.min_child_samples=10 \
    --generate_cache \
    --load_cache \
    --seed 2222 \
    -l ../yaib_logs/ \
    --tune

For a list of available flags, run icu-benchmarks train -h.

Run with PYTORCH_ENABLE_MPS_FALLBACK=1 on Macs with Metal Performance Shaders.

For Windows-based systems, the line-continuation character (\) needs to be replaced by (^) (Command Prompt) or (`) (PowerShell), respectively.
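
If you want to repeat the demo run for several seeds, a small wrapper script can shell out to the CLI with the same flags as above. This is only an illustrative sketch; it assumes the demo cohort path and flags shown in this README and runs the seeds sequentially.

```
# Illustrative sketch: run the demo mortality benchmark for several seeds
# by invoking the icu-benchmarks CLI shown above.
import subprocess

for seed in (1111, 2222, 3333):
    subprocess.run(
        [
            "icu-benchmarks",
            "-d", "demo_data/mortality24/mimic_demo",
            "-n", "mimic_demo",
            "-t", "BinaryClassification",
            "-tn", "Mortality24",
            "-m", "LGBMClassifier",
            "-hp", "LGBMClassifier.min_child_samples=10",
            "--generate_cache",
            "--load_cache",
            "--seed", str(seed),
            "-l", "../yaib_logs/",
        ],
        check=True,  # raise if a run fails
    )
```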

Alternatively, the easiest method to train all the models in the paper is to run these commands from the directory root:

wandb sweep --verbose experiments/benchmark_classification.yml
wandb sweep --verbose experiments/benchmark_regression.yml

This will create two WandB hyperparameter sweeps, one for the classification tasks and one for the regression tasks, covering all the models in the paper. You can then run the following command to train the models:

wandb agent <sweep_id>

Tip: You can run each of the configurations on a SLURM cluster instance with wandb agent --count 1 <sweep_id>.

Note: You will need to have a wandb account and be logged in to run the above commands.

Evaluate or Finetune

It is possible to evaluate a model trained on another dataset without doing any additional training. In the example below, the source dataset is the MIMIC demo data and the target is the eICU demo:

icu-benchmarks \
    --eval \
    -d demo_data/mortality24/eicu_demo \
    -n eicu_demo \
    -t BinaryClassification \
    -tn Mortality24 \
    -m LGBMClassifier \
    --generate_cache \
    --load_cache \
    -s 2222 \
    -l ../yaib_logs \
    -sn mimic \
    --source-dir ../yaib_logs/mimic_demo/Mortality24/LGBMClassifier/2022-12-12T15-24-46/repetition_0/fold_0

A similar syntax is used for finetuning, where a model is loaded and then retrained. To run finetuning, replace --eval with -ft.
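
Because each run is written to a timestamped log directory (as in the --source-dir path above), a small helper can pick the latest run automatically. This is a sketch that assumes the log layout shown in the example command (logs/<dataset>/<task>/<model>/<timestamp>/repetition_*/fold_*):

```
# Sketch: find the most recent trained-model directory to pass to --source-dir,
# assuming the log layout shown in the example above.
from pathlib import Path

runs = Path("../yaib_logs/mimic_demo/Mortality24/LGBMClassifier")
# Timestamps like 2022-12-12T15-24-46 sort lexicographically, so max() is the latest run.
latest = max((p for p in runs.iterdir() if p.is_dir()), key=lambda p: p.name)
source_dir = latest / "repetition_0" / "fold_0"
print("Pass this to --source-dir:", source_dir)
```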

Models

We provide several machine learning models that are commonly used for multivariate time-series data. PyTorch is used for the deep learning models, LightGBM for the boosted-tree approaches, and scikit-learn for the other classical machine learning models. The benchmark provides a number of built-in models from these families.
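
For reference, here is a self-contained example of the boosted-tree model family (LightGBM) that the benchmark wraps, trained on synthetic data. This is not YAIB code; the feature matrix and label are stand-ins for the features and outcomes YAIB extracts.

```
# Not YAIB code: a minimal LightGBM baseline on synthetic data, illustrating
# the kind of boosted-tree classifier the benchmark wraps.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))           # stand-in for extracted ICU features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for a binary outcome label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LGBMClassifier(min_child_samples=10).fit(X_train, y_train)
print("AUROC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```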

🛠️ Development

To adapt YAIB to your own use case, you can use the development information page as a reference. We appreciate contributions to the project. Please read the contribution guidelines before submitting a pull request.

Acknowledgements

This project has been developed partially under the funding of the “Gemeinsamer Bundesausschuss (G-BA) Innovationsausschuss” in the framework of “CASSANDRA - Clinical ASSist AND aleRt Algorithms” (project number 01VSF20015). We would like to acknowledge the work of Alisher Turubayev, Anna Shopova, Fabian Lange, Mahmut Kamalak, Paul Mattes, and Victoria Ayvasky for adding PyTorch Lightning and Weights and Biases compatibility, as well as several optional imputation methods, to a later version of the benchmark repository.

We do not own any of the datasets used in this benchmark. This project uses heavily adapted components of the HiRID benchmark. We thank the authors for providing this codebase and encourage further development to benefit the scientific community. The demo datasets have been released under an Open Data Commons Open Database License (ODbL).

License

This source code is released under the MIT license, included here. We do not own any of the datasets used or included in this repository.

Owner

  • Name: Robin van de Water
  • Login: rvandewater
  • Kind: user
  • Location: Berlin
  • Company: Hasso Plattner Institute

PhD student in Medical Event Prediction at Hasso Plattner Institute in collaboration with the Charité hospital (Berlin)

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: Yet Another ICU Benchmark (YAIB)
message: >-
  If you use this software, please cite the research paper and optionally the software itself using the
  metadata from this file.
type: software
authors:
  - given-names: Robin
    family-names: van de Water
    email: Robin.VandeWater@hpi.de
    affiliation: Hasso Plattner Institute
    orcid: 'https://orcid.org/0000-0002-2895-4872'
  - given-names: Hendrik
    family-names: Schmidt
    email: Hendrik.Schmidt@student.hpi.uni-potsdam.de
    affiliation: Hasso Plattner Institute
  - given-names: Patrick
    family-names: Rockenschaub
    email: patrick.rockenschaub@gmail.com
    affiliation: Fraunhofer IKS
    orcid: 'https://orcid.org/0000-0002-6499-7933'
identifiers:
  - type: doi
    value: 10.48550/arXiv.2306.05109
    description: Research paper
repository-code: 'https://github.com/rvandewater/YAIB'
url: 'https://github.com/rvandewater/YAIB/wiki'
keywords:
  - machine-learning
  - ehr
  - icu
  - mimic-iii
  - eicu-crd
  - clinical-data
  - mimic-iv
  - hirid-dataset
  - amsterdamumcdb
license: MIT

GitHub Events

Total
  • Create event: 4
  • Release event: 1
  • Issues event: 8
  • Watch event: 22
  • Delete event: 5
  • Issue comment event: 14
  • Push event: 80
  • Gollum event: 1
  • Pull request review event: 50
  • Pull request review comment event: 121
  • Pull request event: 15
  • Fork event: 10
Last Year
  • Create event: 4
  • Release event: 1
  • Issues event: 8
  • Watch event: 22
  • Delete event: 5
  • Issue comment event: 14
  • Push event: 80
  • Gollum event: 1
  • Pull request review event: 50
  • Pull request review comment event: 121
  • Pull request event: 15
  • Fork event: 10

Committers

Last synced: 8 months ago

All Time
  • Total Commits: 1,243
  • Total Committers: 16
  • Avg Commits per committer: 77.688
  • Development Distribution Score (DDS): 0.626
Past Year
  • Commits: 110
  • Committers: 2
  • Avg Commits per committer: 55.0
  • Development Distribution Score (DDS): 0.064
Top Committers
Name Email Commits
rvandewater r****r@g****m 465
Hendrik Schmidt h****s@g****m 447
paul.mattes p****s@s****e 133
Hugo Yeche h****e@g****m 49
prockenschaub r****k@g****m 45
Matthias Hüser m****r@i****h 28
fabian d****2@i****m 23
Margarita Kuznetsova m****a@i****h 13
xinruilyu x****u@o****m 11
anna.shopova a****a@s****e 7
dependabot[bot] 4****] 7
Alisher Turubayev a****v@g****m 6
Victoria V****y@s****e 4
Marc Zimmermann m****n@i****h 3
Malte Londschien m****e@l****e 1
youssef mecky y****y@p****n 1

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 78
  • Total pull requests: 104
  • Average time to close issues: about 2 months
  • Average time to close pull requests: 5 days
  • Total issue authors: 10
  • Total pull request authors: 8
  • Average comments per issue: 0.97
  • Average comments per pull request: 0.96
  • Merged pull requests: 93
  • Bot issues: 0
  • Bot pull requests: 12
Past Year
  • Issues: 7
  • Pull requests: 20
  • Average time to close issues: 4 days
  • Average time to close pull requests: about 18 hours
  • Issue authors: 4
  • Pull request authors: 3
  • Average comments per issue: 1.29
  • Average comments per pull request: 1.0
  • Merged pull requests: 15
  • Bot issues: 0
  • Bot pull requests: 6
Top Authors
Issue Authors
  • HendrikSchmidt (37)
  • rvandewater (24)
  • prockenschaub (10)
  • youssefmecky96 (1)
  • mlondschien (1)
  • njtp111 (1)
  • leoleoasd (1)
  • Addison-Weatherhead (1)
  • mahmoudibrahim98 (1)
  • Daphne-yjh (1)
Pull Request Authors
  • rvandewater (55)
  • HendrikSchmidt (20)
  • dependabot[bot] (12)
  • prockenschaub (10)
  • Snagnar (3)
  • unartig (2)
  • mlondschien (1)
  • youssefmecky96 (1)
Top Labels
Issue Labels
enhancement (14) bug (4) documentation (4)
Pull Request Labels
dependencies (12) python (2)

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 17 last-month
  • Total dependent packages: 0
  • Total dependent repositories: 0
  • Total versions: 2
  • Total maintainers: 1
pypi.org: yaib

Yet Another ICU Benchmark is a holistic framework for the automation of the development of clinical prediction models on ICU data. Users can create custom datasets, cohorts, prediction tasks, endpoints, and models.

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 17 Last month
Rankings
Dependent packages count: 7.5%
Stargazers count: 16.1%
Forks count: 19.3%
Average: 36.8%
Dependent repos count: 69.6%
Downloads: 71.4%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/ci.yml actions
  • actions/checkout v3 composite
  • conda-incubator/setup-miniconda v2 composite
setup.py pypi
environment.yml conda
  • black 23.3.0.*
  • coverage 7.2.3.*
  • einops 0.6.1.*
  • flake8 5.0.4.*
  • gin-config 0.5.0.*
  • hydra-core 1.3.*
  • ignite 0.4.11.*
  • lightgbm 3.3.5.*
  • matplotlib 3.7.1.*
  • numpy 1.24.3.*
  • pandas 2.0.0.*
  • pip 23.1.*
  • pyarrow 11.0.0.*
  • pytest 7.3.1.*
  • python 3.10.*
  • pytorch 2.0.1.*
  • pytorch-cuda 11.8.*
  • pytorch-lightning 2.0.3.*
  • scikit-learn 1.2.2.*
  • tensorboard 2.12.2.*
  • torchmetrics 1.0.3.*
  • tqdm 4.64.1.*
  • wandb 0.15.4.*