rrwnet

Official repository of the paper "RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification", published in Expert Systems with Applications (Dec 2024).

https://github.com/j-morano/rrwnet

Science Score: 67.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 14 DOI reference(s) in README
  • Academic publication links
    Links to: arxiv.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.1%) to scientific vocabulary

Keywords

artery-vein classification deep-learning fundus medical-image-analysis medical-imaging ophthalmology pytorch segmentation
Last synced: 4 months ago

Repository

Official repository of the paper "RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification", published in Expert Systems with Applications (Dec 2024).

Basic Info
Statistics
  • Stars: 35
  • Watchers: 2
  • Forks: 2
  • Open Issues: 0
  • Releases: 2
Topics
artery-vein classification deep-learning fundus medical-image-analysis medical-imaging ophthalmology pytorch segmentation
Created about 2 years ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

arXiv · DOI · HF · License: MIT

RRWNet

Usage · Weights · Training and Evaluation · arXiv · ESwA · Citation

This is the official repository of the paper "RRWNet: Recursive Refinement Network for Effective Retinal Artery/Vein Segmentation and Classification", by José Morano, Guilherme Aresta, and Hrvoje Bogunović, published in Expert Systems with Applications (2024).

Highlights

  • Human-level, state-of-the-art performance on retinal artery/vein segmentation and classification.
    • Evaluated on three public datasets: RITE, LES-AV, and HRF.
  • Novel recursive framework for solving manifest errors in semantic segmentation maps.
    • First framework to combine module stacking and recursive refinement approaches.
  • Stand-alone recursive refinement module for post-processing artery/vein segmentation maps.

Overview

(Graphical abstract figure.)

Previous work

This approach builds on our previous work presented in the paper "Simultaneous segmentation and classification of the retinal arteries and veins from color fundus images", published in Artificial Intelligence in Medicine (2021).

Basic usage

The models can be easily used via the code in model.py, loading the weights from Hugging Face 🤗. The only requirement is to have the torch, huggingface_hub, and safetensors packages installed.

```python
from huggingface_hub import PyTorchModelHubMixin
from model import RRWNet as RRWNetModel

class RRWNet(RRWNetModel, PyTorchModelHubMixin):
    def __init__(self, input_ch=3, output_ch=3, base_ch=64, iterations=5):
        super().__init__(input_ch, output_ch, base_ch, iterations)

model = RRWNet.from_pretrained("j-morano/rrwnet-rite")
# or "j-morano/rrwnet-hrf" for the HRF dataset
```
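Once loaded, the model can be applied to a fundus image tensor. The snippet below is only a minimal sketch of the calling pattern, using a dummy stand-in network (`DummyNet` is illustrative, not part of the repository); check model.py for the real input and output format:

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the RRWNet model loaded from Hugging Face;
# it only mimics the assumed interface (3-channel image in, 3-channel
# artery/vein/vessel score map out).
class DummyNet(nn.Module):
    def forward(self, x):
        return torch.zeros(x.shape[0], 3, x.shape[2], x.shape[3])

model = DummyNet()
model.eval()

# A 1x3xHxW float tensor in [0, 1] stands in for a preprocessed fundus image.
x = torch.rand(1, 3, 256, 256)
with torch.no_grad():
    out = model(x)

# If the network returns raw scores, map them to [0, 1] probabilities.
probs = torch.sigmoid(out)
print(probs.shape)  # torch.Size([1, 3, 256, 256])
```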

Weights and predictions

The weights of the proposed RRWNet model as well as the predictions for the different datasets can be found at the following links:

The model trained on the RITE dataset was trained at the original image resolution, while the model trained on HRF used images resized to a width of 1024 pixels. The weights for the RITE dataset are named rrwnet_RITE_1.pth, and the weights for the HRF dataset are named rrwnet_HRF_0.pth. Note that, when running predictions with these weights, the input images should match the resolution used for training.

Data format

Our code always expects the images to be RGB images with pixel values in the range [0, 255] and the masks to be RGB images with the following segmentation maps in each channel:

  • 🔴 Red: Arteries
  • 🟢 Green: Veins
  • 🔵 Blue: Vessels (union of arteries and veins)

The masks should be binary images, with each channel taking values 0 or 255. The predictions are saved in the same format as the masks.
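To illustrate the expected format, a mask can be assembled channel by channel with NumPy (the binary maps below are synthetic placeholders, not real annotations):

```python
import numpy as np

# Hypothetical binary maps (True where the structure is present).
h, w = 64, 64
arteries = np.zeros((h, w), dtype=bool)
veins = np.zeros((h, w), dtype=bool)
arteries[10:20, :] = True
veins[40:50, :] = True

# Assemble the RGB mask: red = arteries, green = veins,
# blue = vessels (union of arteries and veins), values in {0, 255}.
mask = np.zeros((h, w, 3), dtype=np.uint8)
mask[..., 0] = arteries * 255
mask[..., 1] = veins * 255
mask[..., 2] = (arteries | veins) * 255

print(mask[15, 0])  # [255   0 255] -> artery pixel (also a vessel)
print(mask[45, 0])  # [  0 255 255] -> vein pixel (also a vessel)
```

An image library such as imageio or Pillow can then save `mask` as a standard RGB image.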

Setting up the environment

For the paper, the code was run with Python 3.10.10; it was later also tested with Python 3.12.8. In general, with the specified requirements, it is expected to work with any Python version <3.13. Just make sure to install the packages listed in requirements.txt. If you want to use the exact Python version from the paper, it can be installed with pyenv as shown in the next section; otherwise, you can skip to the Requirements section.

Installing Python 3.10.10 using pyenv

> **📌 IMPORTANT**: The following steps are only necessary if you want to install Python 3.10.10 using `pyenv`.

Install `pyenv`:

```sh
curl https://pyenv.run | bash
```

Install `clang`, e.g.:

```sh
sudo dnf install clang
```

Install Python 3.10.10:

```sh
CC=clang pyenv install -v 3.10.10
```

Create and activate a Python environment:

```sh
~/.pyenv/versions/3.10.10/bin/python3 -m venv venv/
source venv/bin/activate  # bash
. venv/bin/activate.fish  # fish
```

Update `pip` if necessary:

```sh
pip install --upgrade pip
```

> **💡 TIP**: To install Python 3.12.8 instead, replace `3.10.10` with `3.12.8` in the commands above.

Requirements

Create and activate a Python environment:

```sh
python -m venv venv/
source venv/bin/activate  # bash
. venv/bin/activate.fish  # fish
```

Install the requirements using requirements.txt:

```sh
pip3 install -r requirements.txt
```

Preprocessing

You can preprocess the images offline using the preprocessing.py script. The script will enhance the images and masks and save them in the specified directory. This preprocessing step is required to use our trained models or to reproduce the results of the paper. However, it is still possible to train the models without preprocessing the images, or using your own offline preprocessing method.

```bash
python3 preprocessing.py --images-path data/images/ --masks-path data/masks/ --save-path data/enhanced
```

Get predictions

To get predictions using the provided weights, run the get_predictions.py script. The script will save the predictions in the specified directory. If the images were not previously preprocessed, you can use the --preprocess flag to preprocess the images on the fly.

```bash
python3 get_predictions.py --weights rrwnet_RITE_1.pth --images-path data/images/ --masks-path data/masks/ --save-path predictions/ --preprocess
```

Refine existing predictions

You can refine existing predictions (e.g., from a different model) using the same get_predictions.py script. The script will save the refined predictions in the specified directory. Just make sure to provide the path to the predictions and the weights to be used for refinement. Also, remember to use the --refine flag and to omit the --preprocess flag.

```bash
python3 get_predictions.py --weights rrwnet_RITE_refinement.pth --images-path data/U-Net_predictions/ --masks-path data/masks/ --save-path refined_predictions/ --refine
```

Training and Evaluation

All training code can be found in the train/ directory. The training script is train.py, and the configuration file, with all the hyperparameters and command line arguments, is config.py. Please follow the instructions in train/README.md to train the model. The train/ directory also contains the code to get the predictions of the model on the test set, which are then used for the evaluation.

All evaluation code can be found in the eval/ directory. Please follow the instructions in eval/README.md.

Contact

If you have any questions or problems with the code or the paper, please do not hesitate to open an issue in this repository (preferred) or contact me at jose.moranosanchez@meduniwien.ac.at.

Citation

If you use this code, the weights, the preprocessed data, or the predictions in your research, we would greatly appreciate it if you starred the repo and cited our work:

```bibtex
@article{morano2024rrwnet,
    title = {{RRWNet}: Recursive Refinement Network for effective retinal artery/vein segmentation and classification},
    author = {Morano, Jos{\'e} and Aresta, Guilherme and Bogunovi{\'c}, Hrvoje},
    journal = {Expert Systems with Applications},
    volume = {256},
    pages = {124970},
    year = {2024},
    issn = {0957-4174},
    doi = {10.1016/j.eswa.2024.124970},
}
```

Also, if you use any of the public datasets used in this work, please cite the corresponding papers:

  • RITE
    • Images: Staal, Joes, et al. "Ridge-based vessel segmentation in color images of the retina." IEEE transactions on medical imaging 23.4 (2004): 501-509.
    • Annotations: Hu, Qiao, Michael D. Abràmoff, and Mona K. Garvin. "Automated separation of binary overlapping trees in low-contrast color retinal images." Medical Image Computing and Computer-Assisted Intervention–MICCAI 2013: 16th International Conference, Nagoya, Japan, September 22-26, 2013, Proceedings, Part II 16. Springer Berlin Heidelberg, 2013.
  • LES-AV
    • Images and annotations: Orlando, José Ignacio, et al. "Towards a glaucoma risk index based on simulated hemodynamics from fundus images." Medical Image Computing and Computer Assisted Intervention–MICCAI 2018: 21st International Conference, Granada, Spain, September 16-20, 2018, Proceedings, Part II 11. Springer International Publishing, 2018.
  • HRF
    • Images: Budai, Attila, et al. "Robust vessel segmentation in fundus images." International journal of biomedical imaging 2013.1 (2013): 154860.
    • Annotations: Chen, Wenting, et al. "TW-GAN: Topology and width aware GAN for retinal artery/vein classification." Medical Image Analysis 77 (2022): 102340.

Owner

  • Name: José Morano
  • Login: j-morano
  • Kind: user
  • Location: Vienna, Austria
  • Company: Medical University of Vienna

PhD student in medical image computing.

Citation (CITATION.bib)

@article{morano2024rrwnet,
    title = {{RRWNet}: Recursive Refinement Network for effective retinal artery/vein segmentation and classification},
    author={Morano, Jos{\'e} and Aresta, Guilherme and Bogunovi{\'c}, Hrvoje},
    journal = {Expert Systems with Applications},
    volume = {256},
    pages = {124970},
    year = {2024},
    issn = {0957-4174},
    doi = {10.1016/j.eswa.2024.124970},
}

GitHub Events

Total
  • Create event: 2
  • Release event: 2
  • Issues event: 21
  • Watch event: 22
  • Issue comment event: 12
  • Push event: 18
  • Fork event: 3
Last Year
  • Create event: 2
  • Release event: 2
  • Issues event: 21
  • Watch event: 22
  • Issue comment event: 12
  • Push event: 18
  • Fork event: 3

Issues and Pull Requests

Last synced: 4 months ago

All Time
  • Total issues: 8
  • Total pull requests: 0
  • Average time to close issues: 3 days
  • Average time to close pull requests: N/A
  • Total issue authors: 3
  • Total pull request authors: 0
  • Average comments per issue: 0.63
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 8
  • Pull requests: 0
  • Average time to close issues: 3 days
  • Average time to close pull requests: N/A
  • Issue authors: 3
  • Pull request authors: 0
  • Average comments per issue: 0.63
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • TanXi789 (8)
  • NielsRogge (1)
  • DragonLong18 (1)
  • figcommon (1)
Pull Request Authors
Top Labels
Issue Labels
question (6) documentation (2) bug (1) enhancement (1)
Pull Request Labels

Dependencies

requirements.txt pypi
  • Jinja2 ==3.1.3
  • MarkupSafe ==2.1.5
  • certifi ==2024.2.2
  • charset-normalizer ==3.3.2
  • filelock ==3.13.1
  • fsspec ==2024.2.0
  • idna ==3.6
  • imageio ==2.33.1
  • lazy_loader ==0.3
  • mpmath ==1.3.0
  • networkx ==3.2.1
  • numpy ==1.26.4
  • nvidia-cublas-cu12 ==12.1.3.1
  • nvidia-cuda-cupti-cu12 ==12.1.105
  • nvidia-cuda-nvrtc-cu12 ==12.1.105
  • nvidia-cuda-runtime-cu12 ==12.1.105
  • nvidia-cudnn-cu12 ==8.9.2.26
  • nvidia-cufft-cu12 ==11.0.2.54
  • nvidia-curand-cu12 ==10.3.2.106
  • nvidia-cusolver-cu12 ==11.4.5.107
  • nvidia-cusparse-cu12 ==12.1.0.106
  • nvidia-nccl-cu12 ==2.19.3
  • nvidia-nvjitlink-cu12 ==12.3.101
  • nvidia-nvtx-cu12 ==12.1.105
  • packaging ==23.2
  • pillow ==10.2.0
  • requests ==2.31.0
  • scikit-image ==0.22.0
  • scipy ==1.12.0
  • sympy ==1.12
  • tifffile ==2024.1.30
  • torch ==2.2.0
  • torchvision ==0.17.0
  • triton ==2.2.0
  • typing_extensions ==4.9.0
  • urllib3 ==2.2.0