Volume Segmantics

Volume Segmantics: A Python Package for Semantic Segmentation of Volumetric Data Using Pre-trained PyTorch Deep Learning Models - Published in JOSS (2022)

https://github.com/diamondlightsource/volume-segmantics

Science Score: 98.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 18 DOI reference(s) in README and JOSS metadata
  • Academic publication links
    Links to: joss.theoj.org
  • Committers with academic emails
    1 of 1 committers (100.0%) from academic institutions
  • Institutional organization owner
    Organization diamondlightsource has institutional domain (www.diamond.ac.uk)
  • JOSS paper metadata
    Published in Journal of Open Source Software

Scientific Fields

Artificial Intelligence and Machine Learning (Computer Science) - 80% confidence
Last synced: 6 months ago

Repository

A toolkit for semantic segmentation of volumetric data using PyTorch deep learning models

Basic Info
  • Host: GitHub
  • Owner: DiamondLightSource
  • License: apache-2.0
  • Language: Python
  • Default Branch: main
  • Size: 15.6 MB
Statistics
  • Stars: 9
  • Watchers: 3
  • Forks: 8
  • Open Issues: 0
  • Releases: 11
Archived
Created over 3 years ago · Last pushed over 2 years ago
Metadata Files
Readme Contributing License Code of conduct

README.md

Development is moving

Development of this package is moving to the Rosalind Franklin Institute. A fork is now available at https://github.com/rosalindfranklininstitute/volume-segmantics.

Volume Segmantics

A toolkit for semantic segmentation of volumetric data using PyTorch deep learning models.

Volume Segmantics provides a simple command-line interface and API that allows researchers to quickly train a variety of 2D PyTorch segmentation models (e.g. U-Net, U-Net++, FPN, DeepLabV3+) on their 3D datasets. These models use pre-trained encoders, enabling fast training on small datasets. Subsequently, the library enables using these trained models to segment larger 3D datasets, automatically merging predictions made in orthogonal planes and rotations to reduce artifacts that may result from predicting 3D segmentation using a 2D network.

Given a 3D image volume and corresponding dense labels (the segmentation), a 2D model is trained on image slices taken along the x, y, and z axes. The method is optimised for small training datasets, e.g. a single volume of between $128^3$ and $512^3$ voxels. To achieve this, all models use pre-trained encoders, and image augmentations are used to expand the effective size of the training data.
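The slicing described above can be sketched in NumPy. This is an illustrative toy example, not the package's internal code: 2D training images are simply planes taken through the 3D volume along each of the three axes.

```python
import numpy as np

# A small stand-in volume; in practice this would be loaded from HDF5 or TIFF.
vol = np.zeros((128, 128, 128), dtype=np.float32)

# Slices along each axis give three stacks of 2D training images.
z_slices = [vol[i, :, :] for i in range(vol.shape[0])]  # (y, x) planes
y_slices = [vol[:, j, :] for j in range(vol.shape[1])]  # (z, x) planes
x_slices = [vol[:, :, k] for k in range(vol.shape[2])]  # (z, y) planes

print(len(z_slices) + len(y_slices) + len(x_slices))  # 384 candidate images
```

Augmentations (flips, rotations, intensity changes, etc.) are then applied to these slices during training to multiply the effective dataset size.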

This work builds on the capabilities of the excellent segmentation-models-pytorch library, in combination with image augmentations provided by Albumentations. The metrics and loss functions draw on the work done by Adrian Wolny in his pytorch-3dunet repository.

Requirements

A machine capable of running CUDA-enabled PyTorch (version 1.7.1 or greater) is required; in practice this means a reasonably modern NVIDIA GPU. The exact requirements differ by operating system. For example, on Windows you will need the Visual Studio Build Tools as well as the CUDA Toolkit installed; see the CUDA docs for more details.

Installation

The easiest way to install the package is to first create a new conda environment or virtualenv with Python (ideally >= 3.8) and pip, activate the environment, then run pip install volume-segmantics. If pip does not install a CUDA-enabled build of PyTorch, you can try pip install volume-segmantics --extra-index-url https://download.pytorch.org/whl; this issue seems particular to Windows.
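As a sketch, one possible conda-based install route might look like the following (the environment name volseg and the Python version are arbitrary choices, not requirements):

```shell
# Create and activate a fresh environment with Python and pip
conda create -n volseg python=3.9 pip
conda activate volseg

# Install the package from PyPI
pip install volume-segmantics

# If pip did not pull in a CUDA-enabled PyTorch build (mainly seen on Windows):
pip install volume-segmantics --extra-index-url https://download.pytorch.org/whl
```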

Configuration and command line use

After installation, two new commands are available from your terminal whilst your environment is activated: model-train-2d and model-predict-2d.

Both commands read settings from YAML files, which must be located in a directory named volseg-settings within the directory where you run the commands. The settings files can be copied from here.

The file 2d_model_train_settings.yaml can be edited to change training parameters such as the number of epochs, loss functions, evaluation metrics, and the model and encoder architectures. The file 2d_model_predict_settings.yaml can be edited to change parameters such as the prediction "quality". For example, "low" quality predicts the volume segmentation along a single axis (images in the (x, y) plane); "medium" quality predicts along all 3 axes; and "high" quality predicts in 12 directions (3 axes, 4 rotations). The per-direction predictions are then combined by maximum probability.
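The "combined by maximum probability" step can be sketched in NumPy as follows. This is a toy illustration of the idea with random probabilities, not the package's internal code:

```python
import numpy as np

rng = np.random.default_rng(42)
n_classes, shape = 3, (4, 4, 4)

# Per-class probability volumes from three prediction passes
# (e.g. slices taken along the z, y and x axes).
passes = [rng.random((n_classes, *shape)) for _ in range(3)]

# Merge by maximum probability: keep the highest probability seen for
# each class at each voxel, then label each voxel with its argmax class.
merged = np.maximum.reduce(passes)
labels = merged.argmax(axis=0)

print(labels.shape)  # (4, 4, 4)
```

Merging this way lets a confident prediction from any single direction win at each voxel, which helps suppress the slice-wise artifacts that a single-axis 2D prediction can produce.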

For training a 2d model on a 3d image volume and corresponding labels

Run the following command. Input files can be in HDF5 or multi-page TIFF format.

```shell
model-train-2d --data path/to/image/data.h5 --labels path/to/corresponding/segmentation/labels.h5
```

Paths to multiple data and label volumes can be added after the --data and --labels flags respectively. A model will be trained according to the settings defined in /volseg-settings/2d_model_train_settings.yaml and saved to your working directory. In addition, a figure showing "ground truth" segmentation vs model segmentation for some images in the validation set will be saved.

For 3d volume segmentation prediction using a 2d model

Run the following command. Input image files can be in HDF5 or multi-page TIFF format.

```shell
model-predict-2d path/to/model_file.pytorch path/to/data_for_prediction.h5
```

The input data will be segmented using the input model following the settings specified in volseg-settings/2d_model_predict_settings.yaml. An HDF5 file containing the segmented volume will be saved to your working directory.
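The resulting HDF5 file can be inspected with h5py. A small sketch follows; the dataset path "/data" is an assumption here (check your output file, e.g. with h5ls, if it differs), and a stand-in file is written first so the example is self-contained:

```python
import h5py
import numpy as np

# Write a stand-in segmentation file so the example runs end-to-end.
with h5py.File("example_segmentation.h5", "w") as f:
    f.create_dataset("/data", data=np.zeros((8, 8, 8), dtype=np.uint8))

# Read the segmented volume back as a NumPy array.
with h5py.File("example_segmentation.h5", "r") as f:
    seg = f["/data"][()]

print(seg.shape, seg.dtype)  # (8, 8, 8) uint8
```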

Tutorial using example data

A tutorial is available here that provides a walk-through of how to segment blood vessels from synchrotron X-ray micro-CT data collected on a sample of human placental tissue.

Currently supported model architectures and encoders

The model architectures which are currently available and tested are:

- U-Net
- U-Net++
- FPN
- DeepLabV3
- DeepLabV3+
- MA-Net
- LinkNet
- PAN

The pre-trained encoders that can be used with these architectures are:

- ResNet-34
- ResNet-50
- ResNeXt-50_32x4d
- EfficientNet-B3
- EfficientNet-B4
- ResNeSt-50d*
- ResNeSt-101e*

* Encoders marked with an asterisk are not compatible with PAN.

Using the API

You can use the functionality of the package in your own program via the API, which is documented here. This interface is used by SuRVoS2, a client/server GUI application that allows fast annotation and segmentation of volumetric data.

Contributing

We welcome contributions from the community. Please take a look at our contribution guidelines for more information.

Citation

If you use this package for your research, please cite:

King O.N.F, Bellos, D. and Basham, M. (2022). Volume Segmantics: A Python Package for Semantic Segmentation of Volumetric Data Using Pre-trained PyTorch Deep Learning Models. Journal of Open Source Software, 7(78), 4691. doi: 10.21105/joss.04691

```bibtex
@article{King2022,
  doi = {10.21105/joss.04691},
  url = {https://doi.org/10.21105/joss.04691},
  year = {2022},
  publisher = {The Open Journal},
  volume = {7},
  number = {78},
  pages = {4691},
  author = {Oliver N. F. King and Dimitrios Bellos and Mark Basham},
  title = {Volume Segmantics: A Python Package for Semantic Segmentation of Volumetric Data Using Pre-trained PyTorch Deep Learning Models},
  journal = {Journal of Open Source Software}
}
```

References

Albumentations

Buslaev, A., Iglovikov, V.I., Khvedchenya, E., Parinov, A., Druzhinin, M., and Kalinin, A.A. (2020). Albumentations: Fast and Flexible Image Augmentations. Information 11. https://doi.org/10.3390/info11020125

Segmentation Models PyTorch

Yakubovskiy, P. (2020). Segmentation Models PyTorch. GitHub repository.

PyTorch-3dUnet

Wolny, A., Cerrone, L., Vijayan, A., Tofanelli, R., Barro, A.V., Louveaux, M., Wenzl, C., Strauss, S., Wilson-Sánchez, D., Lymbouridou, R., et al. (2020). Accurate and versatile 3D segmentation of plant tissues at cellular resolution. ELife 9, e57613. https://doi.org/10.7554/eLife.57613

Owner

  • Name: Diamond Light Source
  • Login: DiamondLightSource
  • Kind: organization

JOSS Publication

Volume Segmantics: A Python Package for Semantic Segmentation of Volumetric Data Using Pre-trained PyTorch Deep Learning Models
Published
October 09, 2022
Volume 7, Issue 78, Page 4691
Authors
Oliver N. F. King
Diamond Light Source Ltd., Harwell Science and Innovation Campus, Didcot, Oxfordshire, UK
Dimitrios Bellos
Rosalind Franklin Institute, Harwell Science and Innovation Campus, Didcot, Oxfordshire, UK
Mark Basham
Diamond Light Source Ltd. and Rosalind Franklin Institute, Harwell Science and Innovation Campus, Didcot, Oxfordshire, UK
Editor
Øystein Sørensen
Tags
segmentation deep learning volumetric images pre-trained

Committers

Last synced: 7 months ago

All Time
  • Total Commits: 234
  • Total Committers: 1
  • Avg Commits per committer: 234.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
Olly King o****g@d****k 234

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 6
  • Total pull requests: 26
  • Average time to close issues: about 20 hours
  • Average time to close pull requests: about 15 hours
  • Total issue authors: 1
  • Total pull request authors: 2
  • Average comments per issue: 0.0
  • Average comments per pull request: 0.0
  • Merged pull requests: 23
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • osorensen (6)
Pull Request Authors
  • OllyK (24)
  • markbasham (2)

Packages

  • Total packages: 2
  • Total downloads (pypi): 67 last month
  • Total dependent packages: 0
    (may contain duplicates)
  • Total dependent repositories: 1
    (may contain duplicates)
  • Total versions: 37
  • Total maintainers: 2
pypi.org: volume-segmantics-vsui

A toolkit for semantic segmentation of volumetric data using PyTorch deep learning models

  • Versions: 26
  • Dependent Packages: 0
  • Dependent Repositories: 0
  • Downloads: 34 Last month
Rankings
Dependent packages count: 6.6%
Downloads: 13.3%
Forks count: 17.3%
Average: 17.5%
Stargazers count: 19.5%
Dependent repos count: 30.6%
Maintainers (1)
Last synced: 6 months ago
pypi.org: volume-segmantics

A toolkit for semantic segmentation of volumetric data using PyTorch deep learning models

  • Versions: 11
  • Dependent Packages: 0
  • Dependent Repositories: 1
  • Downloads: 33 Last month
Rankings
Dependent packages count: 10.1%
Forks count: 14.3%
Stargazers count: 17.7%
Average: 20.1%
Dependent repos count: 21.6%
Downloads: 36.7%
Maintainers (1)
Last synced: 6 months ago

Dependencies

.github/workflows/docs.yml actions
  • actions/cache v2 composite
  • actions/checkout v3 composite
  • actions/deploy-pages v1 composite
  • actions/setup-python v3 composite
  • actions/upload-artifact v3 composite
  • snok/install-poetry v1 composite
.github/workflows/release.yml actions
  • JRubics/poetry-publish v1.12 composite
  • actions/checkout v3 composite
  • actions/checkout v2 composite
  • actions/upload-artifact v1 composite
  • djn24/add-asset-to-release v1 composite
  • vimtor/action-zip v1 composite
.github/workflows/tests.yml actions
  • actions/cache v3 composite
  • actions/checkout v3 composite
  • actions/setup-python v4 composite
  • snok/install-poetry v1 composite
pyproject.toml pypi
  • black >=22.1.0 develop
  • pdoc >=10 develop
  • pylint >=2.4.0 develop
  • pytest >=6 develop
  • pytest-cov * develop
  • albumentations <=1.1.0
  • h5py ^3.0.0
  • imagecodecs *
  • matplotlib ^3.3.0
  • numpy ^1.18.0
  • python ^3.7
  • segmentation-models-pytorch ^0.2.1
  • termplotlib ^0.3.6
  • torch >=1.7.1, <1.13.0