LabelProp

LabelProp: A semi-automatic segmentation tool for 3D medical images - Published in JOSS (2025)

https://github.com/nathandecaux/labelprop

Science Score: 93.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
    Found 1 DOI reference(s) in JOSS metadata
  • Academic publication links
    Links to: sciencedirect.com, joss.theoj.org
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
    Published in Journal of Open Source Software

Scientific Fields

  • Computer Science - 84% confidence
  • Artificial Intelligence and Machine Learning (Computer Science) - 60% confidence
Last synced: 4 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: nathandecaux
  • License: cc-by-4.0
  • Language: Python
  • Default Branch: main
  • Size: 9.55 MB
Statistics
  • Stars: 5
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 8
Created almost 4 years ago · Last pushed 7 months ago
Metadata Files
Readme License

README.md

LabelProp: A semi-automatic segmentation tool for 3D medical images

[Badges: ReadTheDocs status · License · PyPI · Python version]

3D semi-automatic segmentation using deep registration-based 2D label propagation


Check the napari-labelprop plugin for use in the napari viewer. See also the napari-labelprop-remote plugin for remote computing.



About

See the paper "Semi-automatic muscle segmentation in MR images using deep registration-based label propagation":

[Paper] [PDF] [GUI]

Installation

Using pip

pip install deep-labelprop

or to get the development version (recommended):

pip install git+https://github.com/nathandecaux/labelprop.git

Usage

Data

Labelprop operates semi-automatically in an intra-subject mode, so it can be used with a single scan. The scan must be a 3D grayscale intensity volume of shape ```(H, W, L)```. Manual annotations must be supplied as a ```uint8``` volume of the same size, where each voxel value corresponds to a label class (```0``` for background). Most MRI scans are isotropic in one plane only, due to slice thickness; manual annotations must be provided in that isotropic plane, and propagation is performed along the third dimension (specified with ```z_axis```). Free-form scribbles (hints) can also be supplied, which lets the user annotate parts of a slice without fully delineating it. Hints can be drawn in any plane, and even outside the top and bottom annotated slices, enabling the propagation to be extrapolated. The hints file must have the same type and size as the manual annotations file, with the same class/label correspondences; to mark a hint as belonging to the background class, set its voxels to ```255```. Pretrained weights can be downloaded [here](https://raw.githubusercontent.com/nathandecaux/napari-labelprop/main/pretrained.ckpt).
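To make these array conventions concrete, here is a small sketch building a scan, a sparse annotation volume, and a hints volume with NumPy. The shapes, labels, and scribble positions are hypothetical; in practice the arrays would be loaded from NIfTI files with nibabel.

```python
import numpy as np

# Hypothetical scan dimensions (H, W, L).
H, W, L = 64, 64, 32
scan = np.random.rand(H, W, L).astype(np.float32)  # grayscale intensity volume

# Sparse manual annotations: uint8, same shape, 0 = background.
manual = np.zeros((H, W, L), dtype=np.uint8)
manual[20:40, 20:40, 5] = 1   # label 1 fully delineated on slice z=5
manual[20:40, 20:40, 25] = 1  # ...and on slice z=25

# Hints: same dtype, shape, and label meanings, except that 255
# marks a background scribble.
hints = np.zeros((H, W, L), dtype=np.uint8)
hints[30, 25:35, 15] = 1    # a scribble for label 1 on an unannotated slice
hints[10, 10:20, 15] = 255  # a background scribble on the same slice

assert scan.shape == manual.shape == hints.shape
assert manual.dtype == hints.dtype == np.uint8
```

Propagation would then run along ```z_axis=2```, constrained by the fully annotated slices at z=5 and z=25 and guided by the scribbles in between.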

Basic Usage

Let's consider a scan ```scan.nii.gz```, a corresponding segmentation file with 3 annotated slices ```manual_annotations.nii.gz```, and a few freehand annotations in ```hints.nii.gz```:

![Typical propagation setup](propagation.jpg)

Training and propagation can be done for this single scan as follows:

```python
import nibabel as ni
from labelprop.napari_entry import train_and_infer

scan = ni.load('scan.nii.gz').get_fdata()  # Numpy array of dimension (H,W,L)
manual_annotations = ni.load('manual_annotations.nii.gz').get_fdata()  # Numpy array of dimension (H,W,L) and dtype uint8
hints = ni.load('hints.nii.gz').get_fdata()  # Numpy array of dimension (H,W,L) and dtype uint8

# Train and propagate
propagations = train_and_infer(
    img=scan,
    mask=manual_annotations,
    pretrained_ckpt='pretrained.ckpt',
    shape=256,          # Size of input images for training.
    max_epochs=100,
    z_axis=2,           # Propagation axis.
    output_dir='path/to/savedir',
    name='nameofcheckpoint',
    pretraining=False,  # If True, will pretrain the model without using manual_annotations.
    hints=hints,        # Optional hints for the propagation. Numpy array of dimension (H,W,L) and dtype uint8
)
propagation_up = propagations[0]    # Propagation from the bottom to the top.
propagation_down = propagations[1]  # Propagation from the top to the bottom.
fused_propagated_annotations = propagations[2]  # Fusion of propagation_up and propagation_down.

# Save results
ni.save(
    ni.Nifti1Image(fused_propagated_annotations, ni.load('scan.nii.gz').affine),
    'propagated_fused.nii.gz',
)
```
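The fused output is an ordinary label volume, so it can be inspected with standard NumPy tools. Below is a small sketch computing per-label voxel counts; the array contents are hypothetical stand-ins for a result that would normally be loaded from ```propagated_fused.nii.gz``` with nibabel.

```python
import numpy as np

# Hypothetical fused propagation result (uint8 label volume, 0 = background).
fused = np.zeros((64, 64, 32), dtype=np.uint8)
fused[20:40, 20:40, 5:26] = 1  # label 1 propagated across slices 5..25

# Per-label voxel counts.
labels, counts = np.unique(fused, return_counts=True)
volumes = dict(zip(labels.tolist(), counts.tolist()))
print(volumes)
```

Multiplying each count by the voxel volume from the NIfTI header would give physical volumes per structure.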

CLI

Basic operations can be done using the command-line interface provided in ```labelprop.py``` at the root of the project.

Pretraining

Although Labelprop works on a single scan, it is preferable to pretrain the model on a dataset, with or without manual annotations.

Self-supervised

To pretrain the model without using any manual annotations:

```
$ labelprop pretrain --help
Usage: labelprop.py pretrain [OPTIONS] IMG_LIST

  Pretrain the model on a list of images. The images are assumed to be
  greyscale nifti files. IMG_LIST is a text file containing line-separated
  paths to the images.

Options:
  -s, --shape INTEGER         Image size (default: 256)
  -z, --z_axis INTEGER        Axis along which to propagate (default: 2)
  -o, --output_dir DIRECTORY  Output directory for checkpoint
  -n, --name TEXT             Checkpoint name (default: datetime)
  -e, --max_epochs INTEGER
```

In this case, the model simply learns to register successive slices with each other, without any anatomical constraints on propagation.

With annotations

Now, to train the model with sparse manual annotations:

```
$ labelprop train-dataset --help
Usage: labelprop train-dataset [OPTIONS] IMG_MASK_LIST

  Train the model on a full dataset. The images are assumed to be greyscale
  nifti files. IMG_MASK_LIST is a text file containing line-separated paths
  to greyscale images and comma-separated paths to the associated masks.

Options:
  -c FILE                     Path to the pretrained checkpoint (.ckpt)
  -s, --shape INTEGER         Image size (default: 256)
  -z, --z_axis INTEGER        Axis along which to propagate (default: 2)
  -o, --output_dir DIRECTORY  Output directory for checkpoint
  -n, --name TEXT             Checkpoint name (default: datetime)
  -e, --max_epochs INTEGER
  --help                      Show this message and exit.
```

Training

```
$ labelprop train --help
Usage: labelprop.py train [OPTIONS] IMG_PATH MASK_PATH

  Train a model and save the checkpoint and predicted masks. IMG_PATH is a
  greyscale nifti (.nii.gz or .nii) image, while MASK_PATH is its related
  sparse segmentation.

Options:
  -h, --hints FILE            Path to the hints image (.nii.gz)
  -s, --shape INTEGER         Image size (default: 256)
  -c, --pretrained_ckpt FILE  Path to the pretrained checkpoint (.ckpt)
  -e, --max_epochs INTEGER
  -z, --z_axis INTEGER        Axis along which to propagate (default: 2)
  -o, --output_dir DIRECTORY  Output directory for checkpoint and predicted
                              masks
  -n, --name TEXT             Prefix for the output files (checkpoint and
                              masks)
```

Propagating (inference)

```
$ labelprop propagate --help
Usage: labelprop.py propagate [OPTIONS] IMG_PATH MASK_PATH CHECKPOINT

  Propagate labels from sparse segmentation. IMG_PATH is a greyscale nifti
  (.nii.gz or .nii) image, while MASK_PATH is its related sparse
  segmentation. CHECKPOINT is the path to the checkpoint (.ckpt) file.

Options:
  -h, --hints FILE            Path to the hints image (.nii.gz)
  -s, --shape INTEGER         Image size (default: 256)
  -z, --z_axis INTEGER        Axis along which to propagate (default: 2)
  -l, --label INTEGER         Label to propagate (default: 0 = all)
  -o, --output_dir DIRECTORY  Output directory for predicted masks (up, down
                              and fused)
  -n, --name TEXT             Prefix for the output files (masks)
```
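The ```IMG_LIST``` and ```IMG_MASK_LIST``` arguments above are plain text files. As an illustration, here is a sketch that generates both from a list of image/mask pairs; the dataset paths are hypothetical and should be replaced with your own.

```python
from pathlib import Path

# Hypothetical dataset layout; adjust the paths to your own data.
pairs = [
    ("data/subj01/scan.nii.gz", "data/subj01/mask.nii.gz"),
    ("data/subj02/scan.nii.gz", "data/subj02/mask.nii.gz"),
]

# IMG_LIST for `labelprop pretrain`: one image path per line.
Path("img_list.txt").write_text("\n".join(img for img, _ in pairs) + "\n")

# IMG_MASK_LIST for `labelprop train-dataset`: image,mask on each line.
Path("img_mask_list.txt").write_text(
    "\n".join(f"{img},{mask}" for img, mask in pairs) + "\n"
)
```

The resulting files can then be passed directly as the positional arguments of the corresponding commands.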

GUI

See this [repo](https://github.com/nathandecaux/napari-labelprop) to use labelprop's main functions in Napari (cf. the GIF in the About section). See also [napari-labelprop-remote](https://github.com/nathandecaux/napari-labelprop-remote) to run labelprop in a separate process, locally or remotely, which uses the [API](https://github.com/nathandecaux/labelprop/blob/master/labelprop/api.py).

How to contribute

Anyone wishing to contribute to Labelprop is invited to read the documentation here, then open a pull request or an issue. Contributions concerning the graphical interface, with napari or otherwise, are also very welcome; see the napari_entry documentation or the API.

Owner

  • Login: nathandecaux
  • Kind: user

JOSS Publication

LabelProp: A semi-automatic segmentation tool for 3D medical images
Published
May 30, 2025
Volume 10, Issue 109, Page 6284
Authors
Nathan Decaux ORCID
LaTIM UMR 1101, Inserm, Brest, France, IMT Atlantique, Brest, France
Pierre-Henri Conze ORCID
LaTIM UMR 1101, Inserm, Brest, France, IMT Atlantique, Brest, France
Juliette Ropars ORCID
LaTIM UMR 1101, Inserm, Brest, France, University Hospital of Brest, Brest, France
Xinyan He
IMT Atlantique, Brest, France
Frances T. Sheehan
Rehabilitation Medicine, NIH, Bethesda, USA
Christelle Pons ORCID
LaTIM UMR 1101, Inserm, Brest, France, University Hospital of Brest, Brest, France, Fondation ILDYS, Brest, France
Douraied Ben Salem ORCID
LaTIM UMR 1101, Inserm, Brest, France, University Hospital of Brest, Brest, France
Sylvain Brochard ORCID
LaTIM UMR 1101, Inserm, Brest, France, University Hospital of Brest, Brest, France
François Rousseau ORCID
LaTIM UMR 1101, Inserm, Brest, France, IMT Atlantique, Brest, France
Editor
Johanna Bayer ORCID
Tags
segmentation deep learning medical images musculoskeletal

GitHub Events

Total
  • Release event: 4
  • Delete event: 2
  • Push event: 4
  • Create event: 4
Last Year
  • Release event: 4
  • Delete event: 2
  • Push event: 4
  • Create event: 4

Committers

Last synced: almost 3 years ago

All Time
  • Total Commits: 56
  • Total Committers: 4
  • Avg Commits per committer: 14.0
  • Development Distribution Score (DDS): 0.393
Top Committers
  • Nathan Decaux (n****x@i****r): 34 commits
  • nathandecaux (5****x@u****m): 14 commits
  • nathandecaux (n****x@h****r): 4 commits
  • Nathan Decaux (n****x@i****r): 4 commits
Committer Domains (Top 20 + Academic)

Issues and Pull Requests

Last synced: over 1 year ago

All Time
  • Total issues: 3
  • Total pull requests: 1
  • Average time to close issues: 17 days
  • Average time to close pull requests: less than a minute
  • Total issue authors: 2
  • Total pull request authors: 1
  • Average comments per issue: 0.67
  • Average comments per pull request: 0.0
  • Merged pull requests: 1
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 2
  • Pull requests: 0
  • Average time to close issues: 13 days
  • Average time to close pull requests: N/A
  • Issue authors: 1
  • Pull request authors: 0
  • Average comments per issue: 0.5
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • animikhaich (2)
  • goanpeca (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 21 last-month
  • Total dependent packages: 1
  • Total dependent repositories: 1
  • Total versions: 8
  • Total maintainers: 1
pypi.org: deep-labelprop

Label propagation using deep registration

  • Versions: 8
  • Dependent Packages: 1
  • Dependent Repositories: 1
  • Downloads: 21 Last month
Rankings
Dependent packages count: 4.8%
Downloads: 19.8%
Dependent repos count: 21.5%
Average: 21.6%
Forks count: 29.8%
Stargazers count: 31.9%
Maintainers (1)
Last synced: 4 months ago

Dependencies

.github/workflows/python-publish.yml actions
  • actions/checkout v3 composite
  • actions/setup-python v3 composite
  • pypa/gh-action-pypi-publish 27b31702a0e7fc50959f5ad993c78deac1bdfc29 composite
requirements.txt pypi
  • Flask ==2.1.0
  • MedPy ==0.4.0
  • kornia ==0.6.5
  • monai ==0.8.1
  • nibabel ==3.2.1
  • numpy ==1.20.3
  • plotext ==4.2.0
  • pytorch_lightning ==1.6.3
  • setuptools ==59.5.0
  • torchio ==0.18.47
setup.py pypi
.github/workflows/draft-pdf.yml actions
  • actions/checkout v3 composite
  • actions/upload-artifact v1 composite
  • openjournals/openjournals-draft-action master composite