voxelmorph

Unsupervised Learning for Image Registration

https://github.com/voxelmorph/voxelmorph

Science Score: 46.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org
  • Committers with academic emails
    11 of 20 committers (55.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (12.9%) to scientific vocabulary

Keywords

deep-learning diffeomorphism image-alignment image-registration machine-learning optical-flow probabilistic unsupervised-learning
Last synced: 6 months ago

Repository

Unsupervised Learning for Image Registration

Basic Info
  • Host: GitHub
  • Owner: voxelmorph
  • License: apache-2.0
  • Language: Python
  • Default Branch: dev
  • Homepage:
  • Size: 127 MB
Statistics
  • Stars: 2,530
  • Watchers: 52
  • Forks: 611
  • Open Issues: 146
  • Releases: 0
Topics
deep-learning diffeomorphism image-alignment image-registration machine-learning optical-flow probabilistic unsupervised-learning
Created almost 8 years ago · Last pushed 6 months ago
Metadata Files
Readme License Citation

README.md

VoxelMorph: learning-based image registration

VoxelMorph is a general-purpose library of learning-based tools for image alignment/registration and, more generally, for modelling with deformations.

Tutorial

We have several VoxelMorph tutorials:
  • the main VoxelMorph tutorial, which explains VoxelMorph and learning-based registration
  • a deformable SynthMorph demo showing how to train a registration model without data
  • an affine SynthMorph demo on learning anatomy-aware and acquisition-agnostic affine registration
  • a CT-to-MRI SynthMorph demo clipping the Hounsfield scale for multi-modal registration
  • a SynthMorph shapes demo that walks through the steps of running a trained 3D shapes model
  • a tutorial on training VoxelMorph on OASIS data, which we processed and released for free for HyperMorph
  • an additional small tutorial on warping annotations together with images
  • another tutorial on template (atlas) construction with VoxelMorph
  • a tutorial on visualizing a warp as a warped grid
  • a tutorial on inverting warps that are not diffeomorphisms

Instructions

To use the VoxelMorph library, either clone this repository and install the requirements listed in setup.py or install directly with pip.

pip install voxelmorph

Pre-trained models

See list of pre-trained models available here.

Training

If you would like to train your own model, you will likely need to customize some of the data-loading code in voxelmorph/generators.py for your own datasets and data formats. However, it is possible to run many of the example scripts out-of-the-box, assuming that you provide a list of filenames in the training dataset. Training data can be in the NIfTI, MGZ, or npz (numpy) format, and it's assumed that each npz file in your data list has a vol parameter, which points to the image data to be registered, and an optional seg variable, which points to a corresponding discrete segmentation (for semi-supervised learning). It's also assumed that the shape of all training image data is consistent, but this, of course, can be handled in a customized generator if desired.
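If your data are not yet in this layout, a rough conversion sketch follows (an illustration only, not part of the library; it assumes nibabel is installed, and the file names are hypothetical):

# Sketch: pack a NIfTI scan and its segmentation into the npz layout the
# example generators expect (a 'vol' array plus an optional 'seg' array).
# File names below are hypothetical.
import numpy as np
import nibabel as nib

vol = nib.load('subject01.nii.gz').get_fdata().astype('float32')
seg = nib.load('subject01_seg.nii.gz').get_fdata().astype('int16')

# intensities are typically min-max normalized to [0, 1] before training
vol = (vol - vol.min()) / (vol.max() - vol.min())

np.savez_compressed('subject01.npz', vol=vol, seg=seg)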

For a given image list file /images/list.txt and output directory /models/output, the following script will train an image-to-image registration network (described in MICCAI 2018 by default) with an unsupervised loss. Model weights will be saved to a path specified by the --model-dir flag.

./scripts/tf/train.py --img-list /images/list.txt --model-dir /models/output --gpu 0

The --img-prefix and --img-suffix flags can be used to provide a consistent prefix or suffix to each path specified in the image list. Image-to-atlas registration can be enabled by providing an atlas file, e.g. --atlas atlas.npz. If you'd like to train using the original dense CVPR network (no diffeomorphism), use the --int-steps 0 flag to specify no flow integration steps. Use the --help flag to inspect all of the command line options that can be used to fine-tune network architecture and training.

Registration

If you simply want to register two images, you can use the register.py script with the desired model file. For example, if we have a model model.h5 trained to register a subject (moving) to an atlas (fixed), we could run:

./scripts/tf/register.py --moving moving.nii.gz --fixed atlas.nii.gz --moved warped.nii.gz --model model.h5 --gpu 0

This will save the moved image to warped.nii.gz. To also save the predicted deformation field, use the --save-warp flag. Both npz and NIfTI files can be used as input/output in this script.

Testing (measuring Dice scores)

To test the quality of a model by computing Dice overlap between an atlas segmentation and warped test-scan segmentations, run:

./scripts/tf/test.py --model model.h5 --atlas atlas.npz --scans scan01.npz scan02.npz scan03.npz --labels labels.npz

Just like the training data, the atlas and test npz files include vol and seg parameters, and the labels.npz file contains a list of the anatomical labels to include in the computed Dice score.
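For reference, a minimal sketch of the Dice overlap being measured (an illustration only, not the test.py implementation; the labels.npz key name below is an assumption):

# Illustrative Dice overlap between two segmentations over a set of labels.
import numpy as np

def dice(seg1, seg2, labels):
    scores = []
    for label in labels:
        a, b = (seg1 == label), (seg2 == label)
        denom = a.sum() + b.sum()
        scores.append(2.0 * np.logical_and(a, b).sum() / denom if denom else np.nan)
    return np.array(scores)

atlas_seg = np.load('atlas.npz')['seg']
warped_seg = np.load('warped_scan.npz')['seg']   # hypothetical warped test segmentation
labels = np.load('labels.npz')['labels']         # key name assumed
print(dice(atlas_seg, warped_seg, labels).mean())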

Parameter choices

CVPR version

For the CC loss function, we found a regularization parameter of 1 to work best. For the MSE loss function, we found 0.01 to work best.
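These values enter the training objective as the weight on the smoothness (gradient) term relative to the image-matching term. A hedged sketch of the pairing, assuming the TensorFlow backend's vxm.losses module (see scripts/tf/train.py for the authoritative wiring):

# Hedged sketch of image loss + regularization weight for the CVPR setup.
import voxelmorph as vxm

losses = [vxm.losses.MSE().loss, vxm.losses.Grad('l2').loss]
weights = [1, 0.01]   # with the NCC/CC image loss, a weight of 1 works better
# model.compile(optimizer='Adam', loss=losses, loss_weights=weights)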

MICCAI version

For our data, we found image_sigma=0.01 and prior_lambda=25 to work best.

In the original MICCAI code, the parameters were applied after the scaling of the velocity field. In the newest code, this has been "fixed", and the different default parameters reflect the change. We recommend running the updated code. However, if you'd like to run the very original MICCAI 2018 mode, please use the xy indexing and use_miccai_int network options with the MICCAI 2018 parameters.

Spatial transforms and integration

  • The spatial transform code, found at voxelmorph.layers.SpatialTransformer, accepts N-dimensional affine and dense transforms, including linear and nearest neighbor interpolation options. Note that original development of VoxelMorph used xy indexing, whereas we are now emphasizing ij indexing.

  • For the MICCAI2018 version, we integrate the velocity field using voxelmorph.layers.VecInt. By default we integrate using scaling and squaring, which we found efficient.
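A hedged sketch of how these layers compose, assuming the TensorFlow backend and channels-last tensors (an illustration, not a prescribed pipeline; shapes and values are examples only):

# Integrate a stationary velocity field and warp an image with the result.
import numpy as np
import voxelmorph as vxm

vol_shape = (32, 32, 32)
moving = np.random.rand(1, *vol_shape, 1).astype('float32')   # image to warp
velocity = np.zeros((1, *vol_shape, 3), dtype='float32')      # stationary velocity field

# scaling-and-squaring integration of the velocity field into a dense flow
flow = vxm.layers.VecInt(method='ss', int_steps=7)(velocity)

# warp the moving image with the dense flow (linear interpolation, ij indexing)
moved = vxm.layers.SpatialTransformer(interp_method='linear', indexing='ij')([moving, flow])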

VoxelMorph papers

If you use VoxelMorph or some part of the code, please cite the relevant papers (see the bibtex entries in the repository).

Notes

  • keywords: machine learning, convolutional neural networks, alignment, mapping, registration
  • data in papers: In our initial papers, we used publicly available data, but unfortunately we cannot redistribute it (due to the constraints of those datasets). We do a certain amount of pre-processing on the brain images we work with, to eliminate sources of variation and to compare algorithms on a level playing field. In particular, we perform the FreeSurfer recon-all steps up to skull stripping and affine normalization to Talairach space, and crop the images via ((48, 48), (31, 33), (3, 29)); a rough cropping sketch follows below.
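A hedged reading of that crop specification (purely illustrative; it assumes each pair gives the number of voxels trimmed from the start and end of one axis of a 256³ FreeSurfer-conformed volume, yielding 160 × 192 × 224):

# Illustrative crop following the ((48, 48), (31, 33), (3, 29)) convention assumed above.
import numpy as np

crop = ((48, 48), (31, 33), (3, 29))
vol = np.zeros((256, 256, 256), dtype='float32')   # placeholder conformed volume

slices = tuple(slice(lo, dim - hi) for (lo, hi), dim in zip(crop, vol.shape))
cropped = vol[slices]
print(cropped.shape)   # (160, 192, 224)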

We encourage users to download and process their own data. See a list of medical imaging datasets here. Note that you likely do not need to perform all of the preprocessing steps, and indeed VoxelMorph has been used in other work with other data.

Creation of deformable templates

To experiment with this method, please use train_template.py for unconditional templates and train_cond_template.py for conditional templates, which use the same conventions as VoxelMorph (please note that these files are less polished than the rest of the VoxelMorph library).

We've also provided an unconditional atlas in data/generated_uncond_atlas.npz.npy.

Model weights in h5 format are provided for the unconditional atlas here, and for the conditional atlas here.

Explore the atlases interactively here with tipiX!

SynthMorph

SynthMorph is a strategy for learning registration without acquired imaging data, producing powerful networks that are agnostic to MRI contrast (eprint arXiv:2004.10282). For a video and a demo showcasing the steps of generating random label maps from noise distributions and using these to train a network, visit synthmorph.voxelmorph.net.

We provide model files for a "shapes" variant of SynthMorph, which we train using images synthesized from random shapes only, and a "brains" variant, which we train using images synthesized from brain label maps. We train the brains variant by optimizing a loss term that measures volume overlap of a selection of brain labels. For registration with either model, please use the register.py script with the respective model weights.

Accurate registration requires the input images to be min-max normalized, such that voxel intensities range from 0 to 1, and to be resampled in the affine space of a reference image. The affine registration can be performed with a variety of packages; we choose FreeSurfer. First, we skull-strip the images with SAMSEG, keeping brain labels only. Second, we run mri_robust_register:

mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale
mri_robust_register --mov in.nii.gz --dst out.nii.gz --lta transform.lta --satit --iscale --ixform transform.lta --affine

where we replace --satit --iscale with --cost NMI for registration across MRI contrasts.

Data

While we cannot release most of the data used in the VoxelMorph papers, as those datasets prohibit redistribution, we thoroughly processed and re-released OASIS1 while developing HyperMorph. We now include a quick VoxelMorph tutorial for training VoxelMorph on the neurite-oasis data.

Contact

For any code-related problems or questions, please open an issue; for general registration/VoxelMorph topics, please start a discussion.

Owner

  • Login: voxelmorph
  • Kind: user

dev team for voxelmorph - learning based image registration

GitHub Events

Total
  • Issues event: 17
  • Watch event: 217
  • Issue comment event: 28
  • Push event: 31
  • Pull request event: 18
  • Fork event: 32
  • Create event: 1
Last Year
  • Issues event: 17
  • Watch event: 217
  • Issue comment event: 28
  • Push event: 31
  • Pull request event: 18
  • Fork event: 32
  • Create event: 1

Committers

Last synced: 9 months ago

All Time
  • Total Commits: 543
  • Total Committers: 20
  • Avg Commits per committer: 27.15
  • Development Distribution Score (DDS): 0.622
Past Year
  • Commits: 13
  • Committers: 1
  • Avg Commits per committer: 13.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
adalca a****a@m****u 205
Andrew Hoopes a****1@g****m 177
Malte Hoffmann m****n@m****u 79
Danielle Pace d****e@m****u 27
Bruce Fischl f****l@n****u 25
voxelmorph 3****h 6
Steffen Czolbe s****b@g****m 3
Adrian Dalca a****a@m****u 3
Adrian Dalca a****a@t****u 3
Guha Balakrishnan b****g@v****u 3
Neel Dey 4****y 2
adalca a****2@t****u 2
Avnish Kumar a****s 1
Danny 3****6 1
Katie Lewis k****s@m****u 1
balakg b****g@m****u 1
mariannerakic 3****c 1
raisingbits 3****s 1
Paul Weinmann w****n@l****e 1
wycfutures 5****s 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 145
  • Total pull requests: 69
  • Average time to close issues: about 1 month
  • Average time to close pull requests: 27 days
  • Total issue authors: 111
  • Total pull request authors: 22
  • Average comments per issue: 2.72
  • Average comments per pull request: 0.19
  • Merged pull requests: 42
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 12
  • Pull requests: 16
  • Average time to close issues: 4 days
  • Average time to close pull requests: about 9 hours
  • Issue authors: 12
  • Pull request authors: 4
  • Average comments per issue: 0.83
  • Average comments per pull request: 0.06
  • Merged pull requests: 7
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • Paluck1Arora2 (14)
  • dfvr1994 (3)
  • domadaaaa (3)
  • annareithmeir (3)
  • lyqqhgzj (2)
  • burrusa (2)
  • keepgooooing (2)
  • Fjr9516 (2)
  • flealq (2)
  • zoumaxingjiulic (2)
  • MaybeRichard (2)
  • RuiqiGeng (2)
  • dyhan316 (2)
  • xiaogengen-007 (2)
  • bhatrana (2)
Pull Request Authors
  • mu40 (46)
  • ywyz233 (4)
  • neel-dey (3)
  • hy23-lfb (3)
  • MaybeRichard (2)
  • aviziskind (2)
  • lxy-quantum (2)
  • raisingbits (1)
  • mariannerakic (1)
  • MengjinDong (1)
  • ajinkya-kulkarni (1)
  • mabulnaga (1)
  • kvttt (1)
  • Khoa-NT (1)
  • layjain (1)
Top Labels
Issue Labels
voxelmorph (19) pytorch (2) hypermorph (2) lung (1)
Pull Request Labels

Packages

  • Total packages: 1
  • Total downloads:
    • pypi 6,496 last-month
  • Total docker downloads: 923
  • Total dependent packages: 0
  • Total dependent repositories: 6
  • Total versions: 2
  • Total maintainers: 1
pypi.org: voxelmorph

Image Registration with Convolutional Networks

  • Versions: 2
  • Dependent Packages: 0
  • Dependent Repositories: 6
  • Downloads: 6,496 Last month
  • Docker Downloads: 923
Rankings
Stargazers count: 1.6%
Docker downloads count: 1.7%
Forks count: 2.2%
Average: 4.4%
Downloads: 4.8%
Dependent repos count: 6.0%
Dependent packages count: 10.1%
Maintainers (1)
Last synced: 6 months ago

Dependencies

setup.py pypi
  • h5py *
  • neurite >=0.2
  • nibabel *
  • numpy *
  • packaging *
  • scikit-image *
  • scipy *
voxelmorph.egg-info/requires.txt pypi
  • h5py *
  • neurite >=0.2
  • nibabel *
  • numpy *
  • packaging *
  • scikit-image *
  • scipy *