qspace_deep_learning

Implementation codes for "Jointly estimating parametric maps of multiple diffusion models"

https://github.com/edibella/qspace_deep_learning

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
    2 of 2 committers (100.0%) from academic institutions
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.1%) to scientific vocabulary
Last synced: 7 months ago

Repository

Implementation codes for "Jointly estimating parametric maps of multiple diffusion models"

Basic Info
  • Host: GitHub
  • Owner: edibella
  • Language: Python
  • Default Branch: main
  • Size: 30.4 MB
Statistics
  • Stars: 10
  • Watchers: 1
  • Forks: 4
  • Open Issues: 0
  • Releases: 0
Created over 4 years ago · Last pushed about 4 years ago
Metadata Files
Readme Citation

README.md

Q-space Deep Learning

https://user-images.githubusercontent.com/1512443/147379103-f3d80ff6-dcac-440b-b829-2111cfb685ae.mov

This repository provides implementation codes for "Jointly estimating parametric maps of multiple diffusion models from undersampled q-space data: A comparison of three deep learning approaches."

Setting up a Python env and installing required packages

To run the Python codes provided in this repository, create a Python environment:

$ conda create -p /path_to_env/env_name python=3.x

and install the following packages:

$ conda install -p /path_to_env/env_name/ -c anaconda numpy
$ conda install -p /path_to_env/env_name/ -c conda-forge matplotlib
$ conda install -p /path_to_env/env_name/ -c conda-forge nibabel
$ conda install -p /path_to_env/env_name/ -c anaconda h5py
$ conda install -p /path_to_env/env_name/ -c conda-forge tqdm
$ conda install -p /path_to_env/env_name/ -c conda-forge argparse
$ conda install -p /path_to_env/env_name/ pytorch torchvision torchaudio cudatoolkit=x.x -c pytorch

where cudatoolkit=x.x in the last command depends on the installed CUDA version, e.g. 10.2 or 11.3. After these steps, activate your environment using:

$ conda activate /path_to_env/env_name
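
Once the environment is active, a quick sanity check from Python can confirm that the required packages are importable. This helper is not part of the repository; it is a small convenience sketch using only the standard library.

```python
import importlib

def check_packages(names):
    """Return a dict mapping each package name to its version string,
    or to None if the package cannot be imported."""
    found = {}
    for name in names:
        try:
            mod = importlib.import_module(name)
            found[name] = getattr(mod, "__version__", "unknown")
        except ImportError:
            found[name] = None
    return found

# Mirrors the conda install commands above (PyTorch imports as "torch").
required = ["numpy", "matplotlib", "nibabel", "h5py", "tqdm", "torch"]
status = check_packages(required)
for name, version in status.items():
    print(f"{name}: {version if version else 'MISSING'}")
```

Any line reporting MISSING indicates a package that still needs to be installed into the environment.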

Training and Testing the 1D-qDL network

[Block diagram: q-DL-new]

The 1D-qDL, as shown in the above block diagram, uses fully connected layers to jointly estimate parametric diffusion maps on a per-voxel basis. The implementation codes are under ./1D-qDL/. The training data are not provided in this repository, but one test dataset, trained models for three undersampling patterns, and the test results can be found at ./Data/.
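
For intuition, the per-voxel idea can be sketched in a few lines of NumPy: each voxel's undersampled q-space signal is a short vector that a stack of fully connected layers maps to the diffusion parameters. The layer sizes, the ReLU/dropout choices, and the 40-in/25-out dimensions below are illustrative assumptions, not the configuration used in this repository.

```python
import numpy as np

def mlp_forward(x, weights, biases, p_dropout=0.0, rng=None):
    """Forward pass of a per-voxel fully connected network.
    x has shape (n_voxels, n_qspace_samples); the output has shape
    (n_voxels, n_parameters). ReLU and optional dropout are applied
    between hidden layers; the last layer is linear."""
    h = x
    for i, (W, b) in enumerate(zip(weights, biases)):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)  # ReLU
            if p_dropout > 0 and rng is not None:
                h = h * (rng.random(h.shape) > p_dropout)
    return h

# Hypothetical sizes: 40 undersampled q-space samples in,
# 25 parametric-map values (DTI/DKI/NODDI/SMT) out.
rng = np.random.default_rng(0)
sizes = [40, 256, 256, 25]
weights = [rng.standard_normal((m, n)) * 0.1 for m, n in zip(sizes, sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

voxels = rng.standard_normal((8, 40))  # a batch of 8 voxels
maps = mlp_forward(voxels, weights, biases)
print(maps.shape)  # (8, 25): one parameter vector per voxel
```

In the actual scripts the weights are of course learned from training data rather than drawn at random; this sketch only shows the shape of the per-voxel computation.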

Training:

```
usage: python train_1d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                          [--numofchannels NUMOFCHANNELS] [--numoflayers NUMOFLAYERS]
                          [--numofhidden NUMOFHIDDEN] [--dropout DROPOUT]
                          [--epochs EPOCHS] [--lr LR] [--datapath DATAPATH]

Training 1D-qDL

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Training batch size
  --numofchannels NUMOFCHANNELS
                        Number of qDL input channels
  --numoflayers NUMOFLAYERS
                        Number of qDL layers
  --numofhidden NUMOFHIDDEN
                        Number of hidden nodes in qDL
  --dropout DROPOUT     Drop out probability in qDL
  --epochs EPOCHS       Number of training epochs
  --lr LR               Initial learning rate
  --datapath DATAPATH   Path to the data
```

Testing:

```
usage: python test1d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                        [--numofchannels NUMOFCHANNELS] [--numoflayers NUMOFLAYERS]
                        [--numofhidden NUMOFHIDDEN] [--dropout DROPOUT]
                        [--datapath DATAPATH] [--testcases TESTCASES]

1D-qDL testing

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Testing batch size
  --numofchannels NUMOFCHANNELS
                        Number of qDL input channels
  --numoflayers NUMOFLAYERS
                        Number of qDL layers
  --numofhidden NUMOFHIDDEN
                        Number of hidden nodes in qDL
  --dropout DROPOUT     Drop out probability in qDL
  --datapath DATAPATH   Path to the data
  --testcases TESTCASES
                        List of test subject IDs
```

Training and Testing the 2D-CNN network

[Block diagram: 2D-CNN-New]

The 2D-CNN uses convolutional blocks with residual connections to jointly estimate diffusion parametric maps on a per-slice basis. The implementation codes are under ./2D_CNN/.
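
The residual-connection idea can be illustrated with a minimal NumPy sketch (not the network's actual architecture): two "same"-padded 3×3 filters are applied to a single-channel slice, and the input is added back through a skip connection, so the block learns a correction to the identity. Real CNN layers operate on many channels with learned kernels.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2D cross-correlation (what CNN conv layers
    actually compute) of a single-channel image x with kernel k."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def residual_block(x, k1, k2):
    """conv -> ReLU -> conv, with the input added back (skip connection)."""
    h = np.maximum(conv2d_same(x, k1), 0.0)
    return x + conv2d_same(h, k2)

rng = np.random.default_rng(1)
slice_image = rng.standard_normal((16, 16))  # one image slice
k1 = rng.standard_normal((3, 3)) * 0.1
k2 = rng.standard_normal((3, 3)) * 0.1
out = residual_block(slice_image, k1, k2)
print(out.shape)  # (16, 16): same spatial size as the input slice
```

Because of the skip connection, the output stays close to the input when the kernels are small, which is what makes deep stacks of such blocks easy to train.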

Training:

```
usage: python train2d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                         [--numofchannels NUMOFCHANNELS] [--epochs EPOCHS]
                         [--lr LR] [--datapath DATAPATH]

Training 2D-CNN

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Training batch size
  --numofchannels NUMOFCHANNELS
                        Number of CNN input channels
  --epochs EPOCHS       Number of training epochs
  --lr LR               Initial learning rate
  --datapath DATAPATH   Path to the data
```

Testing:

```
usage: python test_2d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                         [--numofchannels NUMOFCHANNELS] [--datapath DATAPATH]
                         [--testcases TESTCASES]

2D-CNN testing

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Testing batch size
  --numofchannels NUMOFCHANNELS
                        Number of CNN input channels
  --datapath DATAPATH   Path to the data
  --testcases TESTCASES
                        List of test subject IDs
```

Training and Testing the MESC-SD network

[Block diagram: MESC-SD]

The MESC-SD uses a dictionary-based sparse coding representation to jointly estimate parametric diffusion maps using 3D input patches. The implementation codes are under ./3D_MESC_SD/.
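
The 3D-patch input can be illustrated with a small helper that slides a cubic window over a 4D (x, y, z, q-channel) volume and flattens each neighborhood into one input vector. The 3×3×3 patch size and the 10 q-space channels below are arbitrary examples; the actual patch extraction in this repository (and the sparse-coding network itself) may differ.

```python
import numpy as np

def extract_patches_3d(volume, patch=3):
    """Slide a cubic window over a (x, y, z, channels) volume and return
    one flattened 3D patch per interior voxel, shape (n_patches, patch**3 * c)."""
    x, y, z, c = volume.shape
    r = patch // 2
    patches = []
    for i in range(r, x - r):
        for j in range(r, y - r):
            for k in range(r, z - r):
                p = volume[i - r:i + r + 1, j - r:j + r + 1, k - r:k + r + 1, :]
                patches.append(p.reshape(-1))
    return np.stack(patches)

rng = np.random.default_rng(2)
vol = rng.standard_normal((6, 6, 6, 10))  # 10 undersampled q-space channels
P = extract_patches_3d(vol, patch=3)
print(P.shape)  # (64, 270): 4*4*4 interior voxels, 27 voxels * 10 channels each
```

Each row of P would then be fed to the network, which predicts the diffusion parameters of the patch's center voxel; using a neighborhood rather than a single voxel lets the model exploit local spatial context.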

Training:

```
usage: python train_3d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                          [--numofchannels NUMOFCHANNELS] [--numofvoxels NUMOFVOXELS]
                          [--numhiddenlstm NUMHIDDENLSTM] [--numhiddenfc NUMHIDDENFC]
                          [--epochs EPOCHS] [--lr LR] [--datapath DATAPATH]

Training MESC-SD

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Training batch size
  --numofchannels NUMOFCHANNELS
                        Number of MESCSD input channels
  --numofvoxels NUMOFVOXELS
                        Number of voxels in 3D input patches
  --numhiddenlstm NUMHIDDENLSTM
                        Number of hidden nodes in LSTM units
  --numhiddenfc NUMHIDDENFC
                        Number of hidden nodes in FC layers
  --epochs EPOCHS       Number of training epochs
  --lr LR               Initial learning rate
  --datapath DATAPATH   Path to the data
```

Testing:

```
usage: python test3d.py [-h] [--sampling SAMPLING] [--batchsize BATCHSIZE]
                        [--numofchannels NUMOFCHANNELS] [--numofvoxels NUMOFVOXELS]
                        [--numhiddenlstm NUMHIDDENLSTM] [--numhiddenfc NUMHIDDENFC]
                        [--datapath DATAPATH] [--testcases TESTCASES]

MESC-SD testing

optional arguments:
  -h, --help            show this help message and exit
  --sampling SAMPLING   Q-space undersampling pattern name
  --batchsize BATCHSIZE
                        Testing batch size
  --numofchannels NUMOFCHANNELS
                        Number of MESCSD input channels
  --numofvoxels NUMOFVOXELS
                        Number of voxels in 3D input patches
  --numhiddenlstm NUMHIDDENLSTM
                        Number of hidden nodes in LSTM units
  --numhiddenfc NUMHIDDENFC
                        Number of hidden nodes in FC layers
  --datapath DATAPATH   Path to the data
  --testcases TESTCASES
                        List of test subject IDs
```

Owner

  • Name: UCAIR DiBella group
  • Login: edibella
  • Kind: user
  • Location: Salt Lake City, UT
  • Company: UCAIR, Department of Radiology and Imaging Sciences, University of Utah

This is the account for the research group of Professor Edward DiBella at UCAIR (Utah Center for Advanced Imaging Research).

Citation (CITATION.cff)

# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!

cff-version: 1.2.0
title: >-
  Jointly estimating parametric maps of multiple
  diffusion models from undersampled q-space data: A
  comparison of three deep learning approaches
message: >-
  If you use the codes in this repository, please
  cite it as below.
type: article
authors:
  - given-names: SeyyedKazem
    family-names: HashemizadehKolowri
    orcid: 'https://orcid.org/0000-0003-3947-1427'
  - given-names: Rong-Rong
    family-names: Chen
  - given-names: Ganesh
    family-names: Adluru
  - given-names: Edward E. V. R.
    family-names: DiBella
    orcid: 'https://orcid.org/0000-0001-9196-3731'
volume: 
number:
start:
end:
month: 1
year: 2022
abstract: >-
  Purpose While advanced diffusion techniques have
  been found valuable in many studies, their clinical
  availability has been hampered partly due to their
  long scan times. Moreover, each diffusion technique
  can only extract a few relevant microstructural
  features. Using multiple diffusion methods may help
  to better understand the brain microstructure,
  which requires multiple expensive model fittings.
  In this work, we compare deep learning (DL)
  approaches to jointly estimate parametric maps of
  multiple diffusion representations/models from
  highly undersampled q-space data. Methods We
  implement three DL approaches to jointly estimate
  parametric maps of diffusion tensor imaging (DTI),
  diffusion kurtosis imaging (DKI), neurite
  orientation dispersion and density imaging (NODDI),
  and multi-compartment spherical mean technique
  (SMT). A per-voxel q-space deep learning (1D-qDL),
  a per-slice convolutional neural network (2D-CNN),
  and a 3D-patch-based microstructure estimation with
  sparse coding using a separable dictionary
  (MESC-SD) network are considered. Results The
  accuracy of estimated diffusion maps depends on the
  q-space undersampling, the selected network
  architecture, and the region and the parameter of
  interest. The smallest errors are observed for the
  MESC-SD network architecture (less than 10\%
  normalized RMSE in most brain regions). Conclusion
  Our experiments show that DL methods are very
  efficient tools to simultaneously estimate several
  diffusion maps from undersampled q-space data.
  These methods can significantly reduce both the
  scan (∼6-fold) and processing times (∼25-fold) for
  estimating advanced parametric diffusion maps while
  achieving a reasonable accuracy.
keywords:
  - >-
    deep learning, joint estimation, multiple
    diffusion models, undersampled q-Space


Committers

Last synced: 12 months ago

All Time
  • Total Commits: 51
  • Total Committers: 2
  • Avg Commits per committer: 25.5
  • Development Distribution Score (DDS): 0.02
Past Year
  • Commits: 0
  • Committers: 0
  • Avg Commits per committer: 0.0
  • Development Distribution Score (DDS): 0.0
Top Committers
Name Email Commits
UCAIR DiBella group E****a@h****u 50
Kazem s****i@o****u 1

Issues and Pull Requests

Last synced: 12 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Top Authors
Issue Authors
  • 1nlandempire (1)