Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (5.5%) to scientific vocabulary
Last synced: 7 months ago

Repository

Basic Info
  • Host: GitHub
  • Owner: Thanaporn09
  • Language: Python
  • Default Branch: main
  • Size: 9.02 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created over 2 years ago · Last pushed over 2 years ago
Metadata Files
  • Readme
  • Citation

README.md

Unsupervised Learning-Based Motion Artifact Reduction for Cone-Beam CT via Enhanced Landmark Detection

This is the official PyTorch implementation repository of TriForceNet, from "Unsupervised Learning-Based Motion Artifact Reduction for Cone-Beam CT via Enhanced Landmark Detection": https://github.com/Thanaporn09/TriForceNet.git

Dataset

  • We have used the following dataset:
    • 4D XCAT Head CBCT dataset: Segars, W.P., Sturgeon, G., Mendonca, S., Grimes, J., Tsui, B.M.: 4d xcat phantom for multimodality imaging research. Medical physics 37(9), 4902–4915 (2010)

Prerequisites

  • Python 3.7
  • MMpose 0.23

Usage of the code

  • Dataset format
    • The dataset should be organized as follows:

      inputs: .PNG images and JSON files
      └── <dataset name>
          ├── 2D_images
          │   ├── 001.png
          │   ├── 002.png
          │   ├── 003.png
          │   └── ...
          └── JSON
              ├── train.json
              └── test.json

    • Output: 2D landmark coordinates
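MMPose-based pipelines typically store landmark annotations in COCO-style keypoint JSON files. The sketch below shows one way to read per-image landmark coordinates from such a train.json; the exact schema (image/annotation fields, flat keypoint triples) is an assumption based on the COCO convention, not confirmed by this repository.

```python
import json

def load_landmarks(json_path):
    """Parse an assumed COCO-style annotation file and return a mapping
    from image file name to its list of (x, y) landmark coordinates."""
    with open(json_path) as f:
        coco = json.load(f)
    # Map image ids to file names so annotations can be keyed by name.
    id_to_name = {img["id"]: img["file_name"] for img in coco["images"]}
    landmarks = {}
    for ann in coco["annotations"]:
        kps = ann["keypoints"]  # flat [x1, y1, v1, x2, y2, v2, ...] triples
        points = [(kps[i], kps[i + 1]) for i in range(0, len(kps), 3)]
        landmarks[id_to_name[ann["image_id"]]] = points
    return landmarks
```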

  • Train the model

    • To train the TriForceNet model, run sh train.sh:

      # sh train.sh
      CUDA_VISIBLE_DEVICES=gpu_ids PORT=PORT_NUM ./tools/dist_train.sh \
          config_file_path num_gpus
  • Evaluation

    • To evaluate the trained TriForceNet model, run sh test.sh:

      # sh test.sh
      CUDA_VISIBLE_DEVICES=gpu_id PORT=29504 ./tools/dist_test.sh config_file_path \
          model_weight_path num_gpus
      # For evaluation of the Head XCAT dataset, add:
      #     --eval 'MRE_h','MRE_std_h','SDR_2_h','SDR_2.5_h','SDR_3_h','SDR_4_h'
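The evaluation flags above correspond to standard landmark-detection metrics: mean radial error (MRE), its standard deviation, and successful detection rate (SDR) within 2/2.5/3/4 mm. A minimal sketch of how such metrics are commonly computed; the function name, array shapes, and the isotropic pixel spacing are assumptions for illustration, not taken from this repository.

```python
import numpy as np

def landmark_metrics(pred, gt, spacing_mm=1.0, thresholds=(2.0, 2.5, 3.0, 4.0)):
    """Compute MRE, its std, and SDR at the given thresholds (in mm).

    pred, gt: arrays of shape (num_samples, num_landmarks, 2) in pixels.
    spacing_mm: assumed isotropic pixel spacing used to convert to mm.
    """
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    # Euclidean distance per landmark, converted from pixels to mm.
    radial = np.linalg.norm(pred - gt, axis=-1) * spacing_mm
    return {
        "MRE": radial.mean(),
        "MRE_std": radial.std(),
        # Fraction of landmarks detected within each threshold.
        **{f"SDR_{t}": (radial <= t).mean() for t in thresholds},
    }
```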

Owner

  • Login: Thanaporn09
  • Kind: user

Citation (CITATION.cff)

message: "(Citation will be updated) If you use this code, please cite it as below."
authors:
  - name: "Thanaporn Viriyasaranon, Serie Ma, and Jang-Hwan Choi"
title: "Anatomical Landmark Detection Using a Multiresolution Learning Approach with a Hybrid Transformer-CNN Model"
date-released: 2020-05-30
url: "https://github.com/seriee/Multiresolution-Learning-based-Hybrid-Transformer-CNN-Model-for-Anatomical-Landmark-Detection"


Dependencies

docker/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
docker/serve/Dockerfile docker
  • pytorch/pytorch ${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel build
requirements/build.txt pypi
  • numpy *
  • torch >=1.3
requirements/docs.txt pypi
  • docutils ==0.16.0
  • myst-parser *
  • sphinx ==4.0.2
  • sphinx_copybutton *
  • sphinx_markdown_tables *
requirements/mminstall.txt pypi
  • mmcv-full >=1.3.8
  • mmdet >=2.14.0
  • mmtrack >=0.6.0
requirements/optional.txt pypi
  • albumentations >=0.3.2
  • onnx *
  • onnxruntime *
  • pyrender *
  • requests *
  • smplx >=0.1.28
  • trimesh *
requirements/readthedocs.txt pypi
  • mmcv-full *
  • munkres *
  • regex *
  • scipy *
  • titlecase *
  • torch *
  • torchvision *
  • xtcocotools >=1.8
requirements/runtime.txt pypi
  • chumpy *
  • dataclasses *
  • json_tricks *
  • matplotlib *
  • munkres *
  • numpy *
  • opencv-python *
  • pillow *
  • scipy *
  • torchvision *
  • xtcocotools >=1.8
requirements/tests.txt pypi
  • coverage * test
  • flake8 * test
  • interrogate * test
  • isort ==4.3.21 test
  • pytest * test
  • pytest-runner * test
  • smplx >=0.1.28 test
  • xdoctest >=0.10.0 test
  • yapf * test
requirements.txt pypi
setup.py pypi