sat-ds

The official repository for building SAT-DS, a medical data collection of 72 public segmentation datasets, containing over 22K 3D images, 302K segmentation masks and 497 classes from 3 different modalities (MRI, CT, PET) and 8 human body regions.

https://github.com/zhaoziheng/sat-ds

Science Score: 54.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
    Links to: arxiv.org, zenodo.org
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.9%) to scientific vocabulary
Last synced: 6 months ago

Repository

The official repository for building SAT-DS, a medical data collection of 72 public segmentation datasets, containing over 22K 3D images, 302K segmentation masks and 497 classes from 3 different modalities (MRI, CT, PET) and 8 human body regions.

Basic Info
  • Host: GitHub
  • Owner: zhaoziheng
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 2.39 MB
Statistics
  • Stars: 113
  • Watchers: 2
  • Forks: 2
  • Open Issues: 0
  • Releases: 0
Created almost 2 years ago · Last pushed 6 months ago
Metadata Files
Readme · Citation

README.md

SAT-DS


This is the official repository for building SAT-DS, a medical data collection of 72 public segmentation datasets, containing over 22K 3D images, 302K segmentation masks and 497 classes from 3 different modalities (MRI, CT, PET) and 8 human body regions. 🚀

Based on this data collection, we build a universal segmentation model for 3D radiology scans driven by text prompts (check this repo and our paper).

We have added 7 more datasets absent from SAT-DS; check the table below. The data collection will continue growing, stay tuned!

Highlight

🎉 To save you the time of downloading and preprocessing so many datasets, we offer shortcut download links for 42 of the 72 datasets in SAT-DS whose licenses (such as CC BY-SA) allow redistribution. Find them on dropbox or baiduyun.

All these datasets are preprocessed and packaged by us for your convenience, ready for immediate use upon download and extraction. Download the datasets you need and unzip them in data/nii; they can then be used immediately with the paired jsonl files in data/jsonl (check Step 3 below for how to use them). Note that we respect and adhere to the licenses of all the datasets; if we have incorrectly redistributed any of them, please contact us.
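Given the directory layout described above (dataset folders under data/nii, paired jsonl files under data/jsonl), a quick sanity check after unzipping could look like the following sketch. The function name `check_downloads` is our own illustration, not part of this repo:

```python
from pathlib import Path

def check_downloads(root="SAT-DS/data"):
    """For each dataset folder under data/nii, report whether a paired
    jsonl file exists under data/jsonl. Returns {dataset_name: bool}."""
    nii_dir = Path(root) / "nii"
    jsonl_dir = Path(root) / "jsonl"
    status = {}
    for dataset in sorted(p.name for p in nii_dir.iterdir() if p.is_dir()):
        # the paired jsonl is assumed to share the dataset folder's name
        status[dataset] = (jsonl_dir / f"{dataset}.jsonl").exists()
    return status
```

Datasets reported as missing their jsonl either still need Step 2 below, or were unzipped under a name that does not match the jsonl file.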

What we have done in building SAT-DS:

  • Collect as many public datasets as possible for 3D medical segmentation, and compile their basic information;
  • Check and normalize image scans in each dataset, including orientation, spacing and intensity;
  • Check, standardize, and merge the label names for categories in each dataset;
  • Carefully split each dataset into train and test set by the patient id.

What we offer in this repo:

  • (Step 1) Access to each dataset in SAT-DS.
  • (Step 2) Code to preprocess samples in each dataset.
  • (Shortcut to skip Step 1 and 2) Access to preprocessed and packaged datasets that can be used immediately.
  • (Step 3) Code to load samples with normalized image, standardized class names from each dataset.
  • (Step 3) Code to visualize and check the samples.
  • (Step 4) Code to prepare the train and evaluation data for SAT in required format.
  • (Step 5) Code to split the dataset into train and test sets consistent with SAT.

This repo can be used to:

  • (Follow steps 1~3) Preprocess and unify a large-scale, comprehensive 3D medical segmentation data collection, suitable for training or finetuning universal segmentation models like SAM2.
  • (Follow steps 1~6) Prepare the training and test data in the format required by SAT.

SAT-DS for benchmarking

We provide the detailed configurations of all the specialist models (nnU-Nets, U-Mambas, SwinUNETR) we have trained and evaluated on each of these datasets; check them here. nnU-Nets are trained and evaluated following the official guidance, while U-Mamba and SwinUNETR follow the official implementation of U-Mamba. Their results are reported in our paper.

Check our paper "One Model to Rule them All: Towards Universal Segmentation for Medical Images with Text Prompts" for more details.

ArXiv

Website

Example Figure

Step 1: Download datasets

This is the detailed list of all the datasets and their official download links. Datasets marked with * are absent from SAT-DS, and thus we do not provide train-test split files for them. Citation information for each dataset can be found in citation.bib.

As a shortcut, we preprocess, package and redistribute some of them for your convenient use. Download them here.

| Dataset Name | Modality | Region | Classes | Scans | Download link |
|---|---|---|---|---|---|
| AbdomenCT1K | CT | Abdomen | 4 | 988 | https://github.com/JunMa11/AbdomenCT-1K |
| ACDC | CT | Thorax | 4 | 300 | https://humanheart-project.creatis.insa-lyon.fr/database/ |
| AMOS CT | CT | Abdomen | 16 | 300 | https://zenodo.org/records/7262581 |
| AMOS MRI | MRI | Thorax | 16 | 60 | https://zenodo.org/records/7262581 |
| ATLASR2 | MRI | Brain | 1 | 654 | http://fcon1000.projects.nitrc.org/indi/retro/atlas.html |
| ATLAS | MRI | Abdomen | 2 | 60 | https://atlas-challenge.u-bourgogne.fr |
| autoPET | PET | Whole Body | 1 | 501 | https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=93258287 |
| Brain Atlas | MRI | Brain | 108 | 30 | http://brain-development.org/ |
| BrainPTM | MRI | Brain | 7 | 60 | https://brainptm-2021.grand-challenge.org/ |
| BraTS2023 GLI | MRI | Brain | 4 | 5004 | https://www.synapse.org/#!Synapse:syn51514105 |
| BraTS2023 MEN | MRI | Brain | 4 | 4000 | https://www.synapse.org/#!Synapse:syn51514106 |
| BraTS2023 MET | MRI | Brain | 4 | 951 | https://www.synapse.org/#!Synapse:syn51514107 |
| BraTS2023 PED | MRI | Brain | 4 | 396 | https://www.synapse.org/#!Synapse:syn51514108 |
| BraTS2023 SSA | MRI | Brain | 4 | 240 | https://www.synapse.org/#!Synapse:syn51514109 |
| BTCV Abdomen | CT | Abdomen | 15 | 30 | https://www.synapse.org/#!Synapse:syn3193805/wiki/217789 |
| BTCV Cervix | CT | Abdomen | 4 | 30 | https://www.synapse.org/Synapse:syn3378972 |
| CHAOS CT | CT | Abdomen | 1 | 20 | https://chaos.grand-challenge.org/ |
| CHAOS MRI | MRI | Abdomen | 5 | 60 | https://chaos.grand-challenge.org/ |
| CMRxMotion | MRI | Thorax | 4 | 138 | https://www.synapse.org/#!Synapse:syn28503327/files/ |
| Couinaud | CT | Abdomen | 10 | 161 | https://github.com/GLCUnet/dataset |
| COVID-19 CT Seg | CT | Thorax | 4 | 20 | https://github.com/JunMa11/COVID-19-CT-Seg-Benchmark |
| CrossMoDA2021 | MRI | Head and Neck | 2 | 105 | https://crossmoda.grand-challenge.org/Data/ |
| CT-ORG | CT | Whole Body | 6 | 140 | https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=61080890 |
| CTPelvic1K | CT | Lower Limb | 5 | 117 | https://zenodo.org/record/4588403#.YEyLq0zaCo |
| DAP Atlas | CT | Whole Body | 179 | 533 | https://github.com/alexanderjaus/AtlasDataset |
| FeTA2022 | MRI | Brain | 7 | 80 | https://feta.grand-challenge.org/data-download/ |
| FLARE22 | CT | Abdomen | 15 | 50 | https://flare22.grand-challenge.org/ |
| FUMPE | CT | Thorax | 1 | 35 | https://www.kaggle.com/datasets/andrewmvd/pulmonary-embolism-in-ct-images |
| HAN Seg | CT | Head and Neck | 41 | 41 | https://zenodo.org/record/ |
| HECKTOR2022 | PET | Head and Neck | 2 | 524 | https://hecktor.grand-challenge.org/Data/ |
| INSTANCE | CT | Brain | 1 | 100 | https://instance.grand-challenge.org/Dataset/ |
| ISLES2022 | MRI | Brain | 1 | 500 | http://www.isles-challenge.org/ |
| KiPA22 | CT | Abdomen | 4 | 70 | https://kipa22.grand-challenge.org/dataset/ |
| KiTS23 | CT | Abdomen | 3 | 489 | https://github.com/neheller/kits23 |
| LAScarQS2022 Task 1 | MRI | Thorax | 2 | 60 | https://zmiclab.github.io/projects/lascarqs22/data.html |
| LAScarQS2022 Task 2 | MRI | Thorax | 1 | 130 | https://zmiclab.github.io/projects/lascarqs22/data.html |
| LNDb | CT | Thorax | 1 | 236 | https://zenodo.org/record/7153205#.YzoVHbMJPZ |
| LUNA16 | CT | Thorax | 1 | 888 | https://luna16.grand-challenge.org/ |
| MM-WHS CT | CT | Thorax | 9 | 40 | https://mega.nz/folder/UNMF2YYI#1cqJVzo4pwESv9Ppc8uA |
| MM-WHS MR | MRI | Thorax | 9 | 40 | https://mega.nz/folder/UNMF2YYI#1cqJVzo4pwESv9Ppc8uA |
| MRSpineSeg | MRI | Spine | 23 | 91 | https://www.cg.informatik.uni-siegen.de/en/spine-segmentation-and-analysis |
| MSD Cardiac | MRI | Thorax | 1 | 20 | http://medicaldecathlon.com/ |
| MSD Colon | CT | Abdomen | 1 | 126 | http://medicaldecathlon.com/ |
| MSD HepaticVessel | CT | Abdomen | 2 | 303 | http://medicaldecathlon.com/ |
| MSD Hippocampus | MRI | Brain | 3 | 260 | http://medicaldecathlon.com/ |
| MSD Liver | CT | Abdomen | 2 | 131 | http://medicaldecathlon.com/ |
| MSD Lung | CT | Thorax | 1 | 63 | http://medicaldecathlon.com/ |
| MSD Pancreas | CT | Abdomen | 2 | 281 | http://medicaldecathlon.com/ |
| MSD Prostate | MRI | Pelvis | 2 | 64 | http://medicaldecathlon.com/ |
| MSD Spleen | CT | Abdomen | 1 | 41 | http://medicaldecathlon.com/ |
| MyoPS2020 | MRI | Thorax | 6 | 135 | https://mega.nz/folder/BRdnDISQ#FnCg9ykPlTWYe5hrRZxi-w |
| NSCLC | CT | Thorax | 2 | 85 | https://wiki.cancerimagingarchive.net/pages/viewpage.action?pageId=68551327 |
| Pancreas CT | CT | Abdomen | 1 | 80 | https://wiki.cancerimagingarchive.net/display/public/pancreas-ct |
| Parse2022 | CT | Thorax | 1 | 100 | https://parse2022.grand-challenge.org/Dataset/ |
| PDDCA | CT | Head and Neck | 12 | 48 | https://www.imagenglab.com/newsite/pddca/ |
| PROMISE12 | MRI | Pelvis | 1 | 50 | https://promise12.grand-challenge.org/Details/ |
| SEGA | CT | Whole Body | 1 | 56 | https://multicenteraorta.grand-challenge.org/data/ |
| SegRap2023 Task1 | CT | Head and Neck | 61 | 120 | https://segrap2023.grand-challenge.org/ |
| SegRap2023 Task2 | CT | Thorax | 2 | 120 | https://segrap2023.grand-challenge.org/ |
| SegTHOR | CT | Thorax | 4 | 40 | https://competitions.codalab.org/competitions/21145#learnthedetails |
| SKI10 | CT | Upper Limb | 4 | 99 | https://ambellan.de/sharing/QjrntLwah |
| SLIVER07 | CT | Abdomen | 1 | 20 | https://sliver07.grand-challenge.org/ |
| ToothFairy | MRI | Head and Neck | 4 | 153 | https://ditto.ing.unimore.it/toothfairy/ |
| TotalSegmentator Cardiac | CT | Whole Body | 17 | 1202 | https://zenodo.org/record/6802614 |
| TotalSegmentator Muscles | CT | Whole Body | 31 | 1202 | https://zenodo.org/record/6802614 |
| TotalSegmentator Organs | CT | Whole Body | 24 | 1202 | https://zenodo.org/record/6802614 |
| TotalSegmentator Ribs | CT | Whole Body | 39 | 1202 | https://zenodo.org/record/6802614 |
| TotalSegmentator Vertebrae | CT | Whole Body | 29 | 1202 | https://zenodo.org/record/6802614 |
| TotalSegmentator V2 | CT | Whole Body | 24 | 1202 | https://zenodo.org/record/6802614 |
| VerSe | CT | Whole Body | 29 | 96 | https://github.com/anjany/verse |
| WMH | MRI | Brain | 1 | 170 | https://wmh.isi.uu.nl/ |
| WORD | CT | Abdomen | 18 | 150 | https://github.com/HiLab-git/WORD |
| AbdomenAtlas * | CT | Abdomen | 29 | 9262 | https://huggingface.co/datasets/AbdomenAtlas/AbdomenAtlas1.0Mini |
| LiQA * | MRI | Abdomen | 1 | 30 | https://www.zmic.org.cn/care2024/track3/ |
| Adrenal ACC Ki67 * | CT | Abdomen | 1 | 29 | https://www.cancerimagingarchive.net/collection/adrenal-acc-ki67-seg/ |
| ATM22 * | CT | Thorax | 1 | 279 | https://paperswithcode.com/dataset/atm22 |
| RibFrac * | CT | Thorax | 1 | 500 | https://ribfrac.grand-challenge.org/ |
| LIDC-IDRI * | CT | Thorax | 1 | 2236 | https://www.cancerimagingarchive.net/collection/lidc-idri/ |
| LNQ2023 * | CT | Thorax | 1 | 393 | https://lnq2023.grand-challenge.org/ |

Step 2: Preprocess datasets

For each dataset, we need to find all the image and mask pairs, plus 5 more pieces of basic information: dataset name, modality, label names, patient ids (to split the train-test set) and the official split (if provided).

In processor.py, we customize the processing procedure for each dataset to generate a jsonl file carrying this information for each sample. Take AbdomenCT1K for instance; you need to run the following command:

```
python processor.py \
  --dataset_name AbdomenCT1K \
  --root_path 'SAT-DS/data/nii/AbdomenCT-1K' \
  --jsonl_dir 'SAT-DS/data/jsonl'
```

root_path should be where you download and place the data; jsonl_dir should be where you plan to place the jsonl files.

⚠️ Note that dataset_name and the name in the table might not be exactly the same. For specific details, please refer to each processing function in processor.py.

After processing, each sample in the jsonl file looks like:

```
{
  'image': "SAT-DS/data/nii/AbdomenCT-1K/Images/Case_00558_0000.nii.gz",
  'mask': "SAT-DS/data/nii/AbdomenCT-1K/Masks/Case_00558.nii.gz",
  'label': ["liver", "kidney", "spleen", "pancreas"],
  'modality': 'CT',
  'dataset': 'AbdomenCT1K',
  'official_split': 'unknown',
  'patient_id': 'Case_00558_0000.nii.gz',
}
```

Note that in this step we may convert the images and masks into new NIfTI files for some datasets, such as TotalSegmentator, so it may take some time.
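After running processor.py, the generated jsonl can be spot-checked with a few lines of standard-library Python. This is our own sketch (the field names are those shown in the sample above; `validate_jsonl` is not a function of this repo):

```python
import json

# fields each processed sample is expected to carry, per the README sample
REQUIRED_KEYS = {"image", "mask", "label", "modality",
                 "dataset", "official_split", "patient_id"}

def validate_jsonl(path):
    """Check that every line parses as JSON and carries the expected
    fields; return the number of samples in the file."""
    n = 0
    with open(path) as f:
        for line in f:
            sample = json.loads(line)
            missing = REQUIRED_KEYS - sample.keys()
            assert not missing, f"sample {n} missing fields: {missing}"
            n += 1
    return n
```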

Shortcut to skip Step 1 and 2: Download the preprocessed and packaged data for immediate use

We offer shortcut download links of 42 datasets in dropbox. All these datasets are preprocessed and packaged in advance. Download the datasets you need and unzip them in data/nii, each dataset is paired with a jsonl file in data/jsonl.

Step 3: Load data with unified normalization

With the generated jsonl file, a dataset is now ready to be used.

However, when mixing all the datasets to train a universal segmentation model, we need to normalize image intensity, orientation and spacing across all the datasets, and adjust labels if necessary. We realize this by customizing the load script for each dataset in loader.py. This is a simple demo of how to use it in your code:

```
import json

from loader import Loader_Wrapper

loader = Loader_Wrapper()

# load samples from jsonl
with open('SAT-DS/data/jsonl/AbdomenCT1K.jsonl', 'r') as f:
    lines = f.readlines()
    data = [json.loads(line) for line in lines]

# load a sample (func_name is the dataset-specific loading function in loader.py)
for sample in data:
    batch = getattr(loader, func_name)(sample)
    img_tensor, mc_mask, text_ls, modality, image_path, mask_path = batch
```

**For each sample, whatever dataset it comes from, the loader will give output in a normalized format**:

```
img_tensor  # tensor with shape (1, H, W, D)
mc_mask     # binary tensor with shape (N, H, W, D), one channel for each class
text_ls     # a list of N class names
modality    # MRI, CT or PET
image_path  # path to the loaded image file
mask_path   # path to the loaded mask file
```

⚠️ Note that we may merge and adjust labels here in the loader. Therefore, the output `text_ls` may differ from the `label` you see in the input jsonl file. Here is a case where we merge `left kidney` and `right kidney` into a new label `kidney` when loading samples from CHAOS MRI:

```
kidney = mask[1] + mask[2]
mask = torch.cat((mask, kidney.unsqueeze(0)), dim=0)
labels.append("kidney")
```

And here is another case where we adjust the annotation of `kidney` by integrating the annotations of `kidney tumor` and `kidney cyst`:

```
mc_masks[0] += mc_masks[1]
mc_masks[0] += mc_masks[2]
```

We also offer a shortcut to visualize and check any sample in any dataset after normalization. For example, to visualize the first sample in AbdomenCT1K.jsonl, just run the following command:

```
python loader.py \
  --visualization_dir 'SAT-DS/data/visualization' \
  --path2jsonl 'SAT-DS/data/jsonl/AbdomenCT1K.jsonl' \
  --i 0
```
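Beyond visual checks, it can help to summarize how often each class appears across a dataset's jsonl file. A minimal sketch using the `label` field from Step 2 (`label_histogram` is our own helper, not part of this repo):

```python
import json
from collections import Counter

def label_histogram(jsonl_path):
    """Count how many samples carry each class name in a jsonl file."""
    counts = Counter()
    with open(jsonl_path) as f:
        for line in f:
            # each line is one sample; its "label" field lists class names
            counts.update(json.loads(line)["label"])
    return counts
```

Classes with very low counts are worth inspecting with the visualization command above before training on them.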

(Optional) Step 4: Convert to npy files

For convenience, before training SAT, we normalize all the data as in Step 3 and convert the images and segmentation masks to npy files. If you use our training code, run this command for each dataset:

```
python convert_to_npy.py \
  --jsonl2load 'SAT-DS/data/jsonl/AbdomenCT1K.jsonl' \
  --jsonl2save 'SAT-DS/data/jsonl/AbdomenCT1K.jsonl'
```

The converted npy files will be saved in preprocessed_npy/dataset_name, and some new information will be added to the jsonl file for convenience in loading the npy files.

(Optional) Step 5: Split train and test set

We offer the train-test split used in our paper for each dataset as json files. To follow our split and benchmark your method, simply run this command:

```
python train_test_split.py \
  --jsonl2split 'SAT-DS/data/jsonl/AbdomenCT1K.jsonl' \
  --train_jsonl 'SAT-DS/data/trainset_jsonl/AbdomenCT1K.jsonl' \
  --test_jsonl 'SAT-DS/data/testset_jsonl/AbdomenCT1K.jsonl' \
  --split_json 'SAT-DS/data/split_json/AbdomenCT1K.json'
```

This will split the jsonl file into train and test jsonl files.

Or, if you want to re-split the data, customize your split by listing the patient ids in the json file (the patient_id of each sample can be found in the dataset's jsonl file):

```
{'train': ['train_patient_id1', ...], 'test': ['test_patient_id1', ...]}
```
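A custom split json of the form above can be produced from a dataset's jsonl in a few lines. The sketch below shuffles unique patient ids at a chosen ratio; it is our own illustration (not the official split logic), with a fixed seed so the split is reproducible:

```python
import json
import random

def make_split(jsonl_path, out_json, test_ratio=0.2, seed=42):
    """Collect unique patient_ids from a jsonl file and split them
    into disjoint train/test lists, written as a split json."""
    with open(jsonl_path) as f:
        ids = sorted({json.loads(line)["patient_id"] for line in f})
    random.Random(seed).shuffle(ids)
    n_test = max(1, int(len(ids) * test_ratio))  # at least one test patient
    split = {"train": ids[n_test:], "test": ids[:n_test]}
    with open(out_json, "w") as f:
        json.dump(split, f)
    return split
```

Splitting by patient id (rather than by scan) keeps all scans of one patient on the same side of the split, which is what the per-patient splitting above requires.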

(Optional) Step 6: DIY your data collection

To customize the data collection used to train your model, simply merge the train jsonls of the datasets you want to involve. For example, merge the jsonls of all 72 datasets into train.jsonl, and you can use them together to train SAT, using our training code in this repo.

Similarly, you can customize a benchmark from any datasets you want by merging their test jsonls.
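Since each jsonl line is an independent JSON record, merging per-dataset jsonls is plain line concatenation. A minimal sketch (`merge_jsonls` is our own helper, not part of this repo):

```python
def merge_jsonls(jsonl_paths, out_path):
    """Concatenate several per-dataset jsonl files into one jsonl;
    returns the total number of samples written."""
    n = 0
    with open(out_path, "w") as out:
        for path in jsonl_paths:
            with open(path) as f:
                for line in f:
                    if line.strip():  # skip blank lines
                        out.write(line.rstrip("\n") + "\n")
                        n += 1
    return n
```

The same call works for building a custom benchmark: pass the test jsonls instead of the train jsonls.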

Citation

If you use this code for your research or project, please cite:

```
@misc{zhao2025largevocabularysegmentationmedicalimages,
  title={Large-Vocabulary Segmentation for Medical Images with Text Prompts},
  author={Ziheng Zhao and Yao Zhang and Chaoyi Wu and Xiaoman Zhang and Xiao Zhou and Ya Zhang and Yanfeng Wang and Weidi Xie},
  year={2025},
  eprint={2312.17183},
  archivePrefix={arXiv},
  primaryClass={eess.IV},
  url={https://arxiv.org/abs/2312.17183},
}
```

And if you use any of the datasets in SAT-DS, please cite the corresponding papers. Summarized citation information can be found in citation.bib.

Owner

  • Login: zhaoziheng
  • Kind: user

Citation (citation.bib)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% Datasets
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

@article{AbdomenCT1K,
  title={Abdomenct-1k: Is abdominal organ segmentation a solved problem?},
  author={Ma, Jun and Zhang, Yao and Gu, Song and Zhu, Cheng and Ge, Cheng and Zhang, Yichi and An, Xingle and Wang, Congcong and Wang, Qiyuan and Liu, Xin and others},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume={44},
  number={10},
  pages={6695--6714},
  year={2021},
  publisher={IEEE}
}

@article{Abdomenatlas,
  title={Abdomenatlas-8k: Annotating 8,000 CT volumes for multi-organ segmentation in three weeks},
  author={Qu, Chongyu and Zhang, Tiezheng and Qiao, Hualin and Tang, Yucheng and Yuille, Alan L and Zhou, Zongwei and others},
  journal={Advances in Neural Information Processing Systems},
  volume={36},
  year={2023}
}

@article{ACDC,
  title={Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved?},
  author={Bernard, Olivier and Lalande, Alain and Zotti, Clement and Cervenansky, Frederick and Yang, Xin and Heng, Pheng-Ann and Cetin, Irem and Lekadir, Karim and Camara, Oscar and Ballester, Miguel Angel Gonzalez and others},
  journal={IEEE Transactions on Medical Imaging},
  volume={37},
  number={11},
  pages={2514--2525},
  year={2018},
  publisher={IEEE}
}

@article{AMOS22,
  title={AMOS: A Large-Scale Abdominal Multi-Organ Benchmark for Versatile Medical Image Segmentation},
  author={Ji, Yuanfeng and Bai, Haotian and Yang, Jie and Ge, Chongjian and Zhu, Ye and Zhang, Ruimao and Li, Zhen and Zhang, Lingyan and Ma, Wanling and Wan, Xiang and others},
  journal={arXiv preprint arXiv:2206.08023},
  year={2022}
}

@article{ATLAS,
  title={A Tumour and Liver Automatic Segmentation (ATLAS) Dataset on Contrast-Enhanced Magnetic Resonance Imaging for Hepatocellular Carcinoma},
  author={Quinton, F{\'e}lix and Popoff, Romain and Presles, Beno{\^\i}t and Leclerc, Sarah and Meriaudeau, Fabrice and Nodari, Guillaume and Lopez, Olivier and Pellegrinelli, Julie and Chevallier, Olivier and Ginhac, Dominique and others},
  journal={Data},
  volume={8},
  number={5},
  pages={79},
  year={2023},
  publisher={MDPI}
}

@article{ATLASR2,
  title={A large, curated, open-source stroke neuroimaging dataset to improve lesion segmentation algorithms},
  author={Liew, Sook-Lei and Lo, Bethany P and Donnelly, Miranda R and Zavaliangos-Petropulu, Artemis and Jeong, Jessica N and Barisano, Giuseppe and Hutton, Alexandre and Simon, Julia P and Juliano, Julia M and Suri, Anisha and others},
  journal={Scientific data},
  volume={9},
  number={1},
  pages={320},
  year={2022},
  publisher={Nature Publishing Group UK London}
}

@article{autoPET,
  title={A whole-body FDG-PET/CT Dataset with manually annotated Tumor Lesions},
  author={Gatidis, Sergios and Hepp, Tobias and Fr{\"u}h, Marcel and La Foug{\`e}re, Christian and Nikolaou, Konstantin and Pfannenberg, Christina and Sch{\"o}lkopf, Bernhard and K{\"u}stner, Thomas and Cyran, Clemens and Rubin, Daniel},
  journal={Scientific Data},
  volume={9},
  number={1},
  pages={601},
  year={2022},
  publisher={Nature Publishing Group UK London}
}

@article{Brain_Atlas,
  title={Construction of a consistent high-definition spatio-temporal atlas of the developing brain using adaptive kernel regression},
  author={Serag, Ahmed and Aljabar, Paul and Ball, Gareth and Counsell, Serena J and Boardman, James P and Rutherford, Mary A and Edwards, A David and Hajnal, Joseph V and Rueckert, Daniel},
  journal={Neuroimage},
  volume={59},
  number={3},
  pages={2255--2265},
  year={2012},
  publisher={Elsevier}
}

@article{BrainPTM,
  title={Neural segmentation of seeding ROIs (sROIs) for pre-surgical brain tractography},
  author={Avital, Itzik and Nelkenbaum, Ilya and Tsarfaty, Galia and Konen, Eli and Kiryati, Nahum and Mayer, Arnaldo},
  journal={IEEE Transactions on Medical Imaging},
  volume={39},
  number={5},
  pages={1655--1667},
  year={2019},
  publisher={IEEE}
}

@article{BraTS2021,
  title={The rsna-asnr-miccai brats 2021 benchmark on brain tumor segmentation and radiogenomic classification},
  author={Baid, Ujjwal and Ghodasara, Satyam and Mohan, Suyash and Bilello, Michel and Calabrese, Evan and Colak, Errol and Farahani, Keyvan and Kalpathy-Cramer, Jayashree and Kitamura, Felipe C and Pati, Sarthak and others},
  journal={arXiv preprint arXiv:2107.02314},
  year={2021}
}

@article{BraTS2023GLI,
  title={The multimodal brain tumor image segmentation benchmark (BRATS)},
  author={Menze, Bjoern H and Jakab, Andras and Bauer, Stefan and Kalpathy-Cramer, Jayashree and Farahani, Keyvan and Kirby, Justin and Burren, Yuliya and Porz, Nicole and Slotboom, Johannes and Wiest, Roland and others},
  journal={IEEE transactions on medical imaging},
  volume={34},
  number={10},
  pages={1993--2024},
  year={2014},
  publisher={IEEE}
}

@misc{BraTS2023MEN,
      title={The ASNR-MICCAI Brain Tumor Segmentation (BraTS) Challenge 2023: Intracranial Meningioma}, 
      author={Dominic LaBella and Maruf Adewole and Michelle Alonso-Basanta and Talissa Altes and Syed Muhammad Anwar and Ujjwal Baid and Timothy Bergquist and Radhika Bhalerao and Sully Chen and Verena Chung and Gian-Marco Conte and Farouk Dako and James Eddy and Ivan Ezhov and Devon Godfrey and Fathi Hilal and Ariana Familiar and Keyvan Farahani and Juan Eugenio Iglesias and Zhifan Jiang and Elaine Johanson and Anahita Fathi Kazerooni and Collin Kent and John Kirkpatrick and Florian Kofler and Koen Van Leemput and Hongwei Bran Li and Xinyang Liu and Aria Mahtabfar and Shan McBurney-Lin and Ryan McLean and Zeke Meier and Ahmed W Moawad and John Mongan and Pierre Nedelec and Maxence Pajot and Marie Piraud and Arif Rashid and Zachary Reitman and Russell Takeshi Shinohara and Yury Velichko and Chunhao Wang and Pranav Warman and Walter Wiggins and Mariam Aboian and Jake Albrecht and Udunna Anazodo and Spyridon Bakas and Adam Flanders and Anastasia Janas and Goldey Khanna and Marius George Linguraru and Bjoern Menze and Ayman Nada and Andreas M Rauschecker and Jeff Rudie and Nourel Hoda Tahon and Javier Villanueva-Meyer and Benedikt Wiestler and Evan Calabrese},
      year={2023},
      eprint={2305.07642},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@misc{BraTS2023MET,
      title={The Brain Tumor Segmentation (BraTS-METS) Challenge 2023: Brain Metastasis Segmentation on Pre-treatment MRI}, 
      author={Ahmed W. Moawad and Anastasia Janas and Ujjwal Baid and Divya Ramakrishnan and Leon Jekel and Kiril Krantchev and Harrison Moy and Rachit Saluja and Klara Osenberg and Klara Wilms and Manpreet Kaur and Arman Avesta and Gabriel Cassinelli Pedersen and Nazanin Maleki and Mahdi Salimi and Sarah Merkaj and Marc von Reppert and Niklas Tillmans and Jan Lost and Khaled Bousabarah and Wolfgang Holler and MingDe Lin and Malte Westerhoff and Ryan Maresca and Katherine E. Link and Nourel hoda Tahon and Daniel Marcus and Aristeidis Sotiras and Pamela LaMontagne and Strajit Chakrabarty and Oleg Teytelboym and Ayda Youssef and Ayaman Nada and Yuri S. Velichko and Nicolo Gennaro and Connectome Students and Group of Annotators and Justin Cramer and Derek R. Johnson and Benjamin Y. M. Kwan and Boyan Petrovic and Satya N. Patro and Lei Wu and Tiffany So and Gerry Thompson and Anthony Kam and Gloria Guzman Perez-Carrillo and Neil Lall and Group of Approvers and Jake Albrecht and Udunna Anazodo and Marius George Lingaru and Bjoern H Menze and Benedikt Wiestler and Maruf Adewole and Syed Muhammad Anwar and Dominic Labella and Hongwei Bran Li and Juan Eugenio Iglesias and Keyvan Farahani and James Eddy and Timothy Bergquist and Verena Chung and Russel Takeshi Shinohara and Farouk Dako and Walter Wiggins and Zachary Reitman and Chunhao Wang and Xinyang Liu and Zhifan Jiang and Koen Van Leemput and Marie Piraud and Ivan Ezhov and Elaine Johanson and Zeke Meier and Ariana Familiar and Anahita Fathi Kazerooni and Florian Kofler and Evan Calabrese and Sanjay Aneja and Veronica Chiang and Ichiro Ikuta and Umber Shafique and Fatima Memon and Gian Marco Conte and Spyridon Bakas and Jeffrey Rudie and Mariam Aboian},
      year={2023},
      eprint={2306.00838},
      archivePrefix={arXiv},
      primaryClass={q-bio.OT}
}

@misc{BraTS2023PED,
      title={The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs)}, 
      author={Anahita Fathi Kazerooni and Nastaran Khalili and Xinyang Liu and Debanjan Haldar and Zhifan Jiang and Syed Muhammed Anwar and Jake Albrecht and Maruf Adewole and Udunna Anazodo and Hannah Anderson and Sina Bagheri and Ujjwal Baid and Timothy Bergquist and Austin J. Borja and Evan Calabrese and Verena Chung and Gian-Marco Conte and Farouk Dako and James Eddy and Ivan Ezhov and Ariana Familiar and Keyvan Farahani and Shuvanjan Haldar and Juan Eugenio Iglesias and Anastasia Janas and Elaine Johansen and Blaise V Jones and Florian Kofler and Dominic LaBella and Hollie Anne Lai and Koen Van Leemput and Hongwei Bran Li and Nazanin Maleki and Aaron S McAllister and Zeke Meier and Bjoern Menze and Ahmed W Moawad and Khanak K Nandolia and Julija Pavaine and Marie Piraud and Tina Poussaint and Sanjay P Prabhu and Zachary Reitman and Andres Rodriguez and Jeffrey D Rudie and Ibraheem Salman Shaikh and Lubdha M. Shah and Nakul Sheth and Russel Taki Shinohara and Wenxin Tu and Karthik Viswanathan and Chunhao Wang and Jeffrey B Ware and Benedikt Wiestler and Walter Wiggins and Anna Zapaishchykova and Mariam Aboian and Miriam Bornhorst and Peter de Blank and Michelle Deutsch and Maryam Fouladi and Lindsey Hoffman and Benjamin Kann and Margot Lazow and Leonie Mikael and Ali Nabavizadeh and Roger Packer and Adam Resnick and Brian Rood and Arastoo Vossough and Spyridon Bakas and Marius George Linguraru},
      year={2023},
      eprint={2305.17033},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

@misc{BraTS2023SSA,
      title={The Brain Tumor Segmentation (BraTS) Challenge 2023: Glioma Segmentation in Sub-Saharan Africa Patient Population (BraTS-Africa)}, 
      author={Maruf Adewole and Jeffrey D. Rudie and Anu Gbadamosi and Oluyemisi Toyobo and Confidence Raymond and Dong Zhang and Olubukola Omidiji and Rachel Akinola and Mohammad Abba Suwaid and Adaobi Emegoakor and Nancy Ojo and Kenneth Aguh and Chinasa Kalaiwo and Gabriel Babatunde and Afolabi Ogunleye and Yewande Gbadamosi and Kator Iorpagher and Evan Calabrese and Mariam Aboian and Marius Linguraru and Jake Albrecht and Benedikt Wiestler and Florian Kofler and Anastasia Janas and Dominic LaBella and Anahita Fathi Kzerooni and Hongwei Bran Li and Juan Eugenio Iglesias and Keyvan Farahani and James Eddy and Timothy Bergquist and Verena Chung and Russell Takeshi Shinohara and Walter Wiggins and Zachary Reitman and Chunhao Wang and Xinyang Liu and Zhifan Jiang and Ariana Familiar and Koen Van Leemput and Christina Bukas and Maire Piraud and Gian-Marco Conte and Elaine Johansson and Zeke Meier and Bjoern H Menze and Ujjwal Baid and Spyridon Bakas and Farouk Dako and Abiodun Fatade and Udunna C Anazodo},
      year={2023},
      eprint={2305.19369},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

@inproceedings{BTCVAbdomen,
  title={Miccai multi-atlas labeling beyond the cranial vault--workshop and challenge},
  author={Landman, Bennett and Xu, Zhoubing and Igelsias, J and Styner, Martin and Langerak, T and Klein, Arno},
  booktitle={Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge},
  volume={5},
  pages={12},
  year={2015}
}

@inproceedings{BTCVCervix,
  title={Miccai multi-atlas labeling beyond the cranial vault--workshop and challenge},
  author={Landman, Bennett and Xu, Zhoubing and Igelsias, J and Styner, Martin and Langerak, T and Klein, Arno},
  booktitle={Proc. MICCAI Multi-Atlas Labeling Beyond Cranial Vault—Workshop Challenge},
  volume={5},
  pages={12},
  year={2015}
}

@article{CHAOS,
  title={CHAOS challenge-combined (CT-MR) healthy abdominal organ segmentation},
  author={Kavur, A Emre and Gezer, N Sinem and Bar{\i}{\c{s}}, Mustafa and Aslan, Sinem and Conze, Pierre-Henri and Groza, Vladimir and Pham, Duc Duy and Chatterjee, Soumick and Ernst, Philipp and {\"O}zkan, Sava{\c{s}} and others},
  journal={Medical Image Analysis},
  volume={69},
  pages={101950},
  year={2021},
  publisher={Elsevier}
}

@misc{CMRxMotion,
      title={The Extreme Cardiac MRI Analysis Challenge under Respiratory Motion (CMRxMotion)}, 
      author={Shuo Wang and Chen Qin and Chengyan Wang and Kang Wang and Haoran Wang and Chen Chen and Cheng Ouyang and Xutong Kuang and Chengliang Dai and Yuanhan Mo and Zhang Shi and Chenchen Dai and Xinrong Chen and He Wang and Wenjia Bai},
      year={2022},
      eprint={2210.06385},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

@inproceedings{Couinaud,
  title={Automatic couinaud segmentation from CT volumes on liver using GLC-UNet},
  author={Tian, Jiang and Liu, Li and Shi, Zhongchao and Xu, Feiyu},
  booktitle={International Workshop on Machine Learning in Medical Imaging},
  pages={274--282},
  year={2019},
  organization={Springer}
}

@article{COVID19,
  title={Toward data-efficient learning: A benchmark for COVID-19 CT lung and infection segmentation},
  author={Ma, Jun and Wang, Yixin and An, Xingle and Ge, Cheng and Yu, Ziqi and Chen, Jianan and Zhu, Qiongjie and Dong, Guoqiang and He, Jian and He, Zhiqiang and others},
  journal={Medical physics},
  volume={48},
  number={3},
  pages={1197--1210},
  year={2021},
  publisher={Wiley Online Library}
}

@article{CrossMoDA2021,
  title={CrossMoDA 2021 challenge: Benchmark of cross-modality domain adaptation techniques for vestibular schwannoma and cochlea segmentation},
  author={Reuben Dorent and Aaron Kujawa and Marina Ivory and Spyridon Bakas and Nicola Rieke and Samuel Joutard and Ben Glocker and Jorge Cardoso and Marc Modat and Kayhan Batmanghelich and Arseniy Belkov and Maria Baldeon Calisto and Jae Won Choi and Benoit M. Dawant and Hexin Dong and Sergio Escalera and Yubo Fan and Lasse Hansen and Mattias P. Heinrich and Smriti Joshi and Victoriya Kashtanova and Hyeon Gyu Kim and Satoshi Kondo and Christian N. Kruse and Susana K. Lai-Yuen and Hao Li and Han Liu and Buntheng Ly and Ipek Oguz and Hyungseob Shin and Boris Shirokikh and Zixian Su and Guotai Wang and Jianghao Wu and Yanwu Xu and Kai Yao and Li Zhang and Sébastien Ourselin and Jonathan Shapey and Tom Vercauteren},
  journal={Medical Image Analysis},
  volume={83},
  pages={102628},
  year={2023},
  doi={10.1016/j.media.2022.102628},
  publisher={Elsevier}
}

@article{CTORG,
  title={CT-ORG, a new dataset for multiple organ segmentation in computed tomography},
  author={Rister, Blaine and Yi, Darvin and Shivakumar, Kaushik and Nobashi, Tomomi and Rubin, Daniel L},
  journal={Scientific Data},
  volume={7},
  number={1},
  pages={381},
  year={2020},
  publisher={Nature Publishing Group UK London}
}

@article{CTPelvic1K,
  title={Deep learning to segment pelvic bones: large-scale CT datasets and baseline models},
  author={Liu, Pengbo and Han, Hu and Du, Yuanqi and Zhu, Heqin and Li, Yinhao and Gu, Feng and Xiao, Honghu and Li, Jun and Zhao, Chunpeng and Xiao, Li and Wu, Xinbao and Zhou, S. Kevin},
  journal={International Journal of Computer Assisted Radiology and Surgery},
  volume={16},
  number={5},
  pages={749},
  year={2021},
  doi={10.1007/s11548-021-02363-8},
  publisher={Springer}
}

@article{DAPAtlas,
  title={Towards unifying anatomy segmentation: automated generation of a full-body CT dataset via knowledge aggregation and anatomical guidelines},
  author={Jaus, Alexander and Seibold, Constantin and Hermann, Kelsey and Walter, Alexandra and Giske, Kristina and Haubold, Johannes and Kleesiek, Jens and Stiefelhagen, Rainer},
  journal={arXiv preprint arXiv:2307.13375},
  year={2023}
}

@article{FeTA2022,
  title={An automatic multi-tissue human fetal brain segmentation benchmark using the fetal tissue annotation dataset},
  author={Payette, Kelly and de Dumast, Priscille and Kebiri, Hamza and Ezhov, Ivan and Paetzold, Johannes C and Shit, Suprosanna and Iqbal, Asim and Khan, Romesa and Kottke, Raimund and Grehten, Patrice and others},
  journal={Scientific data},
  volume={8},
  number={1},
  pages={167},
  year={2021},
  publisher={Nature Publishing Group UK London}
}

@article{FLARE22,
  title={Unleashing the Strengths of Unlabeled Data in Pan-cancer Abdominal Organ Quantification: the FLARE22 Challenge},
  author={Jun Ma and Yao Zhang and Song Gu and Cheng Ge and Shihao Ma and Adamo Young and Cheng Zhu and Kangkang Meng and Xin Yang and Ziyan Huang and Fan Zhang and Wentao Liu and YuanKe Pan and Shoujin Huang and Jiacheng Wang and Mingze Sun and Weixin Xu and Dengqiang Jia and Jae Won Choi and Natália Alves and Bram de Wilde and Gregor Koehler and Yajun Wu and Manuel Wiesenfarth and Qiongjie Zhu and Guoqiang Dong and Jian He and the FLARE Challenge Consortium and Bo Wang},
  journal={arXiv preprint arXiv:2308.05862},
  year={2023}
}

@article{FUMPE,
  title={A new dataset of computed-tomography angiography images for computer-aided detection of pulmonary embolism},
  author={Mojtaba Masoudi and Hamid-Reza Pourreza and Mahdi Saadatmand-Tarzjan and Noushin Eftekhari and Fateme Shafiee Zargar and Masoud Pezeshki Rad},
  journal={Scientific Data},
  volume={5},
  year={2018},
  publisher={Nature Publishing Group}
}

@article{HANSeg,
  title={HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset},
  author={Podobnik, Ga{\v{s}}per and Strojan, Primo{\v{z}} and Peterlin, Primo{\v{z}} and Ibragimov, Bulat and Vrtovec, Toma{\v{z}}},
  journal={Medical physics},
  volume={50},
  number={3},
  pages={1917--1927},
  year={2023},
  publisher={Wiley Online Library}
}

@incollection{HECTOR2022,
  title={Overview of the HECKTOR Challenge at MICCAI 2022: Automatic Head and Neck Tumor Segmentation and Outcome Prediction in PET/CT},
  author={Andrearczyk, V. and Oreiller, V. and Hatt, M. and Depeursinge, A.},
  booktitle={Head and Neck Tumor Segmentation and Outcome Prediction. HECKTOR 2022. Lecture Notes in Computer Science},
  volume={13626},
  year={2023},
  publisher={Springer},
  address={Cham},
  editor={Andrearczyk, V. and Oreiller, V. and Hatt, M. and Depeursinge, A.},
  doi={10.1007/978-3-031-27420-6_1}
}

@article{INSTANCE,
  title={The state-of-the-art 3D anisotropic intracranial hemorrhage segmentation on non-contrast head CT: The INSTANCE challenge},
  author={Li, Xiangyu and Luo, Gongning and Wang, Kuanquan and Wang, Hongyu and Liu, Jun and Liang, Xinjie and Jiang, Jie and Song, Zhenghao and Zheng, Chunyue and Chi, Haokai and others},
  journal={arXiv preprint arXiv:2301.03281},
  year={2023}
}

@article{ISLES2022,
  title={ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset},
  author={Hernandez Petzsche, Moritz R and de la Rosa, Ezequiel and Hanning, Uta and Wiest, Roland and Valenzuela, Waldo and Reyes, Mauricio and Meyer, Maria and Liew, Sook-Lei and Kofler, Florian and Ezhov, Ivan and others},
  journal={Scientific data},
  volume={9},
  number={1},
  pages={762},
  year={2022},
  publisher={Nature Publishing Group UK London}
}

@article{KITS19,
  title={The state of the art in kidney and kidney tumor segmentation in contrast-enhanced CT imaging: Results of the KiTS19 Challenge},
  author={Heller, Nicholas and Isensee, Fabian and Maier-Hein, Klaus H and Hou, Xiaoshuai and Xie, Chunmei and Li, Fengyi and Nan, Yang and Mu, Guangrui and Lin, Zhiyong and Han, Miofei and others},
  journal={Medical Image Analysis},
  pages={101821},
  year={2020},
  publisher={Elsevier}
}

@article{KITS21,
  title={The KiTS21 Challenge: Automatic segmentation of kidneys, renal tumors, and renal cysts in corticomedullary-phase CT},
  author={Heller, Nicholas and Isensee, Fabian and Trofimova, Dasha and Tejpaul, Resha and Zhao, Zhongchen and Chen, Huai and Wang, Lisheng and Golts, Alex and Khapun, Daniel and Shats, Daniel and others},
  journal={arXiv preprint arXiv:2307.01984},
  year={2023}
}

@misc{KiTS23,
  title = {The 2023 Kidney and Kidney Tumor Segmentation Challenge (KiTS23)},
  howpublished = {\url{https://kits-challenge.org/kits23/}},
  note = {Accessed: 2024-04-07},
  organization = {University of Minnesota, Helmholtz Imaging at the German Cancer Research Center (DKFZ), Cleveland Clinic's Urologic Cancer Program},
  year = {2023}
}

@article{KiPA22,
  title={Meta grayscale adaptive network for 3D integrated renal structures segmentation},
  author={He, Yuting and Yang, Guanyu and Yang, Jian and Ge, Rongjun and Kong, Youyong and Zhu, Xiaomei and Zhang, Shaobo and Shao, Pengfei and Shu, Huazhong and Dillenseger, Jean-Louis and others},
  journal={Medical image analysis},
  volume={71},
  pages={102055},
  year={2021},
  publisher={Elsevier}
}

@article{LAScarQS2022,
  title={AtrialJSQnet: a new framework for joint segmentation and quantification of left atrium and scars incorporating spatial and shape information},
  author={Li, Lei and Zimmer, Veronika A and Schnabel, Julia A and Zhuang, Xiahai},
  journal={Medical image analysis},
  volume={76},
  pages={102303},
  year={2022},
  publisher={Elsevier}
}

@article{LNDb,
  title={LNDb challenge on automatic lung cancer patient management},
  author={Pedrosa, Jo{\~a}o and Aresta, Guilherme and Ferreira, Carlos and Atwal, Gurraj and Phoulady, Hady Ahmady and Chen, Xiaoyu and Chen, Rongzhen and Li, Jiaoliang and Wang, Liansheng and Galdran, Adrian and others},
  journal={Medical image analysis},
  volume={70},
  pages={102027},
  year={2021},
  publisher={Elsevier}
}

@article{LUNA16,
  title={Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: the LUNA16 challenge},
  author={Setio, Arnaud Arindra Adiyoso and Traverso, Alberto and De Bel, Thomas and Berens, Moira SN and Van Den Bogaard, Cas and Cerello, Piergiorgio and Chen, Hao and Dou, Qi and Fantacci, Maria Evelina and Geurts, Bram and others},
  journal={Medical image analysis},
  volume={42},
  pages={1--13},
  year={2017},
  publisher={Elsevier}
}

@article{MMWHS,
  title={Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI},
  author={Zhuang, Xiahai and Shen, Juan},
  journal={Medical image analysis},
  volume={31},
  pages={77--87},
  year={2016},
  publisher={Elsevier}
}

@article{MRSpineSeg,
  title={SpineParseNet: spine parsing for volumetric MR image by a two-stage segmentation framework with semantic image representation},
  author={Pang, Shumao and Pang, Chunlan and Zhao, Lei and Chen, Yangfan and Su, Zhihai and Zhou, Yujia and Huang, Meiyan and Yang, Wei and Lu, Hai and Feng, Qianjin},
  journal={IEEE Transactions on Medical Imaging},
  volume={40},
  number={1},
  pages={262--273},
  year={2020},
  publisher={IEEE}
}

@article{MSD,
  title={The medical segmentation decathlon},
  author={Antonelli, Michela and Reinke, Annika and Bakas, Spyridon and Farahani, Keyvan and Kopp-Schneider, Annette and Landman, Bennett A and Litjens, Geert and Menze, Bjoern and Ronneberger, Olaf and Summers, Ronald M and others},
  journal={Nature communications},
  volume={13},
  number={1},
  pages={4128},
  year={2022},
  publisher={Nature Publishing Group UK London}
}

@article{MyoPS2020,
  title={MyoPS-Net: Myocardial pathology segmentation with flexible combination of multi-sequence CMR images},
  author={Junyi Qiu and Lei Li and Sihan Wang and Ke Zhang and Yinyin Chen and Shan Yang and Xiahai Zhuang},
  journal={Medical Image Analysis},
  volume={84},
  pages={102694},
  year={2023},
  doi={10.1016/j.media.2022.102694},
  publisher={Elsevier}
}

@article{NSCLC,
  title={A radiogenomic dataset of non-small cell lung cancer},
  author={Bakr, Shaimaa and Gevaert, Olivier and Echegaray, Sebastian and Ayers, Kelsey and Zhou, Mu and Shafiq, Majid and Zheng, Hong and Benson, Jalen Anthony and Zhang, Weiruo and Leung, Ann NC and others},
  journal={Scientific Data},
  volume={5},
  number={1},
  pages={1--9},
  year={2018},
  publisher={Nature Publishing Group}
}

@misc{PancreasCT,
  title={Data From Pancreas-CT (Version 2)},
  author={Roth, H. and Farag, A. and Turkbey, E. B. and Lu, L. and Liu, J. and Summers, R. M.},
  year={2016},
  publisher={The Cancer Imaging Archive},
  version={2},
  doi={10.7937/K9/TCIA.2016.tNB1kqBU}
}

@article{PARSE2022,
  title={Efficient automatic segmentation for multi-level pulmonary arteries: The PARSE challenge},
  author={Luo, Gongning and Wang, Kuanquan and Liu, Jun and Li, Shuo and Liang, Xinjie and Li, Xiangyu and Gan, Shaowei and Wang, Wei and Dong, Suyu and Wang, Wenyi and others},
  journal={arXiv preprint arXiv:2304.03708},
  year={2023}
}

@article{PDDCA,
  title={Evaluation of segmentation methods on head and neck CT: auto-segmentation challenge 2015},
  author={Raudaschl, Patrik F and Zaffino, Paolo and Sharp, Gregory C and Spadea, Maria Francesca and Chen, Antong and Dawant, Benoit M and Albrecht, Thomas and Gass, Tobias and Langguth, Christoph and L{\"u}thi, Marcel and others},
  journal={Medical physics},
  volume={44},
  number={5},
  pages={2020--2036},
  year={2017},
  publisher={Wiley Online Library}
}

@article{PROMISE12,
  title={Evaluation of prostate segmentation algorithms for MRI: the PROMISE12 challenge},
  author={Litjens, Geert and Toth, Robert and Van De Ven, Wendy and Hoeks, Caroline and Kerkstra, Sjoerd and Van Ginneken, Bram and Vincent, Graham and Guillard, Gwenael and Birbeck, Neil and Zhang, Jindang and others},
  journal={Medical Image Analysis},
  volume={18},
  number={2},
  pages={359--373},
  year={2014},
  publisher={Elsevier}
}

@article{SEGA,
  title={AVT: Multicenter aortic vessel tree CTA dataset collection with ground truth segmentation masks},
  author={Radl, Lukas and Jin, Yuan and Pepe, Antonio and Li, Jianning and Gsaxner, Christina and Zhao, Fen-hua and Egger, Jan},
  journal={Data in brief},
  volume={40},
  pages={107801},
  year={2022},
  publisher={Elsevier}
}

@article{SegRap2023,
  title={SegRap2023: A benchmark of organs-at-risk and gross tumor volume segmentation for radiotherapy planning of nasopharyngeal carcinoma},
  author={Luo, Xiangde and Fu, Jia and Zhong, Yunxin and Liu, Shuolin and Han, Bing and Astaraki, Mehdi and Bendazzoli, Simone and Toma-Dasu, Iuliana and Ye, Yiwen and Chen, Ziyang and others},
  journal={arXiv preprint arXiv:2312.09576},
  year={2023}
}

@inproceedings{SegTHOR,
  title={SegTHOR: Segmentation of Thoracic Organs at Risk in CT Images},
  author={Lambert, Zo{\'e} and Petitjean, Caroline and Dubray, Bernard and Ruan, Su},
  booktitle={2020 Tenth International Conference on Image Processing Theory, Tools and Applications (IPTA)},
  pages={1--6},
  year={2020},
  organization={IEEE}
}

@inproceedings{SKI10,
  title={Learning local shape and appearance for segmentation of knee cartilage in 3D MRI},
  author={Lee, Soochahn and Shim, Hackjoon and Park, Sang Hyun and Yun, Il Dong and Lee, Sang Uk},
  booktitle={Medical Image Computing and Computer Assisted Intervention (MICCAI)},
  pages={231--240},
  year={2010}
}

@article{SLIVER07,
  title={Comparison and evaluation of methods for liver segmentation from CT datasets},
  author={Heimann, Tobias and Van Ginneken, Bram and Styner, Martin A and Arzhaeva, Yulia and Aurich, Volker and Bauer, Christian and Beck, Andreas and Becker, Christoph and Beichel, Reinhard and Bekes, Gy{\"o}rgy and others},
  journal={IEEE Transactions on Medical Imaging},
  volume={28},
  number={8},
  pages={1251--1265},
  year={2009},
  publisher={IEEE}
}

@article{ToothFairy,
  title={Deep Segmentation of the Mandibular Canal: a New 3D Annotated Dataset of CBCT Volumes},
  author={Cipriano, Marco and Allegretti, Stefano and Bolelli, Federico and Di Bartolomeo, Mattia and Pollastri, Federico and Pellacani, Arrigo and Minafra, Paolo and Anesi, Alexandre and Grana, Costantino},
  journal={IEEE Access},
  volume={10},
  pages={11500--11510},
  year={2022},
  publisher={IEEE},
  doi={10.1109/ACCESS.2022.3144840}
}

@article{Totalsegmentator,
  title={TotalSegmentator: Robust segmentation of 104 anatomic structures in CT images},
  author={Wasserthal, Jakob and Breit, Hanns-Christian and Meyer, Manfred T and Pradella, Maurice and Hinck, Daniel and Sauter, Alexander W and Heye, Tobias and Boll, Daniel T and Cyriac, Joshy and Yang, Shan and others},
  journal={Radiology: Artificial Intelligence},
  volume={5},
  number={5},
  year={2023},
  publisher={Radiological Society of North America}
}

@article{VerSe,
  title={VerSe: A Vertebrae labelling and segmentation benchmark for multi-detector CT images},
  author={Sekuboyina, Anjany and Husseini, Malek E and Bayat, Amirhossein and L{\"o}ffler, Maximilian and Liebl, Hans and Li, Hongwei and Tetteh, Giles and Kuka{\v{c}}ka, Jan and Payer, Christian and {\v{S}}tern, Darko and others},
  journal={Medical image analysis},
  volume={73},
  pages={102166},
  year={2021},
  publisher={Elsevier}
}

@article{WORD,
  title={WORD: A large scale dataset, benchmark and clinical applicable study for abdominal organ segmentation from CT image},
  author={Luo, X and Liao, W and Xiao, J and Chen, J and Song, T and Zhang, X and Li, K and Metaxas, DN and Wang, G and Zhang, S},
  journal={Medical Image Analysis},
  volume={82},
  pages={102642},
  year={2022},
  publisher={Elsevier}
}

@article{WMH,
  title={Standardized assessment of automatic segmentation of white matter hyperintensities and results of the WMH segmentation challenge},
  author={Kuijf, Hugo J and Biesbroek, J Matthijs and De Bresser, Jeroen and Heinen, Rutger and Andermatt, Simon and Bento, Mariana and Berseth, Matt and Belyaev, Mikhail and Cardoso, M Jorge and Casamitjana, Adria and others},
  journal={IEEE Transactions on Medical Imaging},
  volume={38},
  number={11},
  pages={2556--2568},
  year={2019},
  publisher={IEEE}
}
