229-swin-umamba-mamba-based-unet-with-imagenet-based-pretraining
https://github.com/szu-advtech-2024/229-swin-umamba-mamba-based-unet-with-imagenet-based-pretraining
Science Score: 41.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ○ .zenodo.json file
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (9.7%) to scientific vocabulary
Scientific Fields
- Artificial Intelligence and Machine Learning
- Computer Science (60% confidence)
Last synced: 4 months ago
Repository
Basic Info
- Host: GitHub
- Owner: SZU-AdvTech-2024
- Default Branch: main
- Size: 0 Bytes
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
- Created: 12 months ago
- Last pushed: 12 months ago
Metadata Files
Citation
https://github.com/SZU-AdvTech-2024/229-Swin-UMamba-Mamba-based-UNet-with-ImageNet-based-Pretraining/blob/main/
Owner
- Name: SZU-AdvTech-2024
- Login: SZU-AdvTech-2024
- Kind: organization
- Repositories: 1
- Profile: https://github.com/SZU-AdvTech-2024
Citation (citation.txt)
@article{REPO229,
  author  = "Liu, J. et al.",
  journal = "Medical Image Computing and Computer Assisted Intervention – MICCAI 2024",
  title   = "{Swin-UMamba: Mamba-Based UNet with ImageNet-Based Pretraining}",
  volume  = "15009",
  year    = "2024"
}
GitHub Events
Total
- Watch event: 2
- Push event: 2
- Create event: 3
Last Year
- Watch event: 2
- Push event: 2
- Create event: 3
# Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining
Official repository for: *[Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining](https://arxiv.org/abs/2402.03302)*
## Main Results
- AbdomenMRI
- Endoscopy
- Microscopy
## Installation
**Step-1:** Create a new conda environment & install requirements
```shell
conda create -n swin_umamba python=3.10
conda activate swin_umamba
pip install torch==2.0.1 torchvision==0.15.2
pip install causal-conv1d==1.1.1
pip install mamba-ssm
pip install torchinfo timm numba
```
**Step-2:** Install Swin-UMamba
```shell
git clone https://github.com/JiarunLiu/Swin-UMamba
cd Swin-UMamba/swin_umamba
pip install -e .
```
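As a quick post-install sanity check (not part of the official instructions), the one-liner below assumes the `swin_umamba` environment is active; `mamba-ssm` and `causal-conv1d` ship compiled CUDA extensions, so a broken build typically surfaces as an `ImportError` here.
```shell
# Sanity check: imports fail if the CUDA extensions did not build correctly
python -c "import torch, mamba_ssm; print(torch.__version__, torch.cuda.is_available())"
```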
## Prepare data & pretrained model
**Dataset:**
We use the same data and processing strategy as U-Mamba. Download the datasets from [U-Mamba](https://github.com/bowang-lab/U-Mamba) and put them into the data folder, then preprocess each dataset with the following command:
```shell
nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
```
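As a concrete example (with assumed values, since the actual ID depends on how the downloaded dataset folders are named), preprocessing the AbdomenMR dataset might look like this, with the nnU-Net environment variables pointed into the repository's data folder:
```shell
# Illustrative only: dataset ID 702 and the ./data paths are assumptions
export nnUNet_raw=data/nnUNet_raw
export nnUNet_preprocessed=data/nnUNet_preprocessed
export nnUNet_results=data/nnUNet_results
nnUNetv2_plan_and_preprocess -d 702 --verify_dataset_integrity
```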
**ImageNet pretrained model:**
We use the ImageNet-pretrained VMamba-Tiny model from [VMamba](https://github.com/MzeroMiko/VMamba). Download the checkpoint and place it at `data/pretrained/vmamba/vmamba_tiny_e292.pth`:
```shell
mkdir -p data/pretrained/vmamba
wget https://github.com/MzeroMiko/VMamba/releases/download/%2320240218/vssmtiny_dp01_ckpt_epoch_292.pth
mv vssmtiny_dp01_ckpt_epoch_292.pth data/pretrained/vmamba/vmamba_tiny_e292.pth
```
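To verify the download before a long training run, the checkpoint can be inspected with a short one-liner. This is a sketch that assumes a standard PyTorch checkpoint with weights possibly nested under a `model` key, a common layout for VMamba releases:
```shell
# Sketch: count tensors in the checkpoint (the "model" key is an assumption)
python -c "import torch; c = torch.load('data/pretrained/vmamba/vmamba_tiny_e292.pth', map_location='cpu'); s = c.get('model', c); print(len(s), 'entries')"
```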
## Training
Use the following commands to train and evaluate Swin-UMamba:
```shell
# AbdomenMR dataset
bash scripts/train_AbdomenMR.sh MODEL_NAME
# Endoscopy dataset
bash scripts/train_Endoscopy.sh MODEL_NAME
# Microscopy dataset
bash scripts/train_Microscopy.sh MODEL_NAME
```
Here `MODEL_NAME` can be:
- `nnUNetTrainerSwinUMamba`: Swin-UMamba model with ImageNet pretraining
- `nnUNetTrainerSwinUMambaD`: Swin-UMamba$\dagger$ model with ImageNet pretraining
- `nnUNetTrainerSwinUMambaScratch`: Swin-UMamba model without ImageNet pretraining
- `nnUNetTrainerSwinUMambaDScratch`: Swin-UMamba$\dagger$ model without ImageNet pretraining
You can download our model checkpoints [here](https://drive.google.com/drive/folders/1zOt0ZfQPjoPdY37NfLKevYs4x5eClThN?usp=sharing).
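For reference, the training scripts wrap standard nnU-Net v2 training, so a single run has roughly the shape below; the dataset ID (702) and the `2d` configuration are illustrative assumptions, and the exact arguments live in the `scripts/` files.
```shell
# Rough shape of one training run (illustrative; see scripts/ for the real arguments)
nnUNetv2_train 702 2d all -tr nnUNetTrainerSwinUMamba
```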
## Acknowledgements
We thank the authors of [nnU-Net](https://github.com/MIC-DKFZ/nnUNet), [Mamba](https://github.com/state-spaces/mamba), [UMamba](https://github.com/bowang-lab/U-Mamba), [VMamba](https://github.com/MzeroMiko/VMamba), and [Swin-Unet](https://github.com/HuCaoFighting/Swin-Unet) for making their valuable code & data publicly available.
## Citation
```
@article{Swin-UMamba,
title={Swin-UMamba: Mamba-based UNet with ImageNet-based pretraining},
author={Jiarun Liu and Hao Yang and Hong-Yu Zhou and Yan Xi and Lequan Yu and Yizhou Yu and Yong Liang and Guangming Shi and Shaoting Zhang and Hairong Zheng and Shanshan Wang},
journal={arXiv preprint arXiv:2402.03302},
year={2024}
}
```