This repository contains the code for Swin UNETR [1,2]. Swin UNETR achieves state-of-the-art results on the Medical Segmentation Decathlon (MSD) and the Beyond the Cranial Vault (BTCV) segmentation challenge datasets. In [1], a novel methodology is devised for pre-training the Swin UNETR backbone in a self-supervised manner. We provide the option of training Swin UNETR by fine-tuning from pre-trained self-supervised weights or from scratch.
A tutorial for BTCV multi-organ segmentation using the Swin UNETR model is provided in the following link.
Dependencies can be installed using:
```bash
pip install -r requirements.txt
```
Please download the self-supervised pre-trained weights for the Swin UNETR backbone (CVPR paper [1]) from this link.
We provide several models pre-trained on the BTCV dataset below.
| Name | Dice (overlap=0.7) | Dice (overlap=0.5) | Feature Size | # params (M) | Self-Supervised Pre-trained | Download |
|---|---|---|---|---|---|---|
| Swin UNETR/Base | 82.25 | 81.86 | 48 | 62.1 | Yes | model |
| Swin UNETR/Small | 79.79 | 79.34 | 24 | 15.7 | No | model |
| Swin UNETR/Tiny | 72.05 | 70.35 | 12 | 4.0 | No | model |
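As a rough sanity check on the table above, the three variants scale their parameter count approximately with the square of `feature_size`, consistent with layer widths growing linearly in the feature size (this is an observation about the tabulated numbers, not an official sizing rule):

```python
# Parameter counts (millions) and feature sizes taken from the table above.
variants = {"Base": (48, 62.1), "Small": (24, 15.7), "Tiny": (12, 4.0)}

base_f, base_p = variants["Base"]
for name, (f, p) in variants.items():
    # Predicted ratio assumes params scale with feature_size squared.
    predicted = (f / base_f) ** 2
    actual = p / base_p
    print(f"{name}: feature_size={f}, params={p}M, "
          f"predicted ratio {predicted:.3f}, actual {actual:.3f}")
```

Halving the feature size roughly quarters the model, which is why the Small and Tiny variants fit much tighter memory budgets.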
The training data is from the BTCV challenge dataset.
We provide the JSON file used to train our models; please download it from the following link.
Once the JSON file is downloaded, place it in the same folder as the dataset. Note that you need to provide the location of your dataset directory using `--data_dir`.
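For reference, the JSON datalist follows the decathlon-style layout consumed by MONAI's `load_decathlon_datalist`; a minimal sketch of that structure is below (the file name and image/label paths are placeholders — check the provided file for the exact entries):

```python
import json

# Hypothetical decathlon-style datalist; paths are placeholders.
datalist = {
    "training": [
        {"image": "imagesTr/img0001.nii.gz", "label": "labelsTr/label0001.nii.gz"},
        {"image": "imagesTr/img0002.nii.gz", "label": "labelsTr/label0002.nii.gz"},
    ],
    "validation": [
        {"image": "imagesTr/img0035.nii.gz", "label": "labelsTr/label0035.nii.gz"},
    ],
}

# Written next to the dataset so --data_dir can resolve the relative paths.
with open("dataset_0.json", "w") as f:
    json.dump(datalist, f, indent=2)
```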
A Swin UNETR network with standard hyper-parameters for multi-organ semantic segmentation (BTCV dataset) can be defined as:
```python
model = SwinUNETR(
    img_size=(96, 96, 96),
    in_channels=1,
    out_channels=14,
    feature_size=48,
    use_checkpoint=True,
)
```
The above Swin UNETR model is used for CT images (1-channel input) with an input image size of (96, 96, 96), 14-class segmentation outputs, and a feature size of 48.
More details can be found in [1]. In addition, `use_checkpoint=True` enables gradient checkpointing for memory-efficient training.
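Gradient checkpointing trades compute for memory: the forward pass discards most intermediate activations and recomputes them from sparse checkpoints during backpropagation. A framework-agnostic toy sketch of the idea (not the actual `torch.utils.checkpoint` implementation used here):

```python
def run_layers(layers, x, checkpoint_every=None):
    """Forward pass storing activations; with checkpoint_every set,
    only every k-th activation is kept (plus the input)."""
    stored = {0: x}
    for i, layer in enumerate(layers, start=1):
        x = layer(x)
        if checkpoint_every is None or i % checkpoint_every == 0:
            stored[i] = x
    return x, stored

def recompute(layers, stored, target):
    """Recompute a dropped activation from the nearest earlier checkpoint,
    as the backward pass would under gradient checkpointing."""
    start = max(i for i in stored if i <= target)
    x = stored[start]
    for layer in layers[start:target]:
        x = layer(x)
    return x

layers = [lambda v, k=k: v + k for k in range(1, 9)]             # 8 toy "layers"
out_full, acts_full = run_layers(layers, 0)                      # stores 9 activations
out_ckpt, acts_ckpt = run_layers(layers, 0, checkpoint_every=4)  # stores only 3
assert out_ckpt == out_full                                      # same result
assert recompute(layers, acts_ckpt, 6) == acts_full[6]           # dropped one recovered
```

The output is unchanged; only the stored-activation footprint shrinks, at the cost of re-running part of the forward pass during backprop.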
Using the default values for hyper-parameters, the following command can be used to initiate training using PyTorch native AMP package:
```bash
python main.py \
--feature_size=32 \
--batch_size=1 \
--logdir=unetr_test \
--fold=0 \
--optim_lr=1e-4 \
--lrschedule=warmup_cosine \
--infer_overlap=0.5 \
--save_checkpoint \
--data_dir=/dataset/dataset0/
```
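The `--infer_overlap` flag controls how much neighboring sliding-window patches overlap at inference time; with a 96-voxel ROI and overlap 0.5, windows advance in strides of 48 voxels. A simplified sketch of how the window start positions along one axis might be computed (MONAI's `sliding_window_inference` handles the full 3-D scanning and blending):

```python
def window_starts(image_len, roi, overlap):
    """Start offsets of sliding windows along one axis (simplified)."""
    stride = max(1, int(roi * (1.0 - overlap)))
    starts = list(range(0, max(image_len - roi, 0) + 1, stride))
    # Ensure the final window reaches the end of the volume.
    if starts[-1] + roi < image_len:
        starts.append(image_len - roi)
    return starts

print(window_starts(160, 96, 0.5))  # stride 48 -> [0, 48, 64]
```

Higher overlap (e.g. 0.7 in the table above) yields more windows, smoother predictions, and slower inference.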
To train a Swin UNETR with self-supervised encoder weights on a single GPU with gradient checkpointing:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=48 --use_ssl_pretrained \
--roi_x=96 --roi_y=96 --roi_z=96 --use_checkpoint --batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
For example, with concrete values on a two-class dataset (here a prostate task):
```bash
python main.py --json_list=dataset.json --data_dir=./dataset/Task245_Prostate/ --feature_size=48 --roi_x=96 --roi_y=96 --roi_z=96 --use_checkpoint --batch_size=1 --max_epochs=2 --save_checkpoint --out_channels 2
```
To train a Swin UNETR with self-supervised encoder weights on multiple GPUs without gradient checkpointing:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=48 --use_ssl_pretrained \
--roi_x=96 --roi_y=96 --roi_z=96 --distributed --optim_lr=2e-4 --batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To train a Swin UNETR base model (feature size 48) from scratch on a single GPU with gradient checkpointing and without AMP:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=48 --noamp \
--roi_x=96 --roi_y=96 --roi_z=96 --use_checkpoint --batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To train a Swin UNETR small model (feature size 24) from scratch on a single GPU:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=24 \
--roi_x=96 --roi_y=96 --roi_z=96 --batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To train a Swin UNETR tiny model (feature size 12) from scratch on a single GPU:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=12 \
--roi_x=96 --roi_y=96 --roi_z=96 --batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To evaluate a Swin UNETR on a single GPU, place the model checkpoint in the `pretrained_models` folder and provide its name using `--pretrained_model_name`:
```bash
python test.py --json_list=<json-path> --data_dir=<data-path> --feature_size=<feature-size> \
--infer_overlap=0.5 --pretrained_model_name=<model-name>
```
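The Dice scores reported in the table above measure overlap between predicted and ground-truth segmentation masks. For reference, a minimal per-class Dice computation over flattened label maps (a simplification of what evaluation scripts such as `test.py` typically report, averaged over organs and cases):

```python
def dice_per_class(pred, truth, num_classes):
    """Per-class Dice: 2*|A∩B| / (|A| + |B|), skipping background class 0."""
    scores = []
    for c in range(1, num_classes):
        p = [v == c for v in pred]
        t = [v == c for v in truth]
        inter = sum(a and b for a, b in zip(p, t))
        denom = sum(p) + sum(t)
        # Convention: empty-vs-empty counts as a perfect match.
        scores.append(2 * inter / denom if denom else 1.0)
    return scores

pred  = [0, 1, 1, 2, 2, 0]
truth = [0, 1, 1, 2, 0, 0]
print(dice_per_class(pred, truth, num_classes=3))
```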
Please download the checkpoints for the models presented in the table above and place them in the `pretrained_models` folder.
Use the following commands for fine-tuning.
To fine-tune a Swin UNETR base model on a single GPU with gradient checkpointing:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=48 \
--pretrained_model_name='swin_unetr.base_5000ep_f48_lr2e-4_pretrained.pt' --resume_ckpt --use_checkpoint \
--batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To fine-tune a Swin UNETR small model on a single GPU with gradient checkpointing:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=24 \
--pretrained_model_name='swin_unetr.small_5000ep_f24_lr2e-4_pretrained.pt' --resume_ckpt --use_checkpoint \
--batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
To fine-tune a Swin UNETR tiny model on a single GPU with gradient checkpointing:
```bash
python main.py --json_list=<json-path> --data_dir=<data-path> --feature_size=12 \
--pretrained_model_name='swin_unetr.tiny_5000ep_f12_lr2e-4_pretrained.pt' --resume_ckpt --use_checkpoint \
--batch_size=<batch-size> --max_epochs=<total-num-epochs> --save_checkpoint
```
After running the evaluation commands above, `test.py` saves the segmentation outputs in the original spacing in a new folder based on the name of the experiment, which is passed by `--exp_name`.
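Saving in the original spacing implies resampling the network output (produced at the resampled training spacing) back onto the source image grid. A 1-D nearest-neighbor sketch of that idea, using hypothetical spacings (the real pipeline resamples 3-D label volumes, typically via MONAI transforms):

```python
def resample_nearest(labels, src_spacing, dst_spacing):
    """Nearest-neighbor resampling of a 1-D label array between voxel spacings.
    Nearest-neighbor is used so no new (interpolated) label values appear."""
    length_mm = len(labels) * src_spacing
    n_out = round(length_mm / dst_spacing)
    out = []
    for i in range(n_out):
        pos_mm = (i + 0.5) * dst_spacing          # center of output voxel
        src_idx = min(int(pos_mm / src_spacing), len(labels) - 1)
        out.append(labels[src_idx])
    return out

# 4 voxels at 1.5 mm spacing -> 6 voxels at the original 1.0 mm spacing.
print(resample_nearest([0, 1, 1, 2], src_spacing=1.5, dst_spacing=1.0))
```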
If you find this repository useful, please consider citing the following papers:
```bibtex
@inproceedings{tang2022self,
  title={Self-supervised pre-training of swin transformers for 3d medical image analysis},
  author={Tang, Yucheng and Yang, Dong and Li, Wenqi and Roth, Holger R and Landman, Bennett and Xu, Daguang and Nath, Vishwesh and Hatamizadeh, Ali},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={20730--20740},
  year={2022}
}

@article{hatamizadeh2022swin,
  title={Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images},
  author={Hatamizadeh, Ali and Nath, Vishwesh and Tang, Yucheng and Yang, Dong and Roth, Holger and Xu, Daguang},
  journal={arXiv preprint arXiv:2201.01266},
  year={2022}
}
```
[1]: Tang, Y., Yang, D., Li, W., Roth, H.R., Landman, B., Xu, D., Nath, V. and Hatamizadeh, A., 2022. Self-supervised pre-training of swin transformers for 3d medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 20730-20740).
[2]: Hatamizadeh, A., Nath, V., Tang, Y., Yang, D., Roth, H. and Xu, D., 2022. Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images. arXiv preprint arXiv:2201.01266.