Please run the commands in the root path of BasicSR.
In general, both training and testing include the following steps: prepare the datasets, modify the configuration files under the `options` folder, and then run the commands below. For more specific configuration information, please refer to Config.

**Training**

Single GPU training:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0 \
python basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml
```
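Why the commands prepend `PYTHONPATH="./:${PYTHONPATH}"`: it puts the repository root on Python's import path, so the local `basicsr` package is importable without installing it. A minimal self-contained sketch of the mechanism, using a throwaway package (`mypkg` is illustrative, not part of BasicSR):

```shell
# Demo: prepending "./" to PYTHONPATH makes a package in the current
# directory importable, exactly as the training commands rely on.
demo() {
    tmp=$(mktemp -d)
    mkdir -p "$tmp/mypkg"
    printf 'VALUE = 42\n' > "$tmp/mypkg/__init__.py"
    # Without the PYTHONPATH prefix this import would fail; with it,
    # the package in the current directory is found first.
    (cd "$tmp" && PYTHONPATH="./:${PYTHONPATH}" python3 -c "import mypkg; print(mypkg.VALUE)")
    rm -rf "$tmp"
}
demo
```

The same reasoning applies to every command on this page, which is why the prefix is repeated throughout.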
Distributed training with 8 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
```

or

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
./scripts/dist_train.sh 8 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
```
Distributed training with 4 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher pytorch
```

or

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
./scripts/dist_train.sh 4 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
```
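The `./scripts/dist_train.sh` helper is essentially a thin wrapper around the same `torch.distributed.launch` invocation shown above. A sketch of what such a wrapper does (this is an illustration, not the exact script shipped in the repository; `build_launch_cmd` is a hypothetical name):

```shell
# Illustrative sketch of a dist_train.sh-style wrapper: it only assembles
# the torch.distributed.launch command from a GPU count and an option file.
# (Hypothetical helper, not the script shipped with BasicSR.)
build_launch_cmd() {
    gpus=$1
    opt=$2
    port=${PORT:-4321}   # master port, matching the explicit commands above
    echo "python -m torch.distributed.launch --nproc_per_node=${gpus}" \
         "--master_port=${port} basicsr/train.py -opt ${opt} --launcher pytorch"
}
build_launch_cmd 4 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml
```

This is why the two forms above are interchangeable: the script form just saves typing the launch arguments by hand.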
Slurm training with 1 GPU:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=MSRResNetx4 --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \
python -u basicsr/train.py -opt options/train/SRResNet_SRGAN/train_MSRResNet_x4.yml --launcher="slurm"
```
Slurm training with 4 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \
python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
```
Slurm training with 8 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=EDVRMwoTSA --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \
python -u basicsr/train.py -opt options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml --launcher="slurm"
```
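The Slurm invocations above differ only in the job name and the GPU/task counts; the shared pattern is that `--gres=gpu:N`, `--ntasks=N`, and `--ntasks-per-node=N` are kept equal so that each task gets one GPU on a single node. That pattern can be captured in a small helper (illustrative, not part of BasicSR; `srun_train_cmd` and the `debug` partition are hypothetical, and `[partition]` must still be replaced with a real partition name):

```shell
# Illustrative helper capturing the srun pattern used above: GPU count,
# task count, and tasks-per-node stay equal. --cpus-per-task varies
# between the examples above and is omitted here for brevity.
srun_train_cmd() {
    n=$1          # number of GPUs (= number of tasks on one node)
    opt=$2        # option file
    partition=$3  # Slurm partition name
    echo "srun -p ${partition} --mpi=pmi2 --gres=gpu:${n} --ntasks=${n}" \
         "--ntasks-per-node=${n} --kill-on-bad-exit=1" \
         "python -u basicsr/train.py -opt ${opt} --launcher=slurm"
}
srun_train_cmd 4 options/train/EDVR/train_EDVR_M_x4_SR_REDS_woTSA.yml debug
```

The same scaling rule applies to the Slurm testing commands below.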
**Testing**

Single GPU testing:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0 \
python basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml
```
Distributed testing with 8 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch --nproc_per_node=8 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
```

or

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
./scripts/dist_test.sh 8 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
```
Distributed testing with 4 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
CUDA_VISIBLE_DEVICES=0,1,2,3 \
python -m torch.distributed.launch --nproc_per_node=4 --master_port=4321 basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher pytorch
```

or

```bash
CUDA_VISIBLE_DEVICES=0,1,2,3 \
./scripts/dist_test.sh 4 options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml
```
Slurm testing with 1 GPU:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:1 --ntasks=1 --ntasks-per-node=1 --cpus-per-task=6 --kill-on-bad-exit=1 \
python -u basicsr/test.py -opt options/test/SRResNet_SRGAN/test_MSRResNet_x4.yml --launcher="slurm"
```
Slurm testing with 4 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:4 --ntasks=4 --ntasks-per-node=4 --cpus-per-task=4 --kill-on-bad-exit=1 \
python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
```
Slurm testing with 8 GPUs:

```bash
PYTHONPATH="./:${PYTHONPATH}" \
GLOG_vmodule=MemcachedClient=-1 \
srun -p [partition] --mpi=pmi2 --job-name=test --gres=gpu:8 --ntasks=8 --ntasks-per-node=8 --cpus-per-task=6 --kill-on-bad-exit=1 \
python -u basicsr/test.py -opt options/test/EDVR/test_EDVR_M_x4_SR_REDS.yml --launcher="slurm"
```