ResNet (residual neural network) was proposed by Kaiming He and his colleagues at Microsoft Research. Using ResNet units, they successfully trained a 152-layer neural network and won the ILSVRC 2015 classification challenge with a top-5 error rate of 3.57%, while using fewer parameters than VGGNet. Traditional convolutional or fully connected networks suffer some information loss as depth grows, and from vanishing or exploding gradients, which makes very deep networks hard to train. ResNet alleviates these problems: by passing the input directly to the output through shortcut connections, the integrity of the information is preserved, and the network only needs to learn the residual between input and output, which simplifies the learning objective. The residual structure greatly accelerates training and also improves model accuracy. ResNet is widely reused; its residual blocks have been adopted in other architectures such as Inception-ResNet.
These are examples of training ResNet50/ResNet101/SE-ResNet50 with the CIFAR-10 or ImageNet2012 dataset in MindSpore. ResNet50 and ResNet101 follow paper 1 below; SE-ResNet50 is a variant of ResNet50 described in papers 2 and 3 below. Training SE-ResNet50 for just 24 epochs on 8 Ascend 910 devices reaches a top-1 accuracy of 75.9%. (Training ResNet101 or SE-ResNet50 with the CIFAR-10 dataset is not supported yet.)
1. Paper: Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. "Deep Residual Learning for Image Recognition"
2. Paper: Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu. "Squeeze-and-Excitation Networks"
3. Paper: Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. "Bag of Tricks for Image Classification with Convolutional Neural Networks"
The overall network architecture of ResNet is shown below:
Link
Dataset used: CIFAR-10
├─cifar-10-batches-bin
│
└─cifar-10-verify-bin
Dataset used: ImageNet2012
└─dataset
├─ilsvrc # train dataset
└─validation_preprocess # evaluate dataset
Mixed precision training accelerates deep neural network training by using both single-precision and half-precision data types, while maintaining the accuracy achievable with pure single-precision training. It speeds up computation, reduces memory usage, and enables a larger model or batch size to be trained on specific hardware.
For FP16 operators, if the input data type is FP32, the MindSpore backend will automatically handle it with reduced precision. Users can check the reduced-precision operators by enabling the INFO log and then searching for "reduce precision".
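The sketch below shows one way mixed precision is typically wired up in a MindSpore training script. It is a minimal illustration with a stand-in network, not the exact code from train.py (which builds the real network from src/resnet.py):

# Hedged sketch: enabling mixed precision with a fixed loss scale in MindSpore.
from mindspore import nn
from mindspore.train.model import Model
from mindspore.train.loss_scale_manager import FixedLossScaleManager

net = nn.Dense(32, 10)  # stand-in network; the repository uses src/resnet.py
loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01,
                  momentum=0.9, weight_decay=1e-4)

# "loss_scale": 1024 in config.py pairs with a fixed loss-scale manager,
# and amp_level="O2" casts the network to FP16 while keeping BatchNorm in FP32.
loss_scale = FixedLossScaleManager(1024, drop_overflow_update=False)
model = Model(net, loss_fn=loss, optimizer=opt,
              loss_scale_manager=loss_scale, amp_level="O2", metrics={"acc"})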
After installing MindSpore via the official website, you can start training and evaluation as follows:
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
.
└──resnet
  ├── README.md
  ├── scripts
  │  ├── run_distribute_train.sh            # launch ascend distributed training (8 pcs)
  │  ├── run_parameter_server_train.sh      # launch ascend parameter server training (8 pcs)
  │  ├── run_eval.sh                        # launch ascend evaluation
  │  ├── run_standalone_train.sh            # launch ascend standalone training (1 pc)
  │  ├── run_distribute_train_gpu.sh        # launch gpu distributed training (8 pcs)
  │  ├── run_parameter_server_train_gpu.sh  # launch gpu parameter server training (8 pcs)
  │  ├── run_eval_gpu.sh                    # launch gpu evaluation
  │  ├── run_standalone_train_gpu.sh        # launch gpu standalone training (1 pc)
  │  └── run_gpu_resnet_benchmark.sh        # GPU benchmark for resnet50 with imagenet2012 (1 pc)
  ├── src
  │  ├── config.py                          # parameter configuration
  │  ├── dataset.py                         # data preprocessing
  │  ├── CrossEntropySmooth.py              # loss definition for the ImageNet2012 dataset
  │  ├── lr_generator.py                    # generate the learning rate for each step
  │  ├── resnet.py                          # resnet backbone: resnet50, resnet101 and se-resnet50
  │  └── resnet_gpu_benchmark.py            # resnet50 for the GPU benchmark
  ├── export.py                             # export model for inference
  ├── mindspore_hub_conf.py                 # mindspore hub interface
  ├── eval.py                               # eval net
  ├── train.py                              # train net
  └── gpu_resnet_benchmark.py               # GPU benchmark for resnet50
Parameters for both training and evaluation can be set in config.py.
"class_num": 10, # dataset class num
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for taining, which is always 1 for inference
"pretrain_epoch_size": 0, # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether save checkpoint or not
"save_checkpoint_epochs": 5, # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last step
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./", # path to save checkpoint
"warmup_epochs": 5, # number of warmup epoch
"lr_decay_mode": "poly" # decay mode can be selected in steps, ploy and default
"lr_init": 0.01, # initial learning rate
"lr_end": 0.00001, # final learning rate
"lr_max": 0.1, # maximum learning rate
"class_num": 1001, # dataset class number
"batch_size": 256, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 90, # only valid for taining, which is always 1 for inference
"pretrain_epoch_size": 0, # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether save checkpoint or not
"save_checkpoint_epochs": 5, # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 0, # number of warmup epoch
"lr_decay_mode": "Linear", # decay mode for generating learning rate
"use_label_smooth": True, # label smooth
"label_smooth_factor": 0.1, # label smooth factor
"lr_init": 0, # initial learning rate
"lr_max": 0.8, # maximum learning rate
"lr_end": 0.0, # minimum learning rate
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 120, # epoch size for training
"pretrain_epoch_size": 0, # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether save checkpoint or not
"save_checkpoint_epochs": 5, # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 0, # number of warmup epoch
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"use_label_smooth": True, # label_smooth
"label_smooth_factor": 0.1, # label_smooth_factor
"lr": 0.1 # base learning rate
"class_num": 1001, # dataset class number
"batch_size": 32, # batch size of input tensor
"loss_scale": 1024, # loss scale
"momentum": 0.9, # momentum optimizer
"weight_decay": 1e-4, # weight decay
"epoch_size": 28 , # epoch size for creating learning rate
"train_epoch_size": 24 # actual train epoch size
"pretrain_epoch_size": 0, # epoch size that model has been trained before loading pretrained checkpoint, actual training epoch size is equal to epoch_size minus pretrain_epoch_size
"save_checkpoint": True, # whether save checkpoint or not
"save_checkpoint_epochs": 4, # the epoch interval between two checkpoints. By default, the last checkpoint will be saved after the last epoch
"keep_checkpoint_max": 10, # only keep the last keep_checkpoint_max checkpoint
"save_checkpoint_path": "./", # path to save checkpoint relative to the executed path
"warmup_epochs": 3, # number of warmup epoch
"lr_decay_mode": "cosine" # decay mode for generating learning rate
"use_label_smooth": True, # label_smooth
"label_smooth_factor": 0.1, # label_smooth_factor
"lr_init": 0.0, # initial learning rate
"lr_max": 0.3, # maximum learning rate
"lr_end": 0.0001, # end learning rate
# distributed training
Usage: sh run_distribute_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training
Usage: sh run_standalone_train.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# run evaluation example
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
For distributed training, an HCCL configuration file in JSON format needs to be created in advance.
Please follow the instructions in the link hccn_tools.
Training results will be stored in the example path, in a folder whose name begins with "train" or "train_parallel". There you can find the checkpoint files together with results like the following in the log.
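These checkpoint files are written by MindSpore's checkpoint callback. As a hedged sketch of how the checkpoint settings in config.py might map onto that callback (the authoritative wiring lives in train.py):

# Hedged sketch: checkpoint and loss-monitor callbacks (actual wiring is in train.py).
from mindspore.train.callback import ModelCheckpoint, CheckpointConfig, LossMonitor

steps_per_epoch = 195  # CIFAR-10, batch_size 32, 8 devices
ckpt_config = CheckpointConfig(save_checkpoint_steps=5 * steps_per_epoch,  # "save_checkpoint_epochs": 5
                               keep_checkpoint_max=10)                     # "keep_checkpoint_max": 10
callbacks = [ModelCheckpoint(prefix="resnet", directory="./", config=ckpt_config),
             LossMonitor()]  # LossMonitor prints the "epoch: ... loss is ..." lines shown below
# The list is then passed to model.train(epoch_size, dataset, callbacks=callbacks).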
# distributed training example
sh run_distribute_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# standalone training example
sh run_standalone_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# infer example
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
# gpu benchmark example
sh run_gpu_resnet_benchmark.sh [IMAGENET_DATASET_PATH] [BATCH_SIZE](optional) [DEVICE_NUM](optional)
# parameter server training example (Ascend)
sh run_parameter_server_train.sh [resnet50|resnet101] [cifar10|imagenet2012] [RANK_TABLE_FILE] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# parameter server training example (GPU)
sh run_parameter_server_train_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [PRETRAINED_CKPT_PATH](optional)
# distributed training result (8 pcs): ResNet50 with CIFAR-10
epoch: 1 step: 195, loss is 1.9601055
epoch: 2 step: 195, loss is 1.8555021
epoch: 3 step: 195, loss is 1.6707983
epoch: 4 step: 195, loss is 1.8162166
epoch: 5 step: 195, loss is 1.393667
...
# distributed training result (8 pcs): ResNet50 with ImageNet2012
epoch: 1 step: 5004, loss is 4.8995576
epoch: 2 step: 5004, loss is 3.9235563
epoch: 3 step: 5004, loss is 3.833077
epoch: 4 step: 5004, loss is 3.2795618
epoch: 5 step: 5004, loss is 3.1978393
...
# distributed training result (8 pcs): ResNet101 with ImageNet2012
epoch: 1 step: 5004, loss is 4.805483
epoch: 2 step: 5004, loss is 3.2121816
epoch: 3 step: 5004, loss is 3.429647
epoch: 4 step: 5004, loss is 3.3667371
epoch: 5 step: 5004, loss is 3.1718972
...
# distributed training result (8 pcs): SE-ResNet50 with ImageNet2012
epoch: 1 step: 5004, loss is 5.1779146
epoch: 2 step: 5004, loss is 4.139395
epoch: 3 step: 5004, loss is 3.9240637
epoch: 4 step: 5004, loss is 3.5011306
epoch: 5 step: 5004, loss is 3.3501816
...
# ========START RESNET50 GPU BENCHMARK========
step time: 12416.098 ms, fps: 412 img/sec. epoch: 1 step: 20, loss is 6.940182
step time: 3472.037 ms, fps: 1474 img/sec. epoch: 2 step: 20, loss is 7.078993
step time: 3469.523 ms, fps: 1475 img/sec. epoch: 3 step: 20, loss is 7.559594
step time: 3460.311 ms, fps: 1479 img/sec. epoch: 4 step: 20, loss is 6.920937
step time: 3460.543 ms, fps: 1479 img/sec. epoch: 5 step: 20, loss is 6.814013
...
# evaluation
Usage: sh run_eval.sh [resnet50|resnet101|se-resnet50] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
# evaluation example
sh run_eval.sh resnet50 cifar10 ~/cifar-10-verify-bin ~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt
Checkpoints are produced during the training process.
sh run_eval_gpu.sh [resnet50|resnet101] [cifar10|imagenet2012] [DATASET_PATH] [CHECKPOINT_PATH]
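For reference, restoring a trained checkpoint into the network is what eval.py does internally. A minimal sketch, assuming the resnet50 constructor from src/resnet.py and a CIFAR-10 checkpoint:

# Hedged sketch: loading a checkpoint for evaluation (see eval.py for the real flow).
from mindspore import context
from mindspore.train.serialization import load_checkpoint, load_param_into_net
from src.resnet import resnet50  # resnet backbone provided by this repository

context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
net = resnet50(class_num=10)                        # 10 classes for CIFAR-10
param_dict = load_checkpoint("resnet-90_195.ckpt")  # checkpoint produced by training
load_param_into_net(net, param_dict)                # copy the saved weights into the network
net.set_train(False)                                # switch to inference behavior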
Evaluation results will be stored in the example path, in a folder named "eval". There you can find results like the following in the log.
# ResNet50 with CIFAR-10
result: {'acc': 0.91446314102564111} ckpt=~/resnet50_cifar10/train_parallel0/resnet-90_195.ckpt

# ResNet50 with ImageNet2012
result: {'acc': 0.7671054737516005} ckpt=train_parallel0/resnet-90_5004.ckpt

# ResNet101 with ImageNet2012
result: {'top_5_accuracy': 0.9429417413572343, 'top_1_accuracy': 0.7853513124199744} ckpt=train_parallel0/resnet-120_5004.ckpt

# SE-ResNet50 with ImageNet2012
result: {'top_5_accuracy': 0.9342589628681178, 'top_1_accuracy': 0.768065781049936} ckpt=train_parallel0/resnet-24_5004.ckpt
Training performance: ResNet50 with CIFAR-10

Parameters | Ascend 910 | GPU |
---|---|---|
Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
Resource | Ascend 910, CPU 2.60GHz, 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz, 24 cores, Memory 128G |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | CIFAR-10 | CIFAR-10 |
Training Parameters | epoch=90, steps per epoch=195, batch_size = 32 | epoch=90, steps per epoch=195, batch_size = 32 |
Optimizer | Momentum | Momentum |
Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
outputs | probability | probability |
Loss | 0.000356 | 0.000716 |
Speed | 18.4ms/step(8pcs) | 69ms/step(8pcs) |
Total time | 6 mins | 20.2 mins |
Parameters (M) | 25.5 | 25.5 |
Checkpoint for Fine tuning | 179.7M (.ckpt file) | 179.7M (.ckpt file) |
Scripts | Link | Link |
Training performance: ResNet50 with ImageNet2012

Parameters | Ascend 910 | GPU |
---|---|---|
Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
Resource | Ascend 910, CPU 2.60GHz, 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz, 24 cores, Memory 128G |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | ImageNet2012 | ImageNet2012 |
Training Parameters | epoch=90, steps per epoch=626, batch_size = 256 | epoch=90, steps per epoch=626, batch_size = 256 |
Optimizer | Momentum | Momentum |
Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
outputs | probability | probability |
Loss | 1.8464266 | 1.9023 |
Speed | 118ms/step(8pcs) | 270ms/step(8pcs) |
Total time | 114 mins | 260 mins |
Parameters (M) | 25.5 | 25.5 |
Checkpoint for Fine tuning | 197M (.ckpt file) | 197M (.ckpt file) |
Scripts | Link | Link |
Training performance: ResNet101 with ImageNet2012

Parameters | Ascend 910 | GPU |
---|---|---|
Model Version | ResNet101 | ResNet101 |
Resource | Ascend 910, CPU 2.60GHz, 192 cores, Memory 755G | GPU (Tesla V100 SXM2), CPU 2.1GHz, 24 cores, Memory 128G |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | ImageNet2012 | ImageNet2012 |
Training Parameters | epoch=120, steps per epoch=5004, batch_size = 32 | epoch=120, steps per epoch=5004, batch_size = 32 |
Optimizer | Momentum | Momentum |
Loss Function | Softmax Cross Entropy | Softmax Cross Entropy |
outputs | probability | probability |
Loss | 1.6453942 | 1.7023412 |
Speed | 30.3ms/step(8pcs) | 108.6ms/step(8pcs) |
Total time | 301 mins | 1100 mins |
Parameters (M) | 44.6 | 44.6 |
Checkpoint for Fine tuning | 343M (.ckpt file) | 343M (.ckpt file) |
Scripts | Link | Link |
Training performance: SE-ResNet50 with ImageNet2012

Parameters | Ascend 910 |
---|---|
Model Version | SE-ResNet50 |
Resource | Ascend 910, CPU 2.60GHz, 192 cores, Memory 755G |
Uploaded Date | 08/16/2020 (month/day/year) |
MindSpore Version | 0.7.0-alpha |
Dataset | ImageNet2012 |
Training Parameters | epoch=24, steps per epoch=5004, batch_size = 32 |
Optimizer | Momentum |
Loss Function | Softmax Cross Entropy |
outputs | probability |
Loss | 1.754404 |
Speed | 24.6ms/step(8pcs) |
Total time | 49.3 mins |
Parameters (M) | 25.5 |
Checkpoint for Fine tuning | 215.9M (.ckpt file) |
Scripts | Link |
Inference performance: ResNet50 with CIFAR-10

Parameters | Ascend | GPU |
---|---|---|
Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
Resource | Ascend 910 | GPU |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | CIFAR-10 | CIFAR-10 |
batch_size | 32 | 32 |
outputs | probability | probability |
Accuracy | 91.44% | 91.37% |
Model for inference | 91M (.air file) | |
Inference performance: ResNet50 with ImageNet2012

Parameters | Ascend | GPU |
---|---|---|
Model Version | ResNet50-v1.5 | ResNet50-v1.5 |
Resource | Ascend 910 | GPU |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | ImageNet2012 | ImageNet2012 |
batch_size | 256 | 256 |
outputs | probability | probability |
Accuracy | 76.70% | 76.74% |
Model for inference | 98M (.air file) | |
Inference performance: ResNet101 with ImageNet2012

Parameters | Ascend | GPU |
---|---|---|
Model Version | ResNet101 | ResNet101 |
Resource | Ascend 910 | GPU |
Uploaded Date | 04/01/2020 (month/day/year) | 08/01/2020 (month/day/year) |
MindSpore Version | 0.1.0-alpha | 0.6.0-alpha |
Dataset | ImageNet2012 | ImageNet2012 |
batch_size | 32 | 32 |
outputs | probability | probability |
Accuracy | 78.53% | 78.64% |
Model for inference | 171M (.air file) | |
Inference performance: SE-ResNet50 with ImageNet2012

Parameters | Ascend |
---|---|
Model Version | SE-ResNet50 |
Resource | Ascend 910 |
Uploaded Date | 08/16/2020 (month/day/year) |
MindSpore Version | 0.7.0-alpha |
Dataset | ImageNet2012 |
batch_size | 32 |
outputs | probability |
Accuracy | 76.80% |
Model for inference | 109M (.air file) |
In dataset.py, we set the seed inside the create_dataset function. We also use a random seed in train.py.
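A hedged sketch of what fixing the seeds looks like, assuming a MindSpore version that provides set_seed in mindspore.common:

# Hedged sketch: fixing random seeds for reproducibility.
import numpy as np
from mindspore.common import set_seed

set_seed(1)        # fixes MindSpore's global seed (e.g. parameter initialization)
np.random.seed(1)  # fixes NumPy randomness used in data preprocessing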
Please check the official homepage.