ENGLISH | 简体中文
The El Niño/Southern Oscillation (ENSO) phenomenon has great impacts on regional ecosystems, so accurate ENSO forecasts bring great regional benefits. However, forecasting ENSO at horizons beyond one year remains problematic. Recently, the convolutional neural network (CNN) has proven to be an effective tool for forecasting ENSO. This model implements the training and evaluation of a CNN that forecasts ENSO from meteorological data.
paper: Ham, Y.-G., J.-H. Kim, and J.-J. Luo, 2019:
Deep learning for multi-year ENSO forecasts. Nature, 573, 568–572.
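In the referenced paper, the network consumes gridded maps of sea-surface-temperature and heat-content anomalies over several consecutive months, stacked as input channels. A minimal NumPy sketch of assembling such an input tensor (the month window and grid size below are illustrative assumptions, not necessarily this model's exact configuration):

```python
import numpy as np

# Illustrative dimensions: 3 consecutive months x 2 variables -> 6 channels
# on a coarse lat x lon grid (assumed shapes for demonstration only).
months, n_lat, n_lon = 3, 24, 72

sst_anom = np.random.randn(months, n_lat, n_lon)  # SST anomaly maps
hc_anom = np.random.randn(months, n_lat, n_lon)   # heat-content anomaly maps

# Stack both variables along the channel axis: (months * 2, lat, lon)
x = np.concatenate([sst_anom, hc_anom], axis=0)
print(x.shape)  # (6, 24, 72)
```

The resulting channel-first tensor is the usual layout expected by 2-D convolution layers.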
The training dataset and pretrained checkpoint files will be downloaded automatically at the first launch. The `.npy` data files are prepared by `process.py` and placed under the `./data` directory; the directory structure is as follows:

```
├── data
│   ├── htmp_data
│   ├── train_data
│   │   ├── ACCESS-CM2
│   │   ├── CCSM4
│   │   ├── CESM1-CAM5
│   │   ├── ...
│   │   └── obs
│   └── var_data
```
If you need to download the dataset or checkpoint files manually,
please visit this link.
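Since the processed files are plain NumPy arrays, any split under `./data` can be inspected directly. A small sketch of the round trip (the file name and array shape here are hypothetical examples, not actual dataset files):

```python
import tempfile
from pathlib import Path

import numpy as np

def load_split(data_dir, name):
    """Load one processed .npy array from a data directory."""
    return np.load(Path(data_dir) / f"{name}.npy")

# Demonstrate the round trip with a dummy array in a temporary directory.
with tempfile.TemporaryDirectory() as d:
    np.save(Path(d) / "example.npy", np.zeros((4, 24, 72), dtype=np.float32))
    arr = load_split(d, "example")

print(arr.shape, arr.dtype)  # (4, 24, 72) float32
```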
After installing MindSpore via the official website and obtaining the required dataset above, you can start training and evaluation as follows:
Default:

```shell
python train.py
```
Full command:

```shell
python train.py \
    --save_ckpt true \
    --load_ckpt false \
    --save_ckpt_path ./checkpoints \
    --load_ckpt_path ./checkpoints/exp2_aftertrain/enso_float16.ckpt \
    --save_data true \
    --load_data_path ./data \
    --save_data_path ./data \
    --save_figure true \
    --figures_path ./figures \
    --log_path ./logs \
    --print_interval 10 \
    --lr 0.01 \
    --epochs 20 \
    --batch_size 400 \
    --skip_aftertrain false \
    --epochs_after 5 \
    --batch_size_after 30 \
    --lr_after 1e-6 \
    --download_data enso \
    --force_download false \
    --amp_level O3 \
    --device_id 0 \
    --mode 0
```
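Flags of this shape are typically handled with the standard `argparse` module; a minimal sketch covering a few of them (the actual parser in `train.py` may differ in details, and the `str2bool` helper is an assumption for the `true`/`false` string flags):

```python
import argparse

def parse_args(argv=None):
    """Parse a subset of the training flags shown above."""
    parser = argparse.ArgumentParser(description="ENSO CNN training")
    # Boolean flags are passed as the strings "true"/"false" on the command line.
    str2bool = lambda s: s.lower() == "true"
    parser.add_argument("--save_ckpt", type=str2bool, default=True)
    parser.add_argument("--lr", type=float, default=0.01)
    parser.add_argument("--epochs", type=int, default=20)
    parser.add_argument("--batch_size", type=int, default=400)
    parser.add_argument("--amp_level", choices=["O0", "O1", "O2", "O3"], default="O3")
    parser.add_argument("--mode", type=int, choices=[0, 1], default=0)
    return parser.parse_args(argv)

args = parse_args(["--lr", "0.001", "--save_ckpt", "false"])
print(args.lr, args.save_ckpt, args.epochs)  # 0.001 False 20
```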
```
├── enso
│   ├── checkpoints                  # checkpoint files
│   ├── data                         # data folder
│   │   ├── htmp_data                # save folder for validation results
│   │   ├── var_data                 # validation data
│   │   └── train_data               # training data
│   ├── figures                      # plotted figures
│   ├── logs                         # log files
│   ├── src                          # source code
│   │   ├── network.py               # neural network
│   │   ├── plot.py                  # plot functions
│   │   └── process.py               # data preparation
│   ├── config.yaml                  # hyper-parameter configuration
│   ├── README.md                    # English model description
│   ├── README_CN.md                 # Chinese model description
│   ├── train.py                     # Python training script
│   └── eval.py                      # Python evaluation script
```
Important parameters in train.py are as follows:

| parameter | description | default value |
|---|---|---|
| save_ckpt | whether to save checkpoints | true |
| load_ckpt | whether to load a checkpoint | false |
| save_ckpt_path | checkpoint saving path | ./checkpoints |
| load_ckpt_path | checkpoint loading path | ./checkpoints/exp2_aftertrain/enso_float16.ckpt |
| save_data | whether to save data output | true |
| load_data_path | path to load data from | ./data |
| save_data_path | path to save data to | ./data |
| save_figure | whether to plot and save figures | true |
| figures_path | figure saving path | ./figures |
| log_path | log saving path | ./logs |
| print_interval | interval for time and loss printing | 10 |
| lr | learning rate | 0.01 |
| epochs | number of epochs | 20 |
| batch_size | size of each data batch | 400 |
| skip_aftertrain | whether to skip the after-train process | false |
| epochs_after | number of epochs in after-train | 5 |
| batch_size_after | size of each data batch in after-train | 30 |
| lr_after | learning rate for after-train | 1e-6 |
| download_data | necessary dataset and/or checkpoints | enso |
| force_download | whether to force re-downloading the dataset | false |
| amp_level | MindSpore auto mixed precision level | O3 |
| device_id | device id to set | None |
| mode | MindSpore graph mode (0) or pynative mode (1) | 0 |
Running on GPU/Ascend:

```shell
python train.py
```
The loss values during training will be printed in the console, and can also be inspected after training in the log file.
```
# python train.py
...
epoch: 1 step: 1, loss is 0.9130635857582092
epoch: 1 step: 2, loss is 1.0354164838790894
epoch: 1 step: 3, loss is 0.8914494514465332
epoch: 1 step: 4, loss is 0.9377754330635071
epoch: 1 step: 5, loss is 1.0472232103347778
epoch: 1 step: 6, loss is 1.0421113967895508
epoch: 1 step: 7, loss is 1.100639820098877
epoch: 1 step: 8, loss is 0.9849204421043396
...
```
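Because each line follows the fixed pattern shown above, the loss curve can be recovered from a saved log with a simple regular expression. A convenience sketch (not part of the repository):

```python
import re

# Matches lines of the form: "epoch: 1 step: 2, loss is 1.0354164838790894"
LOG_LINE = re.compile(r"epoch: (\d+) step: (\d+), loss is ([\d.]+)")

def extract_losses(lines):
    """Return (epoch, step, loss) tuples parsed from log lines."""
    out = []
    for line in lines:
        m = LOG_LINE.search(line)
        if m:
            out.append((int(m.group(1)), int(m.group(2)), float(m.group(3))))
    return out

sample = [
    "epoch: 1 step: 1, loss is 0.9130635857582092",
    "epoch: 1 step: 2, loss is 1.0354164838790894",
]
print(extract_losses(sample))
```

The extracted tuples can then be plotted or aggregated per epoch.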
After training, you can still review the training process through the log file saved in `log_path` (the `./logs` directory by default). The model checkpoint will be saved in `save_ckpt_path` (the `./checkpoints` directory by default).
Before running the command below, please check that the checkpoint loading path `load_ckpt_path` is correctly specified in `config.yaml` for evaluation.

```shell
python eval.py
```
You can view the evaluation process and results through the log files in `log_path` (`./logs` by default). The result pictures are saved in `figures_path` (`./figures` by default).
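A common way to evaluate ENSO forecasts, and the headline metric in Ham et al. (2019), is the anomaly correlation between predicted and observed Niño3.4 index series at each lead time. A minimal NumPy sketch (the arrays below are synthetic stand-ins, not this model's actual output):

```python
import numpy as np

def correlation_skill(pred, obs):
    """Pearson correlation between predicted and observed index series."""
    pred = pred - pred.mean()
    obs = obs - obs.mean()
    return float((pred * obs).sum() / np.sqrt((pred**2).sum() * (obs**2).sum()))

rng = np.random.default_rng(0)
obs = rng.standard_normal(120)                      # 10 years of monthly index values
pred = 0.8 * obs + 0.2 * rng.standard_normal(120)   # a skilful synthetic forecast
print(round(correlation_skill(pred, obs), 3))
```

Computing this per lead month reproduces the correlation-skill curves typically reported for ENSO models.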