```
[ERROR] PRE_ACT(1262474,ffffb23248d0,python):2022-04-29-22:22:37.520.852 [mindspore/ccsrc/backend/optimizer/ascend/format_type/check_consistency.cc:76] CheckDataTypeForConsistency] Found inconsistent dtype! input dtype 0: kNumberTypeFloat32, selected dtype: kNumberTypeFloat16
[CRITICAL] PRE_ACT(1262474,ffffb23248d0,python):2022-04-29-22:22:37.521.053 [mindspore/ccsrc/backend/optimizer/ascend/format_type/check_consistency.cc:98] Process] Found inconsistent format or data type! Op: Cast[kernel_graph_20:dx{[0]: ValueNode<Primitive> Cast, [1]: ValueNode<Tensor> Tensor(shape=[1, 2, 1, 8], dtype=Float32, value=
[[[[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00 1.00000000e+00 1.00000000e+00]]
  [[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00 1.00000000e+00 1.00000000e+00]]]])}], fullname: Default/Cast-op389234
Traceback (most recent call last):
  File "train-test.py", line 148, in main
    run(args)
  File "train-test.py", line 144, in run
    model.train(epoch=10, train_dataset=tr_loader, callbacks=cb, dataset_sink_mode=False)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 774, in train
    sink_size=sink_size)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 87, in wrapper
    func(self, *args, **kwargs)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 534, in _train
    self._train_process(epoch, train_dataset, list_callback, cb_params)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/train/model.py", line 667, in _train_process
    outputs = self._train_network(*next_element)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 479, in __call__
    out = self.compile_and_run(*args)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 805, in compile_and_run
    self.compile(*inputs)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/nn/cell.py", line 792, in compile
    _cell_graph_executor.compile(self, *inputs, phase=self.phase, auto_parallel_mode=self._auto_parallel_mode)
  File "/root/miniconda3/envs/ci/lib/python3.7/site-packages/mindspore/common/api.py", line 632, in compile
    result = self._graph_executor.compile(obj, args_list, phase, self._use_vm_mode())
RuntimeError: mindspore/ccsrc/backend/optimizer/ascend/format_type/check_consistency.cc:98 Process] Found inconsistent format or data type! Op: Cast[kernel_graph_20:dx{[0]: ValueNode<Primitive> Cast, [1]: ValueNode<Tensor> Tensor(shape=[1, 2, 1, 8], dtype=Float32, value=
[[[[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00 1.00000000e+00 1.00000000e+00]]
  [[ 1.00000000e+00 1.00000000e+00 1.00000000e+00 ... 1.00000000e+00 1.00000000e+00 1.00000000e+00]]]])}], fullname: Default/Cast-op389234

Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
```
```
((), {'N': 128, 'L': 8, 'H': 128, 'R': 6, 'C': 2, 'input_normalize': False, 'sr': 8000, 'segment': 4})
()
Drop 1 utts(0.00 h) which is short than 32000 samples
SWave<
  (encoder): Encoder<
    (conv): Conv1d<input_channels=1, output_channels=128, kernel_size=(1, 8), stride=(1, 4), pad_mode=valid, padding=(0, 0, 0, 0), dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros>
  >
  (decoder): Decoder<>
  (separator): Separator<
    (rnn_model): DPMulCat<
      (rows_grnn): CellList<
        (0): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (1): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (2): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (3): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (4): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (5): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
      >
      (cols_grnn): CellList<
        (0): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (1): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (2): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (3): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (4): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
        (5): MulCatBlock<
          (rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (gate_rnn): LSTM<
            (rnn): _DynamicLSTMAscend<>
            (dropout_op): Dropout<keep_prob=1.0>
          >
          (gate_rnn_proj): Dense<input_channels=256, output_channels=128, has_bias=True>
          (block_projection): Dense<input_channels=256, output_channels=128, has_bias=True>
        >
      >
      (rows_normalization): CellList<
        (0): ByPass<>
        (1): ByPass<>
        (2): ByPass<>
        (3): ByPass<>
        (4): ByPass<>
        (5): ByPass<>
      >
      (cols_normalization): CellList<
        (0): ByPass<>
        (1): ByPass<>
        (2): ByPass<>
        (3): ByPass<>
        (4): ByPass<>
        (5): ByPass<>
      >
      (output): SequentialCell<
        (0): ReLU<>
        (1): Conv2d<input_channels=128, output_channels=256, kernel_size=(1, 1), stride=(1, 1), pad_mode=same, padding=0, dilation=(1, 1), group=1, has_bias=False, weight_init=normal, bias_init=zeros, format=NCHW>
      >
    >
  >
>
```