a1164714
commented 7 months ago
```
python tools/generate_samples_Pangu.py \
    --model-parallel-size 2 \
    --num-layers 31 \
    --hidden-size 2560 \
    --load /root/Pangu-alpha_13B_fp16_mgt/ \
    --num-attention-heads 32 \
    --max-position-embeddings 1024 \
    --tokenizer-type GPT2BPETokenizer \
    --fp16 \
    --batch-size 1 \
    --seq-length 1024 \
    --out-seq-length 50 \
    --temperature 1.0 \
    --vocab-file megatron/tokenizer/bpe_4w_pcl/vocab \
    --num-samples 0 \
    --top_k 2 \
    --finetune
/usr/local/python/lib/python3.8/site-packages/apex/pyprof/__init__.py:5: FutureWarning: pyprof will be removed by the end of June, 2022
  warnings.warn("pyprof will be removed by the end of June, 2022", FutureWarning)
WARNING: APEX is not installed, multi_tensor_applier will not be available.
WARNING: APEX is not installed, using torch.nn.LayerNorm instead of apex.normalization.FusedLayerNorm!
Traceback (most recent call last):
  File "tools/generate_samples_Pangu.py", line 27, in <module>
    from megatron.text_generation_utils import pad_batch, get_batch
  File "/root/inference/PanGu-Alpha-GPU/panguAlpha_pytorch/megatron/text_generation_utils.py", line 29, in <module>
    from megatron.utils import get_ltor_masks_and_position_ids
  File "/root/inference/PanGu-Alpha-GPU/panguAlpha_pytorch/megatron/utils.py", line 28, in <module>
    from megatron.fp16 import FP16_Optimizer
  File "/root/inference/PanGu-Alpha-GPU/panguAlpha_pytorch/megatron/fp16/__init__.py", line 15, in <module>
    from .fp16util import (
  File "/root/inference/PanGu-Alpha-GPU/panguAlpha_pytorch/megatron/fp16/fp16util.py", line 22, in <module>
    import amp_C
ImportError: /usr/local/python/lib/python3.8/site-packages/amp_C.cpython-38-x86_64-linux-gnu.so: undefined symbol: _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE
```
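
The `undefined symbol` in the final `ImportError` demangles to `c10::Error::Error(c10::SourceLocation, std::string)`, a constructor from PyTorch's `c10` library. A symbol like that missing at load time almost always means the compiled apex extension `amp_C` was built against a different PyTorch version (or C++ ABI) than the one currently installed; this is also why Megatron prints the "APEX is not installed" warnings even though apex is present, since apex itself imports but its compiled extensions do not. Below is a minimal sketch of how one might check and fix this, assuming apex was installed from a source build; the clone URL and build flags follow the NVIDIA/apex README of that period, and paths should be adjusted to your environment:

```
# Confirm which PyTorch the environment actually resolves, then demangle the missing symbol
python -c "import torch; print(torch.__version__, torch.version.cuda)"
c++filt _ZN3c105ErrorC2ENS_14SourceLocationENSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE

# Rebuild apex from source against the currently installed torch
pip uninstall -y apex
git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --disable-pip-version-check --no-cache-dir \
    --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```

Alternatively, keep the existing apex build and install the torch version it was originally compiled against; either way, the torch/apex pair has to match.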