Latest commit: taoht 3b7215225a (1 year ago)

Repository files:
- bpe_4w_pcl
- spm_13w
- spm_25w
- README.md
- dataset_download.py
- dataset_sample.py
- mindrecord_shuffle.py
- pre_process_bc.py
# Import the dataset: the Belt and Road multilingual dataset.
# The sample data is downloaded to "./SAMPLE_DATA" under the current directory;
# change dir_name below if you want a different directory name.
import wfio
_INPUT = '{"type":25,"uri":"sample_data/2114/"}'
wfio.load_files(_INPUT, dir_name='./SAMPLE_DATA')
Here we use the "Belt and Road multilingual dataset" as an example (see the Belt and Road multilingual dataset page for the mono-/bilingual data formats and sizes):
# Local path of the files to sample from and the path to save the samples
data_dir = '/cache/data/'
save_path = '/cache/data_sample/'
# Sampling strategy: capacity (in MB) to extract from the Chinese and English monolingual files
mono_sample_strategy = {'zh': 1024, 'en': 1024}
# Sampling strategy: capacity (in MB) to extract from the Chinese-English bilingual files
corpus_sample_strategy = {'zh-en': 1024}
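The internals of dataset_sample.py are not shown here, but the idea of extracting up to a fixed capacity per language can be sketched as follows. This is a minimal illustration: the function name `sample_by_capacity` and the line-by-line copy loop are assumptions, not the script's actual implementation.

```python
def sample_by_capacity(src_path, dst_path, budget_mb):
    """Copy whole lines from src_path to dst_path until ~budget_mb MB are written.

    Hypothetical helper illustrating capacity-based sampling; the real
    dataset_sample.py may select data differently.
    """
    budget = int(budget_mb * 1024 * 1024)  # convert MB to bytes
    written = 0
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        for line in src:
            if written + len(line) > budget:
                break  # stop before the capacity budget would be exceeded
            dst.write(line)
            written += len(line)
    return written
```

Copying whole lines (rather than a raw byte slice) keeps every sampled sentence intact, at the cost of slightly undershooting the budget.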
# Monolingual extraction
#   --data_path: path to the raw data
#   --output_path: path to save the extracted data
#   --sample_strategy: data extraction strategy
#   --mode: 'mono' selects monolingual extraction
!python ./dataset/dataset_sample.py \
    --data_path $data_dir \
    --output_path $save_path \
    --sample_strategy "{mono_sample_strategy}" \
    --mode 'mono'
# Bilingual extraction (same flags; 'corpus' selects bilingual extraction)
!python ./dataset/dataset_sample.py \
    --data_path $data_dir \
    --output_path $save_path \
    --sample_strategy "{corpus_sample_strategy}" \
    --mode 'corpus'
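In IPython/Jupyter, `$var` and `{expr}` inside a `!` shell command are expanded from Python variables before the shell runs, so `--sample_strategy "{mono_sample_strategy}"` reaches the script as the dict's textual repr. A receiving script can recover the dict safely with `ast.literal_eval`; this is a sketch of that pattern (the helper name `parse_strategy` is an assumption, not necessarily how dataset_sample.py parses the flag).

```python
import ast

def parse_strategy(arg):
    """Parse a strategy argument such as "{'zh': 1024, 'en': 1024}" into a dict.

    ast.literal_eval evaluates only Python literals, so arbitrary code in the
    command-line argument cannot execute (unlike plain eval).
    """
    strategy = ast.literal_eval(arg)
    if not isinstance(strategy, dict):
        raise ValueError("sample_strategy must be a dict literal")
    return strategy
```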
Here we take conversion to MindRecord data files as an example:
data_dir = '/cache/data_sample/*.txt'
save_path = '/cache/MindRecord/mPanGu_zh-en_mindrecord'
# Multi-process conversion: each process is bound to one MindRecord output file.
num_mindrecord = 50  # Do not set this too low: fewer processes makes conversion slower.
!python ./dataset/pre_process_bc.py \
--data_path "{data_dir}" \
--output_file $save_path \
--num_process $num_mindrecord \
--tokenizer 'spm_13w'
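The one-process-per-output-file pattern used above can be sketched with the standard `multiprocessing` module. Plain text files stand in for actual MindRecord writing here, and the names `write_shard`/`convert` are hypothetical, not pre_process_bc.py's API.

```python
import os
from multiprocessing import Pool

def write_shard(args):
    """Worker: write one shard of records to its own output file."""
    shard_id, records, out_prefix = args
    out_path = f"{out_prefix}_{shard_id:03d}.txt"
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(rec + "\n")
    return out_path

def convert(records, out_prefix, num_proc):
    """Split records round-robin into num_proc shards, one writer process per shard."""
    shards = [(i, records[i::num_proc], out_prefix) for i in range(num_proc)]
    with Pool(num_proc) as pool:
        return pool.map(write_shard, shards)
```

Binding each process to its own file avoids any locking between writers, which is why raising `num_process` speeds up conversion roughly linearly until I/O saturates.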
# Shuffle records within each file
!python ./dataset/mindrecord_shuffle.py \
--input-dir "/cache/MindRecord/" \
--output-dir "/cache/MindRecord_shuffle/"
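The in-file shuffle step can be sketched as follows, with text lines standing in for MindRecord records; `shuffle_file` is a hypothetical name, not mindrecord_shuffle.py's actual interface.

```python
import random

def shuffle_file(in_path, out_path, seed=0):
    """Read all records from one file, shuffle them, and write them back out.

    Loads the whole file into memory, so it suits per-shard files (as produced
    by the multi-process conversion) rather than one giant corpus file.
    """
    with open(in_path, encoding="utf-8") as f:
        lines = f.readlines()
    random.Random(seed).shuffle(lines)  # seeded for reproducibility
    with open(out_path, "w", encoding="utf-8") as f:
        f.writelines(lines)
    return len(lines)
```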
The data processing workflow for client 1 and client 2 is identical to the steps above.
Peng Cheng Laboratory, Department of Intelligence, Institute of High-Performance Cloud Computing, Distributed Computing Research Lab
[Apache License 2.0]
mPanGu-α-53 is the first Chinese-centric multilingual & machine translation model. It is pre-trained and then incrementally trained on mixed mono-/bilingual data covering 53 languages from 66 countries along the Belt and Road, and a single model supports translation between any two of the 53 languages. Compared with the No. 1 system on the WMT2021 multilingual task track, it improves the average BLEU score by 0.354 across 100 Chinese-foreign directions. It supports MindSpore-based distributed training on NPU/GPU (at least 8 devices), inference (full precision/FP16, 1 device), and transfer learning for multilingual tasks.