Question answering: after finetuning on a QA dataset, the model takes a context and a question as input and answers the question based on the context.
Related papers
model | type | datasets | EM | F1 | stage | example |
---|---|---|---|---|---|---|
qa_bert_base_uncased_squad | qa | SQuAD v1.1 | 80.74 | 88.33 | finetune / eval / predict | link / link / link |
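EM (exact match) and F1 in the table are the standard SQuAD metrics. A minimal sketch of how they are conventionally computed per example (this follows the common SQuAD convention and is not necessarily MindFormers' exact implementation):

```python
import re
import string
from collections import Counter

def normalize(text):
    """SQuAD-style normalization: lowercase, drop punctuation and articles, squeeze spaces."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, ground_truth):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize(prediction) == normalize(ground_truth))

def f1_score(prediction, ground_truth):
    """Token-level F1 between the normalized prediction and ground truth."""
    pred_tokens = normalize(prediction).split()
    gold_tokens = normalize(ground_truth).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("Berlin", "berlin"))           # 1.0
print(round(f1_score("in Berlin", "Berlin"), 2)) # 0.67
```

Dataset-level scores such as the 80.74 / 88.33 above are these per-example scores averaged over the dev set, taking the maximum over the reference answers for each question.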
Dataset directory structure
└─squad
├─train-v1.1.json
└─dev-v1.1.json
Developers need to clone the project in advance.
See Launching with Scripts for usage.
Create a squad folder in the script execution directory, then place the dataset files in it.
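Before launching, a quick sanity check of the layout can save a failed run. A hypothetical helper (not part of the repo), matching the directory structure above:

```python
import os

# SQuAD v1.1 files expected under ./squad, per the directory structure above
EXPECTED_FILES = ["train-v1.1.json", "dev-v1.1.json"]

def check_squad_dir(root="./squad"):
    """Return the list of expected dataset files missing from `root`."""
    return [f for f in EXPECTED_FILES if not os.path.isfile(os.path.join(root, f))]

missing = check_squad_dir()
if missing:
    print("Missing dataset files:", missing)
```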
Running with scripts
# finetune
python run_mindformer.py --config ./configs/qa/run_qa_bert_base_uncased.yaml --run_mode finetune --load_checkpoint qa_bert_base_uncased
# evaluate
python run_mindformer.py --config ./configs/qa/run_qa_bert_base_uncased.yaml --run_mode eval --load_checkpoint qa_bert_base_uncased_squad
# predict
python run_mindformer.py --config ./configs/qa/run_qa_bert_base_uncased.yaml --run_mode predict --load_checkpoint qa_bert_base_uncased_squad --predict_data [TEXT]
import mindspore
# mode=0 is graph mode; device_id selects the target device
mindspore.set_context(mode=0, device_id=0)
from mindformers.trainer import Trainer
# initialize the trainer
trainer = Trainer(task='question_answering',
model='qa_bert_base_uncased',
train_dataset='./squad/',
eval_dataset='./squad/')
# Method 1: finetune from the existing pretrained weights, then use the finetuned weights for eval and inference
trainer.train(resume_or_finetune_from_checkpoint="qa_bert_base_uncased",
do_finetune=True)
trainer.evaluate(eval_checkpoint=True)
# Test data: a context and a question, separated by " - "
input_data = ["My name is Wolfgang and I live in Berlin - Where do I live?"]
trainer.predict(predict_checkpoint=True, input_data=input_data)
# Method 2: download the trained weights from OBS and run eval and inference
trainer.evaluate()
# INFO - QA Metric = {'QA Metric': {'exact_match': 80.74739829706716, 'f1': 88.33552874684968}}
# Test data: a context and a question, separated by " - "
input_data = ["My name is Wolfgang and I live in Berlin - Where do I live?"]
trainer.predict(input_data=input_data)
# INFO - output result is [{'text': 'Berlin', 'score': 0.9941, 'start': 34, 'end': 40}]
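The predict input packs the context and the question into a single string separated by " - ". A hypothetical helper (not part of MindFormers) illustrating how such a string splits back into its two parts:

```python
def split_qa_input(text):
    """Split a combined 'context - question' string on the first ' - ' separator."""
    context, question = text.split(" - ", 1)
    return context.strip(), question.strip()

context, question = split_qa_input(
    "My name is Wolfgang and I live in Berlin - Where do I live?")
print(context)   # My name is Wolfgang and I live in Berlin
print(question)  # Where do I live?
```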
from mindformers.pipeline import QuestionAnsweringPipeline
from mindformers import AutoTokenizer, BertForQuestionAnswering, AutoConfig
# Test data: a context and a question, separated by " - "
input_data = ["My name is Wolfgang and I live in Berlin - Where do I live?"]
tokenizer = AutoTokenizer.from_pretrained('qa_bert_base_uncased_squad')
qa_squad_config = AutoConfig.from_pretrained('qa_bert_base_uncased_squad')
# Known issue: the batch size must be set to 1 when creating the BERT model.
qa_squad_config.batch_size = 1
model = BertForQuestionAnswering(qa_squad_config)
qa_pipeline = QuestionAnsweringPipeline(task='question_answering',
model=model,
tokenizer=tokenizer)
results = qa_pipeline(input_data)
print(results)
# output
# [{'text': 'Berlin', 'score': 0.9941, 'start': 34, 'end': 40}]
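The start and end fields in the result appear to be character offsets; assuming they index into the context string (consistent with the sample output above), the answer text can be recovered by slicing:

```python
# Values taken from the sample output above
context = "My name is Wolfgang and I live in Berlin"
result = {"text": "Berlin", "score": 0.9941, "start": 34, "end": 40}

# Slice the context with the reported character offsets
answer = context[result["start"]:result["end"]]
print(answer)  # Berlin
```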