This is a PyTorch implementation of [ADBlur]. It was strongly inspired by Hugging Face's code, and I referred to their code extensively.
Requirements: Python > 3.6, fire, tqdm, tensorboardx,
and tensorflow (for loading checkpoint files)
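A quick runtime check for the Python requirement can be placed at the top of a script; a minimal sketch (interpreting "Python > 3.6" as 3.7 or newer — adjust the threshold if 3.6 itself is meant):

```python
import sys

# The README asks for Python > 3.6; treat that as 3.7 or newer.
if sys.version_info < (3, 7):
    raise RuntimeError(
        f"Python > 3.6 required, found {sys.version.split()[0]}"
    )
print("Python version OK:", sys.version.split()[0])
```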
This repository contains 9 Python files:

tokenization.py
: Tokenizers adopted from the original Google BERT code
checkpoint.py
: Functions to load a model from a TensorFlow checkpoint file
models.py
: Model classes for a general transformer
optim.py
: A custom optimizer (BertAdam class) adopted from Hugging Face's code
train.py
: A helper class for training and evaluation
utils.py
: Several utility functions
pretrain.py
: Example code for pre-training a transformer
ADBlur.py & ADBlur_RoNERTa
: Example code for the ADBlur framework
classify.py
: Example code for fine-tuning with a pre-trained transformer

Before fine-tuning, download the preprocessed datasets (MNLI, QQP, and the OOD evaluations) and the parameters for
pre-trained BERT and RoBERTa.
python classify.py \
    --train_cfg='config/train_mrpc.json' \
    --model_cfg='config/bert_base.json' \
    --train_file='glue_data/MNLI/train_aug.tsv' \
    --dev_file='glue_data/MNLI/ood_aug.tsv' \
    --iid_dev_file='glue_data/MNLI/dev_matched_aug.tsv' \
    --ood_ent_file='glue_data/MNLI/hans_eval_aug.tsv' \
    --ood_nent_file='glue_data/MNLI/hans_nen_eval_aug.tsv' \
    --model_file='save_model/model_steps_98176.pt' \
    --pretrain_file='uncased_L-12_H-768_A-12/bert_model.ckpt' \
    --data_parallel=True \
    --vocab='uncased_L-12_H-768_A-12/vocab.txt' \
    --save_dir='save_model/bert-base/mnli' \
    --max_len=128 \
    --mode='train'
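The `*_aug.tsv` files passed above are GLUE-style tab-separated files. A minimal sketch of reading one with the standard library (the exact column layout is an assumption — check the headers of your downloaded files):

```python
import csv

def read_tsv(path):
    """Read a GLUE-style TSV into a list of rows (first row is the header).

    quoting=csv.QUOTE_NONE treats quote characters as literal text,
    matching the convention used by the original BERT data readers.
    """
    with open(path, encoding="utf-8") as f:
        return list(csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE))
```

For example, `rows = read_tsv('glue_data/MNLI/train_aug.tsv')` gives the header as `rows[0]` and the examples as `rows[1:]`.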