# Hidden Trigger Backdoor Attacks

Official implementation of the AAAI-20 paper [Hidden Trigger Backdoor Attacks](https://arxiv.org/abs/1910.00033).

Repository contents: `cfg/`, `data/`, `scripts/`, `alexnet_fc7out.py`, `create_imagenet_filelist.py`, `dataset.py`, `finetune_and_test.py`, `generate_poison.py`, plus `LICENSE.txt`, `README.md`, and `Teaser_Updated.png`.
With the success of deep learning algorithms in various domains, studying adversarial attacks to secure deep models in real-world applications has become an important research topic. Backdoor attacks are a form of adversarial attack in which the attacker provides poisoned data for the victim to train the model on, and then activates the attack at test time by presenting a specific small trigger pattern. Most state-of-the-art backdoor attacks either provide mislabeled poisoned data that can be identified by visual inspection, reveal the trigger in the poisoned data, or use noise to hide the trigger. We propose a novel form of backdoor attack in which the poisoned data look natural and carry correct labels and, more importantly, the attacker hides the trigger in the poisoned data and keeps it secret until test time. We perform an extensive study on various image classification settings and show that our attack can fool the model by pasting the trigger at random locations on unseen images, even though the model performs well on clean data. We also show that our proposed attack cannot be easily defeated by a state-of-the-art defense algorithm for backdoor attacks.
## Create ImageNet filelist

First, set the ImageNet data source in `cfg/dataset.cfg`, then run:

```
python create_imagenet_filelist.py cfg/dataset.cfg
```

This script partitions the ImageNet train and val data into the poison-generation, finetune, and val splits used by our backdoor attacks. By default, 200 images are used for poison generation, 800 images for finetuning, and the ImageNet validation images serve as val. Adjust these numbers for your specific needs.
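For intuition, here is a minimal Python sketch of this kind of split. The function name, arguments, and defaults are illustrative only; the actual partition logic lives in `create_imagenet_filelist.py` and is driven by `cfg/dataset.cfg`:

```python
import random

def partition_filelist(train_list, n_poison=200, n_finetune=800, seed=0):
    """Illustrative split of a list of (image_path, label) pairs into
    poison-generation and finetune subsets; the ImageNet validation
    images are kept as-is for the val split."""
    random.seed(seed)
    shuffled = train_list[:]
    random.shuffle(shuffled)
    poison_gen = shuffled[:n_poison]
    finetune = shuffled[n_poison:n_poison + n_finetune]
    return poison_gen, finetune
```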
## Generate poison

```
python generate_poison.py cfg/experiment.cfg
```
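Conceptually, poison generation optimizes images that stay visually close to the target category while their features (e.g. the AlexNet fc7 features extracted by `alexnet_fc7out.py`) match those of source images patched with the trigger. The sketch below is a simplified, assumption-laden version of that inner loop (a plain summed MSE feature loss, pixel range [0, 1], illustrative names and hyperparameters), not the script's exact procedure:

```python
import torch

def generate_poison(model, target_imgs, patched_src,
                    eps=16 / 255, lr=0.01, iters=5000):
    """Sketch of hidden-trigger poison generation: perturb target-class
    images so their features match those of trigger-patched source
    images, while staying in an L_inf ball around the clean targets."""
    model.eval()
    feat_src = model(patched_src).detach()           # features of patched sources
    poison = target_imgs.clone().detach().requires_grad_(True)
    for _ in range(iters):
        loss = ((model(poison) - feat_src) ** 2).sum()
        loss.backward()
        with torch.no_grad():
            poison -= lr * poison.grad.sign()        # signed gradient step
            # project back into the eps-ball around the clean target images
            poison.copy_(torch.min(torch.max(poison, target_imgs - eps),
                                   target_imgs + eps))
            poison.clamp_(0.0, 1.0)                  # keep a valid pixel range
        poison.grad.zero_()
    return poison.detach()
```

Because the perturbation is bounded by `eps`, the poisoned images still look like (and are labeled as) the target class, yet sit near trigger-patched source images in feature space.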
## Finetune and test

```
python finetune_and_test.py cfg/experiment.cfg
```
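After finetuning on the poisoned (but correctly labeled) data, success is measured by pasting the secret trigger at random locations on held-out source-class images and checking how often they flip to the target class. A hedged sketch of that evaluation; `paste_trigger` and all other names here are illustrative, not the repo's API:

```python
import random
import torch

def paste_trigger(imgs, trigger):
    """Paste the trigger patch at a random location in each image
    (illustrative helper; the repo's patching code may differ)."""
    out = imgs.clone()
    _, _, h, w = imgs.shape
    th, tw = trigger.shape[-2:]
    for i in range(out.size(0)):
        y = random.randint(0, h - th)
        x = random.randint(0, w - tw)
        out[i, :, y:y + th, x:x + tw] = trigger
    return out

def attack_success_rate(model, loader, trigger, target_class, device="cpu"):
    """Fraction of source-class validation images classified as the
    target class once the trigger is pasted at a random location."""
    model.eval()
    hits, total = 0, 0
    with torch.no_grad():
        for imgs, _ in loader:                       # source-class images only
            imgs = paste_trigger(imgs, trigger).to(device)
            preds = model(imgs).argmax(dim=1)
            hits += (preds == target_class).sum().item()
            total += imgs.size(0)
    return hits / total
```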
## Citation

If you find our paper or code useful, please cite us using:

```
@article{saha2019hidden,
  title={Hidden Trigger Backdoor Attacks},
  author={Saha, Aniruddha and Subramanya, Akshayvarun and Pirsiavash, Hamed},
  journal={arXiv preprint arXiv:1910.00033},
  year={2019}
}
```
## Acknowledgments

This work was performed under financial assistance award 60NANB18D279 from the U.S. Department of Commerce, National Institute of Standards and Technology, with additional funding from SAP SE and NSF grant 1845216.