This is the implementation of "Feature Scattering Adversarial Training", a training method for improving model robustness against adversarial attacks. It advocates an unsupervised feature-scattering procedure for generating adversarial perturbations, which is effective for overcoming label leaking and improving model robustness.
More information can be found on the project page: https://sites.google.com/site/hczhang1/projects/feature_scattering
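For intuition, the sketch below illustrates the core idea under simplifying assumptions: perturbations are produced, without using labels, by maximizing an entropic-regularized optimal-transport (OT) distance between the features of the clean batch and the perturbed batch. This is an illustration only, not the repository's implementation (see ot.py and attack_methods.py); the function names, the use of logits as "features", and all hyperparameters are assumptions.

```python
# Illustrative sketch ONLY -- not the repository's code (see ot.py and
# attack_methods.py). Function names, the use of logits as "features",
# and all hyperparameters below are assumptions for exposition.
import torch
import torch.nn.functional as F

def sinkhorn_ot(cost, eps=0.1, n_iters=20):
    """Entropic-regularized OT distance between two uniform empirical
    distributions, given a pairwise cost matrix of shape (n, m)."""
    n, m = cost.shape
    mu = torch.full((n,), 1.0 / n, device=cost.device)
    nu = torch.full((m,), 1.0 / m, device=cost.device)
    K = torch.exp(-cost / eps)
    u = torch.ones_like(mu)
    for _ in range(n_iters):          # Sinkhorn fixed-point iterations
        v = nu / (K.t() @ u + 1e-8)
        u = mu / (K @ v + 1e-8)
    pi = torch.diag(u) @ K @ torch.diag(v)   # transport plan
    return (pi * cost).sum()

def feature_scatter_perturb(model, x, epsilon=8.0 / 255, step=8.0 / 255, n_steps=1):
    """Label-free perturbation: move x within an L-inf ball of radius epsilon
    so as to maximize the OT distance between clean and perturbed features."""
    with torch.no_grad():
        f_clean = model(x)            # features of the clean batch
    x_adv = (x + epsilon * torch.empty_like(x).uniform_(-1, 1)).clamp(0, 1).detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        f_adv = model(x_adv)
        # pairwise transport cost: 1 - cosine similarity between feature vectors
        cost = 1.0 - F.cosine_similarity(
            f_clean.unsqueeze(1), f_adv.unsqueeze(0), dim=2)
        loss = sinkhorn_ot(cost)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():         # ascent step + projection back into the ball
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()
```

Training then applies the standard cross-entropy loss to the perturbed batch, so labels are used for the classification loss but never for generating the perturbation, which is what avoids label leaking.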
The training environment (PyTorch and dependencies) can be installed as follows:

```bash
git clone https://github.com/Haichao-Zhang/FeatureScatter.git
cd FeatureScatter
python3 -m venv .venv
source .venv/bin/activate
python3 setup.py install   # or: pip install -e .
```
Tested under Python 3.5.2 and PyTorch 1.2.0.
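As an optional sanity check of the environment (nothing here is specific to this repository):

```python
import torch
print(torch.__version__)           # this codebase was tested with 1.2.0
print(torch.cuda.is_available())   # a CUDA GPU is strongly recommended for training
```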
Specify the path for saving the trained models in fs_train.sh, and then run:

```bash
sh ./fs_train.sh
```
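For orientation, below is a minimal, hypothetical sketch of one adversarial training step; the authoritative training loop is in fs_main.py, and `feature_scatter_perturb` refers to the illustrative function sketched earlier in this README.

```python
# Hypothetical single training step -- the actual loop lives in fs_main.py.
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

def train_step(model, optimizer, x, y):
    # Generate label-free feature-scattering perturbations (sketched above),
    # then train on the perturbed batch with the standard supervised loss.
    model.eval()                       # keep BatchNorm statistics fixed while attacking
    x_adv = feature_scatter_perturb(model, x)
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```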
Specify the path to the trained models to be evaluated in fs_eval.sh, and then run:

```bash
sh ./fs_eval.sh
```
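Likewise, robustness evaluation amounts to measuring accuracy under white-box attacks such as PGD. The snippet below is a hedged sketch of that procedure, not the repository's attack_methods.py or fs_eval.py code; the attack hyperparameters are common CIFAR10 defaults and are assumptions here.

```python
# Hedged sketch of a PGD robustness check -- not the repository's
# attack_methods.py / fs_eval.py code. Attack hyperparameters are
# common CIFAR10 defaults and are assumptions here.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8.0 / 255, step=2.0 / 255, n_steps=20):
    """L-inf PGD: iterated gradient-sign ascent on the cross-entropy loss,
    projected back into the epsilon-ball around x after every step."""
    x_adv = x.clone().detach()
    for _ in range(n_steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - epsilon), x + epsilon).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(model, loader, device="cuda"):
    """Accuracy of `model` on `loader` under the PGD attack above."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)   # needs gradients, so not under torch.no_grad
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```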
A reference model trained on CIFAR10 is here.
If you find this work useful, please cite the following:
```bibtex
@inproceedings{feature_scatter,
    author    = {Haichao Zhang and Jianyu Wang},
    title     = {Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training},
    booktitle = {Advances in Neural Information Processing Systems},
    year      = {2019}
}
```
For questions related to feature scattering, please send an email to hczhang1@gmail.com.