This repository contains the code for ARES (Adversarial Robustness Evaluation for Safety), a Python library for adversarial machine-learning research that focuses on benchmarking adversarial robustness for image classification correctly and comprehensively.
git clone https://github.com/thu-ml/ares.git
cd ares/pytorch_ares
pip install -r requirements.txt

The `requirements.txt` file lists the library's dependencies.
The `pytorch_ares/` project is organized as follows:

- `data/`: The code supports the cifar10 and imagenet datasets.
- `test/`: Some toy examples for testing adversarial attack methods and adversarial defense methods.
- `pytorch_ares/`
  - `dataset_torch/`: Data processing for the cifar10 and imagenet datasets.
  - `attack_torch/`: PyTorch implementations of some adversarial attack methods.
  - `cifar10_model/`: PyTorch implementations of some adversarial defense models on the cifar10 dataset.
  - `defense_torch/`: PyTorch implementations of some defense methods.
- `third_party/`: Other open-source repositories.
- `attack_benchmark/`: Adversarial robustness benchmarks for image classification.

Defense models covered by the benchmarks:

- TRADES: Theoretically Principled Trade-off between Robustness and Accuracy
- FS-AT: Defense Against Adversarial Attacks Using Feature Scattering-based Adversarial Training
- Pre-Training: Using Pre-Training Can Improve Model Robustness and Uncertainty
- AT-HE: Boosting Adversarial Training with Hypersphere Embedding
- Robust Overfitting: Overfitting in adversarially robust deep learning
- FastAT: Fast is better than free: Revisiting adversarial training
- AWP: Adversarial Weight Perturbation Helps Robust Generalization
- Label Smoothing: Bag of Tricks for Adversarial Training
ARES provides a command-line interface for running benchmarks. For example, to test the attack success rate of FGSM against ResNet-18 on the cifar10 dataset:
cd test/
python test_white_box_attack.py --attack_name fgsm --dataset_name cifar10
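For intuition, FGSM (the attack invoked above) perturbs each input one step in the direction of the sign of the loss gradient, within an L-infinity budget. A minimal numpy sketch, illustrative only and not the ARES implementation; the toy linear model and all names here are assumptions:

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast Gradient Sign Method: one signed-gradient step of size eps.

    x    -- input sample (numpy array, values in [0, 1])
    grad -- gradient of the loss w.r.t. x
    eps  -- L-infinity perturbation budget
    """
    x_adv = x + eps * np.sign(grad)
    # Keep the adversarial example inside the valid pixel range.
    return np.clip(x_adv, 0.0, 1.0)

# Toy setup (hypothetical, for illustration): a linear score w.x with
# loss L = -w.x for the true class, so dL/dx = -w.
w = np.array([0.5, -0.25, 1.0])
x = np.array([0.2, 0.8, 0.5])
grad = -w
x_adv = fgsm(x, grad, eps=0.1)
```

In the real library the gradient would come from backpropagation through the target network rather than a closed form.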
There are four `run_*.py` scripts in the `attack_benchmark` folder that evaluate the adversarial robustness benchmarks on the cifar10 and imagenet datasets. For example, to evaluate the robustness of a defense model on the cifar10 dataset, run:
cd attack_benchmark/
python run_cifar10_defense.py
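Conceptually, such a benchmark tallies how often an attack flips predictions that were correct on clean inputs. A sketch of that bookkeeping, assuming nothing about the actual `run_*.py` internals (the function name and example data are hypothetical):

```python
import numpy as np

def attack_success_rate(clean_preds, adv_preds, labels):
    """Among samples classified correctly on clean inputs, return the
    fraction whose prediction becomes wrong under attack."""
    clean_preds = np.asarray(clean_preds)
    adv_preds = np.asarray(adv_preds)
    labels = np.asarray(labels)
    correct = clean_preds == labels          # correct on clean data
    flipped = correct & (adv_preds != labels)  # correct -> wrong under attack
    return flipped.sum() / max(correct.sum(), 1)

# Example: 4 samples, 3 correct on clean inputs, 2 of those flip.
rate = attack_success_rate([0, 1, 2, 1], [0, 2, 0, 1], [0, 1, 2, 2])
```

Robust accuracy, the complementary metric many benchmarks report, is simply the fraction of all samples still classified correctly on the adversarial inputs.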
Project homepage: https://ml.cs.tsinghua.edu.cn/aml/home