Repository contents:

- `DataProcess/`
- `Inference/`
- `Pic/`
- `RobustART/`
- `exprs/`
- `prototype/`
- `results/`
- `README.md`
- `Wind-Solution.zip`
Hi, we are team WindPassStreet, and this is our solution for Track 1 of the AISafety CVPR 2022 Challenge. We provide a simple and effective way to obtain a robust model that achieves high accuracy in the challenge.

In this repository we also provide our final submitted model weights for verification. Due to the randomness of the sampled attack, the score based on the final model weights can range from 88.12 to 89.08 (which looks like a large deviation). Perhaps through some bad luck, our final submission happened to land at a very low point, and we are now ranked 5th. So please set the ranking aside for a moment and focus on the solution itself. We also give some advice at the end for alleviating this randomness.

If you run into any problem, please feel free to contact us: create an issue or send an email to 984958836@qq.com. Ideas and discussion are also welcome.
Please refer to here for the details of the challenge; our solution focuses on Track 1 (Classification Task Defense).

This code is based on RobustART. The original RobustART targets DDP training; we modified the code so that it also supports standard (non-distributed) training, on both Windows and Linux.

During the competition we also benefited greatly from the free GPU resources provided by OpenI. To make our code easy to run, we share the environment image on OpenI; you can find it by searching for the keyword SWA.
Before running our code, please first convert the label format. You can use `label&Split.py` in the `DataProcess` directory for the conversion. The final format should look like the example `exampleForPhaseII.txt` in `DataProcess`.
In the challenge, we use a simple and straightforward strategy to resist the adversarial attack while keeping high accuracy on clean images: **DataAugmentation + SwinTransformer-Tiny + High Softmax Temperature**. The code is based on RobustART; the details are as follows:
- **Data augmentation.** The dataloader lives in `\prototype\prototype\data\custom_dataloader.py`, and we use the strategy `AUGMIXMORECUSTOMAUTOAUG`, which combines Augmix, some blur, some noise, some color change, and the `AUTOAUG` strategy.
- **SwinTransformer-Tiny** as our classifier.
- **High softmax temperature** in the softmax layer at inference time. This is a very simple and effective operation against the adversarial attack.

First, please refer to SwinTransformer to train a SwinTransformer-Tiny in the standard way, since the challenge forbids the use of pretrained models. Note that the training code for Swin-Transformer in RobustART seems unable to reach a very good result, so we advise training with the official implementation code (a boost of 1 or more points).
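The augmentation families named above (blur, noise, color change, mixed at random per image) can be illustrated with a minimal NumPy sketch. This is *not* the repository's `AUGMIXMORECUSTOMAUTOAUG` implementation; the operations and their parameters here are stand-ins chosen only to show the shape of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the augmentation families named above.
# These are NOT the repository's AUGMIXMORECUSTOMAUTOAUG code, just a
# minimal sketch of randomly picking one augmentation per image.

def box_blur(img):
    """3x3 box blur via edge padding and averaging (img: H x W x C in [0, 1])."""
    padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0

def gaussian_noise(img):
    """Additive Gaussian noise, clipped back to [0, 1]."""
    return np.clip(img + rng.normal(0.0, 0.05, img.shape), 0.0, 1.0)

def color_jitter(img):
    """Randomly rescale each channel (a crude color change)."""
    scale = rng.uniform(0.8, 1.2, size=(1, 1, img.shape[2]))
    return np.clip(img * scale, 0.0, 1.0)

def random_augment(img):
    """Apply one randomly chosen augmentation, AutoAugment-style."""
    ops = [box_blur, gaussian_noise, color_jitter]
    return ops[rng.integers(len(ops))](img)

img = rng.uniform(0.0, 1.0, (32, 32, 3))
aug = random_augment(img)
print(aug.shape)  # (32, 32, 3)
```

In practice such operations are composed inside the dataloader so every training batch sees a different random mix.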
Then, fine-tune from the model obtained in the first step; just run:

python prototype/prototype/solver/cls_solver.py

You can also change the settings according to your needs and interests in `exprs\nips_benchmark\pgd_adv_train\vit_base_patch16_224\customconfigSwin.yaml`.
You can find our inference code in the `Inference` directory, where we apply the high-softmax-temperature operation.
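The high-temperature softmax can be sketched as follows: dividing the logits by a temperature T > 1 flattens the output distribution while leaving the argmax (the clean-image prediction) unchanged. The temperature value below is illustrative only, not the value used in our submission.

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0):
    """Numerically stable softmax(logits / T)."""
    z = np.asarray(logits, dtype=np.float64) / T
    z = z - z.max(axis=-1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = np.array([4.0, 1.0, 0.0])

p_standard = softmax_with_temperature(logits, T=1.0)
p_high = softmax_with_temperature(logits, T=10.0)  # illustrative T, not our submitted value

# Higher temperature flattens the distribution but keeps the argmax,
# so clean-image top-1 predictions are unchanged.
print(p_standard.round(3))
print(p_high.round(3))
assert p_standard.argmax() == p_high.argmax()
```

The flatter output reduces the sharp probability peaks that gradient-based attacks exploit, at no cost to clean top-1 accuracy.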
We also provide our final result here for verification. Note that the white-box attack implemented by the organizer samples its parameters randomly, and the current final result comes from just a single test, so it may be biased. We advise attacking with 5 or more random starts for the robustness evaluation, which removes some of the random bias.
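Why multiple random starts stabilize the score can be seen with a small Monte Carlo sketch. The per-image attack-success probabilities below are made up for illustration (they are not measured from the challenge); the point is only that taking the worst case over several randomly-started attacks yields a lower but far less noisy robustness estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model of the evaluation noise (illustrative numbers, not measured
# from the challenge): each test image is either truly robust, or falls
# to a single randomly-initialized attack with some probability.
n_images = 1000
vulnerable = rng.random(n_images) < 0.5
success_prob = np.where(vulnerable, rng.uniform(0.5, 0.9, n_images), 0.0)

def robust_accuracy(n_restarts):
    """Score one evaluation run: an image counts as broken if ANY of the
    n_restarts randomly-started attacks succeeds (worst case over restarts)."""
    attacks = rng.random((n_restarts, n_images)) < success_prob
    return 1.0 - attacks.any(axis=0).mean()

scores_1 = np.array([robust_accuracy(1) for _ in range(200)])
scores_5 = np.array([robust_accuracy(5) for _ in range(200)])

# More restarts -> stronger attack (lower score) but much less
# run-to-run spread, i.e. a more reliable robustness estimate.
print(f"1 restart : mean={scores_1.mean():.3f}, std={scores_1.std():.4f}")
print(f"5 restarts: mean={scores_5.mean():.3f}, std={scores_5.std():.4f}")
```

This is exactly the effect behind the 88.12~89.08 spread we observed: a single stochastic attack run lands anywhere in that band, while the worst case over 5+ restarts is close to deterministic.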