This repository contains the essential code for the ICML 2019 paper *Using Pre-Training Can Improve Model Robustness and Uncertainty*.
Requires Python 3+ and PyTorch 0.4.1+.
Kaiming He et al. (2018) have called into question the utility of pre-training by showing that training from scratch can often yield similar performance, provided the model trains long enough. We show that although pre-training may not improve performance on traditional classification metrics, it does provide large benefits to model robustness and uncertainty. With pre-training, we show approximately a 30% relative improvement in label noise robustness and a 10% absolute improvement in adversarial robustness on CIFAR-10 and CIFAR-100. Pre-training also improves model calibration. In some cases, using pre-training without task-specific methods surpasses the state-of-the-art, highlighting the importance of using pre-training when evaluating future methods on robustness and uncertainty tasks.
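The recipe the paper studies can be sketched in PyTorch as pre-training a backbone on a large source task, then reusing those weights when fine-tuning a new classification head on the target task. The snippet below is a minimal illustration only, not the authors' training code: it uses a toy two-layer network and random tensors in place of Downsampled ImageNet and CIFAR data, and the model, head names, and step counts are all placeholders.

```python
# Hedged sketch of pre-train-then-fine-tune: a toy backbone is first
# trained on a fake "source" task, then its weights are reused while a
# new head is fitted on a fake "target" task (random data throughout).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a network pre-trained on a large source dataset.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
source_head = nn.Linear(64, 1000)  # e.g. 1000 ImageNet classes
target_head = nn.Linear(64, 10)    # e.g. 10 CIFAR-10 classes

def train(model, head, num_classes, steps=5):
    params = list(model.parameters()) + list(head.parameters())
    opt = torch.optim.SGD(params, lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x = torch.randn(8, 3, 32, 32)            # fake image batch
        y = torch.randint(0, num_classes, (8,))  # fake labels
        loss = loss_fn(head(model(x)), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

train(backbone, source_head, 1000)              # "pre-training" phase
final_loss = train(backbone, target_head, 10)   # fine-tuning reuses backbone weights
```

In practice one would load actual pre-trained weights (e.g. via `torch.load` or `torchvision.models`) instead of the toy source phase, and fine-tune on the real target dataset.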
If you find this useful in your research, please consider citing:
    @article{hendrycks2019pretraining,
      title={Using Pre-Training Can Improve Model Robustness and Uncertainty},
      author={Hendrycks, Dan and Lee, Kimin and Mazeika, Mantas},
      journal={Proceedings of the International Conference on Machine Learning},
      year={2019}
    }