A Closer Look at Accuracy vs. Robustness

Yang, Yao-Yuan; Rashtchian, Cyrus; Zhang, Hongyang; Salakhutdinov, Ruslan; Chaudhuri, Kamalika
Preprint, Open Access, English
  • Published: 05 Mar 2020
Abstract
Current methods for training robust networks lead to a drop in test accuracy, which has led prior works to posit that a robustness-accuracy tradeoff may be inevitable in deep learning. We take a closer look at this phenomenon and first show that real image datasets are actually separated. With this property in mind, we then prove that robustness and accuracy should both be achievable for benchmark datasets through locally Lipschitz functions, and hence, there should be no inherent tradeoff between robustness and accuracy. Through extensive experiments with robustness methods, we argue that the gap between theory and practice arises from two limitations of current methods: either they fail to impose local Lipschitzness or they are insufficiently generalized. We explore combining dropout with robust training methods and obtain better generalization. We conclude that achieving robustness and accuracy in practice may require using methods that impose local Lipschitzness and augmenting them with deep learning generalization techniques.
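To make the separation claim concrete: a dataset is 2ε-separated under the ℓ∞ norm if every pair of differently labeled examples is more than 2ε apart, where ε is the adversary's perturbation budget; the paper's argument is that this property is what lets a locally Lipschitz classifier be accurate and robust at the same time. Below is a minimal sketch for checking the property empirically. It is not the authors' code; `load_training_data` is a hypothetical placeholder, and the brute-force loop is only practical on subsamples.

```python
import numpy as np

def min_interclass_distance(X, y):
    """Smallest l_inf distance between any two differently labeled examples.

    X: array of shape (n, ...) with pixel values scaled to [0, 1].
    y: integer label array of shape (n,).
    Brute force, O(n^2): subsample first on large datasets.
    """
    Xf = np.asarray(X, dtype=np.float64).reshape(len(X), -1)  # flatten images
    y = np.asarray(y)
    best = np.inf
    for i in range(len(Xf)):
        other = Xf[y != y[i]]                      # examples with a different label
        dists = np.abs(other - Xf[i]).max(axis=1)  # l_inf distance to each of them
        best = min(best, dists.min())
    return best

# Hypothetical usage (load_training_data is a placeholder, not a real API):
# X, y = load_training_data()
# eps = 8 / 255                    # a common l_inf perturbation budget
# print(min_interclass_distance(X, y) > 2 * eps)  # True => 2*eps-separated
```

This is the sense in which the abstract says real image datasets are "actually separated": if the minimum inter-class distance exceeds 2ε, accuracy and ε-robustness are not mutually exclusive in principle.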
Subjects
Free-text keywords: Computer Science - Machine Learning; Computer Science - Cryptography and Security; Statistics - Machine Learning
Funded by
NSF | SaTC: CORE: Frontier: Collaborative: End-to-End Trustworthiness of Machine-Learning Systems
  • Funder: National Science Foundation (NSF)
  • Project Code: 1804829
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Computer and Network Systems
NSF | CCF: CIF: Small: Interactive Learning from Noisy, Heterogeneous Feedback
  • Funder: National Science Foundation (NSF)
  • Project Code: 1719133
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Computing and Communication Foundations
NSF | AF: RI: Medium: Collaborative Research: Understanding and Improving Optimization in Deep and Recurrent Networks
  • Funder: National Science Foundation (NSF)
  • Project Code: 1763562
  • Funding stream: Directorate for Computer & Information Science & Engineering | Division of Information and Intelligent Systems