{"references": ["Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. 2017", "Wieland Brendel, Jonas Rauber, Matthias K\u00fcmmerer, Ivan Ustyuzhaninov, and Matthias Bethge. Accurate, reliable and fast robustness evaluation. Advances in Neural Information Processing Systems, 32, 2019.", "Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39\u201357. IEEE, 2017.", "Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574\u20132582, 2016.", "Ian Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In International Conference on Learning Representations, 2015.", "Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. In International Conference on Learning Representations, 2018."]}
The UG100 dataset contains the results of seven approximate \(L_\infty\) adversarial attacks, plus MIP, on the MNIST and CIFAR10 datasets. Specifically, it contains ~2.3k adversarial examples generated by the following attacks:

- Basic Iterative Method ("bim")
- Brendel & Bethge Attack ("brendel")
- Carlini & Wagner Attack ("carlini")
- DeepFool ("deepfool")
- Fast Gradient Sign Method ("fast_gradient")
- Projected Gradient Descent ("pgd")
- Uniform noise ("uniform")
- MIPVerify ("mip")

It also includes the adversarial distances found by all attacks, the bounds computed by MIP, and the MIP convergence times. Applications of this dataset include:

- studying how, when, and why adversarial attacks are close-to-optimal (see the sketch below);
- training classifiers that are robust to adversarial noise;
- benchmarking new adversarial attacks.

The companion code for this dataset is available here.
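As a point of reference for the attacks listed above, the Fast Gradient Sign Method ("fast_gradient") is the simplest: it takes a single step of size \(\epsilon\) in the signed-gradient direction of the loss. Below is a minimal PyTorch sketch of the standard FGSM update (Goodfellow et al., 2015); it assumes inputs scaled to [0, 1] and a cross-entropy loss, and is illustrative only, not the implementation used to generate UG100.

```python
# Illustrative FGSM sketch (not the exact code used to build UG100).
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Return an L-infinity adversarial example for inputs x with labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One step of size epsilon in the signed-gradient direction,
    # then clip back to the valid pixel range [0, 1].
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```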
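To illustrate the first application: since MIP provides provable lower bounds on the minimal adversarial distance, dividing an attack's distance by the corresponding MIP bound measures how close the attack is to optimal (a ratio of 1 means provably optimal). The array names and values below are hypothetical placeholders; the actual data layout is defined by the companion code.

```python
# Hypothetical sketch: measuring how close an attack is to optimal.
import numpy as np

# Placeholder values; in practice these would be read from the dataset.
attack_distances = np.array([0.112, 0.087, 0.153])  # L-inf distances found by an attack
mip_lower_bounds = np.array([0.105, 0.081, 0.149])  # provable lower bounds from MIP

# A ratio close to 1 means the attack found a (nearly) minimal perturbation.
optimality_ratio = attack_distances / mip_lower_bounds
print(optimality_ratio.round(3))
```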
Please cite this dataset as:

Samuele Marro and Michele Lombardi. Asymmetries in Adversarial Settings. 2022.

We acknowledge the CINECA award under the ISCRA initiative for the availability of high-performance computing resources and support. We also thank Rebecca Montanari and Andrea Borghesi for their advice and support.
Keywords: mnist, adversarial attack, cifar10, mip