Preprint, 2018

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

Bao, Ruying; Liang, Sihang; Wang, Qingcan
Open Access · English
Published: 20 May 2018
Abstract
Deep neural networks have been demonstrated to be vulnerable to adversarial attacks, where small perturbations intentionally added to the original inputs can fool the classifier. In this paper, we propose a defense method, Featurized Bidirectional Generative Adversarial Networks (FBGAN), to extract the semantic features of the input and filter out the non-semantic perturbation. FBGAN is pre-trained on the clean dataset in an unsupervised manner, adversarially learning a bidirectional mapping between the high-dimensional data space and the low-dimensional semantic space; mutual information is also applied to disentangle the semantically meaningful features. After the...
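The abstract describes FBGAN as a bidirectional GAN (in the spirit of BiGAN/ALI) with an InfoGAN-style mutual-information term, used to map an input into a low-dimensional semantic code and reconstruct a purified version before classification. The following PyTorch sketch illustrates that idea under stated assumptions: the MLP architectures, latent dimensions, loss weight, and training details are illustrative choices for a flattened MNIST-like input, not the paper's reported configuration.

```python
# Illustrative sketch only: a BiGAN/ALI-style encoder-generator-discriminator with an
# InfoGAN-style mutual-information term, used to "purify" inputs before classification.
# Network sizes, latent dimensions (Z_DIM, C_DIM) and lambda_mi are assumptions,
# not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

X_DIM, Z_DIM, C_DIM = 784, 62, 10  # data dim, continuous code dim, categorical code dim (assumed)

class Encoder(nn.Module):
    """x -> (z, categorical logits): the learned inference map into the semantic space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM, 512), nn.LeakyReLU(0.2),
                                 nn.Linear(512, Z_DIM + C_DIM))
    def forward(self, x):
        h = self.net(x)
        return h[:, :Z_DIM], h[:, Z_DIM:]

class Generator(nn.Module):
    """(z, c) -> x: maps semantic codes back to the data space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM + C_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, X_DIM), nn.Sigmoid())
    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class Discriminator(nn.Module):
    """Scores joint pairs (x, z, c), as in BiGAN/ALI."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(X_DIM + Z_DIM + C_DIM, 512), nn.LeakyReLU(0.2),
                                 nn.Linear(512, 1))
    def forward(self, x, z, c):
        return self.net(torch.cat([x, z, c], dim=1))

def training_step(E, G, D, x_real, opt_d, opt_eg, lambda_mi=1.0):
    bs = x_real.size(0)
    z = torch.randn(bs, Z_DIM)
    c = F.one_hot(torch.randint(0, C_DIM, (bs,)), C_DIM).float()

    # Discriminator update: (x, E(x)) is labeled "real", (G(z, c), z, c) is labeled "fake".
    with torch.no_grad():
        x_fake = G(z, c)
        z_hat, c_logits = E(x_real)
    d_real = D(x_real, z_hat, F.softmax(c_logits, dim=1))
    d_fake = D(x_fake, z, c)
    loss_d = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Encoder/generator update: fool D, plus a mutual-information term that makes the
    # categorical code fed to G recoverable by E (InfoGAN-style disentanglement).
    x_fake = G(z, c)
    z_hat, c_logits = E(x_real)
    d_real = D(x_real, z_hat, F.softmax(c_logits, dim=1))
    d_fake = D(x_fake, z, c)
    loss_adv = (F.binary_cross_entropy_with_logits(d_real, torch.zeros_like(d_real)) +
                F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake)))
    _, c_rec_logits = E(x_fake)
    loss_mi = F.cross_entropy(c_rec_logits, c.argmax(dim=1))
    loss_eg = loss_adv + lambda_mi * loss_mi
    opt_eg.zero_grad(); loss_eg.backward(); opt_eg.step()
    return loss_d.item(), loss_eg.item()

def defend(E, G, classifier, x):
    """Test-time defense: reconstruct the input from its inferred semantic code, then classify."""
    with torch.no_grad():
        z, c_logits = E(x)
        x_clean = G(z, F.softmax(c_logits, dim=1))
    return classifier(x_clean)
```

In this sketch, opt_eg jointly optimizes the encoder and generator, and the classifier only ever sees the reconstruction G(E(x)); the intent, per the abstract, is that a non-semantic adversarial perturbation is discarded when the input is projected onto the low-dimensional semantic code.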
Subjects
Keywords: Computer Science - Machine Learning, Computer Science - Cryptography and Security, Computer Science - Computer Vision and Pattern Recognition, Statistics - Machine Learning
References

Anish Athalye, Nicholas Carlini, and David Wagner (2018). “Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples”. In: arXiv preprint arXiv:1802.00420.

Jacob Buckman et al. (2018). “Thermometer encoding: One hot way to resist adversarial examples”. In: Submissions to International Conference on Learning Representations.

Xi Chen et al. (2016). “InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets”. In: Advances in Neural Information Processing Systems, pp. 2172-2180.

Jeff Donahue, Philipp Krähenbühl, and Trevor Darrell (2016). “Adversarial feature learning”. In: arXiv preprint arXiv:1605.09782.

Vincent Dumoulin et al. (2016). “Adversarially learned inference”. In: arXiv preprint arXiv:1606.00704.

Ian Goodfellow, Jonathon Shlens, and Christian Szegedy (2014a). “Explaining and harnessing adversarial examples”. In: arXiv preprint arXiv:1412.6572.

Ian Goodfellow et al. (2014b). “Generative adversarial nets”. In: Advances in Neural Information Processing Systems, pp. 2672-2680.

Andrew Ilyas et al. (2017). “The Robust Manifold Defense: Adversarial Training using Generative Models”. In: arXiv preprint arXiv:1712.09196.

Abhishek Kumar, Prasanna Sattigeri, and Tom Fletcher (2017). “Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference”. In: Advances in Neural Information Processing Systems, pp. 5540-5550.

Yann LeCun et al. (1998). “Gradient-based learning applied to document recognition”. In: Proceedings of the IEEE 86.11, pp. 2278-2324.

Aleksander Madry et al. (2017). “Towards deep learning models resistant to adversarial attacks”. In: arXiv preprint arXiv:1706.06083.

Yuval Netzer et al. (2011). “Reading digits in natural images with unsupervised feature learning”. In: NIPS Workshop on Deep Learning and Unsupervised Feature Learning. Vol. 2011, No. 2, p. 5.

Nicolas Papernot et al. (2016). “cleverhans v2.0.0: an adversarial machine learning library”. In: arXiv preprint arXiv:1610.00768.

Tim Salimans et al. (2016). “Improved techniques for training GANs”. In: Advances in Neural Information Processing Systems, pp. 2234-2242.

Pouya Samangouei, Maya Kabkab, and Rama Chellappa (2018). “Defense-GAN: Protecting classifiers against adversarial attacks using generative models”. In: International Conference on Learning Representations.
