
arXiv: 2010.07230
Capsule networks are a type of neural network that uses the spatial relationships between features to classify images. By capturing the poses of features and their relative positions, such networks are better able to recognize affine transformations and surpass traditional convolutional neural networks (CNNs) in handling translation, rotation, and scaling. The stacked capsule autoencoder (SCAE) is a state-of-the-art capsule network that encodes an image into capsules, each of which contains the poses of features and their correlations. The encoded contents are then fed into a downstream classifier to predict the image category. Existing research has mainly focused on the security of capsule networks with dynamic routing or expectation-maximization (EM) routing, while little attention has been given to the security and robustness of SCAEs. In this paper, we propose an evasion attack against SCAEs. A perturbation is generated from the output of the model's object capsules and added to an image so as to reduce the contribution of the object capsules associated with the image's original category, causing the perturbed image to be misclassified. We evaluate the attack in image classification experiments on the Modified National Institute of Standards and Technology (MNIST), Fashion-MNIST, and German Traffic Sign Recognition Benchmark (GTSRB) datasets, where the average attack success rate reaches 98.6%. The experimental results indicate that the attack achieves both high success rates and stealthiness. This finding confirms that the SCAE has a security vulnerability that allows for the generation of adversarial samples. Our work seeks to highlight the threat of this attack and focus attention on SCAE's security.
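The core idea above, perturbing an input to suppress the object capsules tied to its true class, can be illustrated with a minimal sketch. The toy "capsule presence" model below (a softmax over a linear map of the pixels) and all names in it are hypothetical stand-ins, not the paper's SCAE architecture or its exact optimization; the sketch only shows the gradient-descent-on-presence pattern with an L-infinity bound on the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an SCAE encoder: capsule "presence" probabilities are a
# softmax over a linear map of the flattened image (hypothetical model).
W = rng.normal(size=(10, 784)) * 0.01

def capsule_presence(x):
    """Presence probability of each of the 10 toy object capsules."""
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def evasion_attack(x, true_class, eps=0.3, step=0.05, iters=50):
    """Iteratively perturb x to reduce the true-class capsule's presence,
    keeping the perturbation inside an L-infinity ball of radius eps."""
    x_adv = x.copy()
    for _ in range(iters):
        p = capsule_presence(x_adv)
        # Closed-form gradient of the true-class presence w.r.t. the input
        # for softmax-of-linear: dp_t/dx = p_t * (W_t - p @ W)
        grad = p[true_class] * (W[true_class] - p @ W)
        x_adv -= step * np.sign(grad)             # descend on true-class presence
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay close to the original
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

x = rng.random(784)                               # random stand-in "image"
c = int(np.argmax(capsule_presence(x)))           # model's original prediction
x_adv = evasion_attack(x, c)
print(capsule_presence(x)[c], capsule_presence(x_adv)[c])
```

Because the step is bounded by `eps` per pixel, the adversarial image stays visually close to the original, which is what gives attacks of this kind their stealthiness.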
FOS: Computer and information sciences
Subjects: Machine Learning (cs.LG); Cryptography and Security (cs.CR)
Keywords: machine learning; adversarial perturbation; evasion attack; stacked capsule autoencoder
