
Deep learning models have achieved groundbreaking results in computer vision; however, they remain vulnerable to adversarial examples. Adversarial examples, generated by adding minute perturbations to images, cause misclassification and pose serious threats to real-world deployments of deep learning models. This paper proposes a simple, powerful, and efficient adversarial defense: a Siamese network-based denoising autoencoder (Siamese-DAE). The method addresses the drop in classification accuracy that the denoising process itself can introduce. Experiments on Chest X-ray, Brain MRI, Retina, and Skin images, under the FGSM, PGD, DeepFool, CW, SPSA, and AutoAttack adversarial attacks, show that the Siamese-DAE, trained to remove noise, effectively eliminates perturbations and improves classification accuracy over both the standard classification model and existing denoising defense models.
Keywords: Adversarial examples, denoising autoencoder, deep learning, Siamese network
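The abstract gives no implementation details, so the following is a minimal, hypothetical PyTorch sketch of the general idea it describes: a denoising autoencoder with Siamese weight sharing is fed a clean image and an adversarially perturbed copy, and is trained so that both reconstructions match the clean input. Every name and design choice here (SiameseDAE, train_step, lam, eps, the layer sizes, the use of FGSM as the training-time attack) is an illustrative assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseDAE(nn.Module):
    """Convolutional denoising autoencoder. The same weights (Siamese
    sharing) process both the clean and the adversarial branch."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fgsm(classifier, x, y, eps=0.03):
    """One-step FGSM perturbation against a frozen classifier."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(classifier(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_step(dae, classifier, x_clean, y, opt, lam=0.5):
    """One training step: reconstruction terms pull both branches toward
    the clean image; the similarity term ties the two branches together."""
    x_adv = fgsm(classifier, x_clean, y)       # perturbed Siamese input
    r_clean, r_adv = dae(x_clean), dae(x_adv)  # shared-weight forward passes
    loss = (F.mse_loss(r_clean, x_clean)
            + F.mse_loss(r_adv, x_clean)
            + lam * F.mse_loss(r_adv, r_clean))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time, under these same assumptions, the trained autoencoder is simply prepended to the classifier, i.e. `logits = classifier(dae(x))`, so perturbations are stripped before classification without retraining the classifier itself.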
