
Although neural networks are approaching human-level performance on many tasks, they remain susceptible to adversarial attacks: small, intentionally crafted perturbations that can cause misclassification. The most effective defense to date is adversarial training (AT), which improves a model's robustness by augmenting the training data with adversarial examples. However, AT typically reduces accuracy on clean samples and can overfit to a specific attack, limiting its ability to generalize to new attacks. In this paper, we investigate the use of domain adaptation to enhance AT's performance. We propose a novel multiple adversarial domain adaptation (MADA) method, which treats the problem as a domain adaptation task to discover robust features. Specifically, we use adversarial learning to learn features that are domain-invariant across multiple adversarial domains and the clean domain. We evaluated MADA on the MNIST and CIFAR-10 datasets with multiple adversarial attacks during training and testing. Our experiments show that MADA outperforms AT by about 4% on average on adversarial samples and by about 1% on average on clean samples.
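The abstract describes combining adversarial training with domain-adversarial learning so that features become indistinguishable between the clean domain and adversarial domains. The sketch below illustrates one plausible realization of that idea, assuming a PyTorch setup with a gradient-reversal domain discriminator and a single FGSM attack standing in for the multiple adversarial domains; the architecture, loss weighting, and attack are illustrative assumptions, not the paper's exact MADA configuration.

```python
# Hedged sketch: adversarial training plus domain-adversarial feature alignment
# between clean and adversarial examples. Architecture, hyperparameters, and the
# single FGSM attack are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) the gradient on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class MADALikeNet(nn.Module):
    def __init__(self, num_classes=10, num_domains=2):
        super().__init__()
        # Feature extractor sized for 28x28 grayscale inputs such as MNIST.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
        )
        self.label_head = nn.Linear(64 * 7 * 7, num_classes)
        # Domain head: predicts clean vs. adversarial domain from the features.
        self.domain_head = nn.Linear(64 * 7 * 7, num_domains)

    def forward(self, x, lambd=1.0):
        z = self.features(x)
        class_logits = self.label_head(z)
        domain_logits = self.domain_head(GradReverse.apply(z, lambd))
        return class_logits, domain_logits


def fgsm(model, x, y, eps=0.3):
    """One-step FGSM attack used here as a stand-in adversarial domain."""
    x_adv = x.clone().detach().requires_grad_(True)
    logits, _ = model(x_adv)
    loss = F.cross_entropy(logits, y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def train_step(model, opt, x, y, lambd=0.1):
    """Joint objective: classify clean and adversarial samples correctly while the
    reversed domain loss pushes their features to be domain-invariant."""
    x_adv = fgsm(model, x, y)
    d_clean = torch.zeros(x.size(0), dtype=torch.long, device=x.device)
    d_adv = torch.ones(x.size(0), dtype=torch.long, device=x.device)

    logits_c, dom_c = model(x, lambd)
    logits_a, dom_a = model(x_adv, lambd)
    cls_loss = F.cross_entropy(logits_c, y) + F.cross_entropy(logits_a, y)
    dom_loss = F.cross_entropy(dom_c, d_clean) + F.cross_entropy(dom_a, d_adv)

    opt.zero_grad()
    (cls_loss + dom_loss).backward()
    opt.step()
    return cls_loss.item(), dom_loss.item()
```

In this reading, each attack used during training would contribute its own adversarial domain label; the two-domain version above is the minimal case.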
Keywords: Artificial neural networks, Artificial intelligence, Machine learning, Deep neural networks, Adversarial examples, Adversarial robustness, Robustness, Overfitting, Domain adaptation, MNIST database, Defenses, Classifier, Computer science
| Indicator | Description | Value |
|---|---|---|
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator, which reflects the overall/total impact of an article based on the underlying citation network (diachronically). | 3 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Average |
