
Handle: 11365/1284834, 11577/3506414
Over the past decades, the rise of artificial intelligence has given us the capability to solve some of the most challenging problems in our day-to-day lives, such as cancer prediction and autonomous navigation. However, these applications may not be reliable unless they are secured against adversarial attacks. In addition, recent work has demonstrated that some adversarial examples are transferable across different models. It is therefore crucial to prevent such transferability with robust models that resist adversarial manipulation. In this paper, we propose a feature-randomization-based approach that resists eight adversarial attacks targeting deep learning models at test time. Our approach changes the training strategy of the target network classifier and selects random feature samples. We consider attackers under Limited-Knowledge and Semi-Knowledge conditions to cover the most prevalent types of adversarial attacks. We evaluate the robustness of our approach on the well-known UNSW-NB15 dataset, which includes realistic and synthetic attacks. We then demonstrate that our strategy outperforms existing state-of-the-art approaches such as the Most Powerful Attack, which fine-tunes the network model against specific adversarial attacks. Finally, our experimental results show that our methodology can secure the target network and reduces the transferability of adversarial attacks by over 60%.
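The abstract describes selecting random feature samples when training the target classifier. The sketch below illustrates the general idea of feature randomization, i.e. training on a randomly chosen subset of feature columns; the function name, `keep_ratio` parameter, and selection strategy are assumptions for illustration, not the authors' exact method.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def randomize_features(X, keep_ratio=0.5, rng=rng):
    """Return a copy of X restricted to a random subset of feature columns.

    Illustrative sketch of feature randomization: a fraction `keep_ratio`
    of the columns is sampled uniformly without replacement, so an
    attacker cannot know in advance which features the model will use.
    """
    n_features = X.shape[1]
    n_keep = max(1, int(n_features * keep_ratio))
    cols = np.sort(rng.choice(n_features, size=n_keep, replace=False))
    return X[:, cols], cols

# Example: a toy batch of 4 samples with 10 features each
X = rng.normal(size=(4, 10))
X_sub, cols = randomize_features(X, keep_ratio=0.5)
```

In a training loop, a fresh column subset would typically be drawn per round, so the resulting model does not depend on any fixed feature set that an adversary could target.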
Keywords: Networking and Internet Architecture (cs.NI); Machine Learning (cs.LG); Cryptography and Security (cs.CR); Computers and Society (cs.CY); FOS: Computer and information sciences; adversarial attacks; adversarial learning; adversarial machine learning; convolutional neural network; cybersecurity; machine and deep learning; network security; SDG 3 - Good Health and Well-being
| Indicator | Description | Value |
| --- | --- | --- |
| Selected citations | Citations derived from selected sources; an alternative to the "Influence" indicator. | 7 |
| Popularity | Reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network. | Top 10% |
| Influence | Reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically). | Average |
| Impulse | Reflects the initial momentum of an article directly after its publication, based on the underlying citation network. | Top 10% |
