Publication · Preprint · Conference object · 2018

Adversarial Deep Learning for Robust Detection of Binary Encoded Malware

Abdullah Al-Dujaili; Alex Huang; Erik Hemberg; Una-May O'Reilly
Open Access English
  • Published: 09 Jan 2018
Abstract
Malware is constantly adapting in order to avoid detection. Model-based malware detectors, such as SVMs and neural networks, are vulnerable to so-called adversarial examples: modest changes to detectable malware that allow the resulting malware to evade detection. Continuous-valued methods that are robust to adversarial examples of images have been developed using saddle-point optimization formulations. We are inspired by them to develop similar methods for the discrete, e.g., binary, domain that characterizes the features of malware. A specific extra challenge of malware is that the adversarial examples must be generated in a way that preserves their malicious functionality.
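
The saddle-point formulation the abstract alludes to presumably takes the standard robust-optimization form used for images by Madry et al. (arXiv:1706.06083): an outer minimization over detector parameters \theta wrapped around an inner maximization that searches the set \mathcal{S}(x) of functionality-preserving perturbations of a malicious feature vector x for the worst-case loss,

    \min_{\theta}\; \mathbb{E}_{(x,y)\sim\mathcal{D}} \Big[ \max_{\bar{x}\,\in\,\mathcal{S}(x)} L(\theta, \bar{x}, y) \Big].

For binary indicator features (e.g., "this PE file imports API X"), one natural functionality-preserving constraint, assumed here for illustration, is that bits may only be flipped from 0 to 1: adding an unused import leaves the program's behavior intact, while removing a used one may break it. Below is a minimal PyTorch sketch of how the two loops fit together; the toy detector, feature dimension, and greedy multi-step bit-flip maximizer are illustrative stand-ins, not the paper's actual architecture or inner maximizers.

    import torch
    import torch.nn as nn

    D = 1024  # assumed number of binary features (e.g., imported-API indicators)

    model = nn.Sequential(          # toy detector standing in for the paper's network
        nn.Linear(D, 256), nn.ReLU(),
        nn.Linear(256, 2),
    )
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    def inner_maximize(x, y, steps=4):
        # Greedy multi-step bit-flip attack: per step, turn ON the zero bits
        # whose loss gradient is positive (a first-order sign that adding the
        # feature raises the loss). Set bits are never cleared, so the
        # original malware's features -- and hence functionality -- survive.
        x_adv = x.clone()
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = loss_fn(model(x_adv), y)
            grad, = torch.autograd.grad(loss, x_adv)
            flips = (x_adv == 0) & (grad > 0)      # only 0 -> 1 flips allowed
            x_adv = torch.where(flips, torch.ones_like(x_adv), x_adv)
        return x_adv.detach()

    def adversarial_training_step(x_mal, y_mal, x_ben, y_ben):
        # Outer minimization: descend on the worst-case loss over perturbed
        # malware plus the ordinary loss on unperturbed benign samples.
        x = torch.cat([inner_maximize(x_mal, y_mal), x_ben])
        y = torch.cat([y_mal, y_ben])
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
        return loss.item()

    # Smoke test on random bit vectors (label 1 = malicious, 0 = benign).
    x_mal = (torch.rand(32, D) < 0.05).float()
    x_ben = (torch.rand(32, D) < 0.05).float()
    print(adversarial_training_step(x_mal, torch.ones(32, dtype=torch.long),
                                    x_ben, torch.zeros(32, dtype=torch.long)))

The whole constraint lives in inner_maximize: set bits are never cleared, so every adversarial sample retains all of the original malware's features, and only zero bits whose gradient indicates the loss would grow are switched on.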
Subjects
arXiv: Computer Science::Cryptography and Security
ACM Computing Classification System: Software_OPERATINGSYSTEMS
Free text keywords: Computer Science - Cryptography and Security, Computer Science - Learning, Statistics - Machine Learning, Adversarial system, Saddle, Binary number, Machine learning, Malware, Deep learning, Artificial neural network, Computer security, Artificial intelligence, Robustness (computer science), Support vector machine, Computer science