Publication · Preprint · 2018

BEBP: A Poisoning Method Against Machine Learning Based IDSs

Li, Pan; Liu, Qiang; Zhao, Wentao; Wang, Dongxu; Wang, Siqi
Open Access English
  • Published: 11 Mar 2018
Abstract
In the big data era, machine learning is one of the fundamental techniques used in intrusion detection systems (IDSs). However, practical IDSs generally update their decision module by periodically feeding in new data and retraining their learning models. Hence, attacks that compromise the data used to train or test classifiers significantly challenge the detection capability of machine learning-based IDSs. A poisoning attack, one of the most recognized security threats against machine learning-based IDSs, injects adversarial samples into the training phase, inducing drift in the training data and a significant performance decrease of the target IDS over tes...
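To make the threat model described above concrete, the sketch below shows a generic label-flip poisoning attack against a periodically retrained classifier. This is an illustrative assumption, not the paper's BEBP method: the synthetic data from scikit-learn's make_classification, the 20% poison budget, the Gaussian jitter, and the retrain helper are all hypothetical choices made for demonstration.

# Illustrative sketch only: a naive label-flip poisoning attack against a
# periodically retrained classifier, standing in for an ML-based IDS.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)

# Synthetic "traffic" features standing in for IDS data (0 = benign, 1 = attack).
X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)

def retrain(features, labels):
    # Periodic retraining step, as an IDS would run on newly collected data.
    clf = LogisticRegression(max_iter=1000)
    clf.fit(features, labels)
    return clf

# Baseline: model trained only on clean data.
clean_model = retrain(X_train, y_train)
print("clean accuracy:", accuracy_score(y_test, clean_model.predict(X_test)))

# Poisoning: the attacker injects adversarial samples into the next training
# batch, here crafted naively as jittered attack points relabelled as benign.
n_poison = int(0.2 * len(X_train))
attack_idx = rng.choice(np.where(y_train == 1)[0], size=n_poison, replace=True)
X_poison = X_train[attack_idx] + rng.normal(scale=0.1,
                                            size=(n_poison, X.shape[1]))
y_poison = np.zeros(n_poison, dtype=int)  # flipped labels

X_dirty = np.vstack([X_train, X_poison])
y_dirty = np.concatenate([y_train, y_poison])

poisoned_model = retrain(X_dirty, y_dirty)
print("poisoned accuracy:",
      accuracy_score(y_test, poisoned_model.predict(X_test)))

Comparing the two printed accuracies illustrates the kind of performance degradation over test data that the abstract describes; the periodic retraining step is exactly the surface such attacks target.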
Subjects
free text keywords: Statistics - Machine Learning, Computer Science - Learning, Computer Science - Cryptography and Security