BEBP: A Poisoning Method Against Machine Learning Based IDSs

Preprint (English)
Li, Pan ; Liu, Qiang ; Zhao, Wentao ; Wang, Dongxu ; Wang, Siqi (2018)
  • Subject: Statistics - Machine Learning | Computer Science - Learning | Computer Science - Cryptography and Security

In the big data era, machine learning is one of the fundamental techniques in intrusion detection systems (IDSs). However, practical IDSs generally update their decision modules periodically by feeding in new data and retraining their learning models. Hence, attacks that compromise the data used for training or testing classifiers significantly challenge the detection capability of machine learning-based IDSs. A poisoning attack, one of the most recognized security threats to machine learning-based IDSs, injects adversarial samples into the training phase, inducing drift in the training data and a significant performance decrease of the target IDS on testing data. In this paper, we adopt the Edge Pattern Detection (EPD) algorithm to design a novel poisoning method against several machine learning algorithms used in IDSs. Specifically, we propose a boundary pattern detection algorithm that efficiently generates points which are close to abnormal data yet classified as normal by current classifiers. We then introduce a Batch-EPD Boundary Pattern (BEBP) detection algorithm to overcome the limited number of edge pattern points generated by EPD and to obtain more useful adversarial samples. Based on BEBP, we further present a moderate but effective poisoning method called the chronic poisoning attack. Extensive experiments on a synthetic data set and three real network data sets demonstrate the performance of the proposed poisoning method against several well-known machine learning algorithms and a practical intrusion detection method named FMIFS-LSSVM-IDS.
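To make the attack loop concrete, the sketch below illustrates the general idea the abstract describes: per retraining round, sample candidate points that lie close to known-abnormal data yet are still labeled normal by the current classifier, then inject a small batch of them (mislabeled as normal) into the training set. This is a minimal illustration under assumed parameters, not the authors' EPD/BEBP implementation; the Gaussian candidate sampler, batch size, and round count are all illustrative assumptions.

```python
# Hypothetical sketch of a chronic-poisoning loop against a periodically
# retrained classifier. NOT the paper's EPD/BEBP algorithm -- boundary
# candidates here come from a simple Gaussian sampler around abnormal points.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic two-class data: normal traffic around the origin, abnormal shifted.
X_norm = rng.normal(0.0, 1.0, size=(200, 2))
X_abn = rng.normal(3.0, 1.0, size=(200, 2))
X_train = np.vstack([X_norm, X_abn])
y_train = np.hstack([np.zeros(200), np.ones(200)])  # 0 = normal, 1 = abnormal

def boundary_candidates(clf, X_abnormal, n_cand=500, sigma=0.8):
    """Sample points near abnormal data and keep those the current classifier
    still labels as normal -- a crude stand-in for boundary pattern points."""
    base = X_abnormal[rng.integers(0, len(X_abnormal), size=n_cand)]
    cand = base + rng.normal(0.0, sigma, size=base.shape)
    return cand[clf.predict(cand) == 0]

clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
for round_ in range(10):                      # periodic retraining rounds
    poison = boundary_candidates(clf, X_abn)
    if len(poison) == 0:
        break
    batch = poison[:20]                       # moderate batch per round
    X_train = np.vstack([X_train, batch])     # injected with label "normal"
    y_train = np.hstack([y_train, np.zeros(len(batch))])
    clf = SVC(kernel="rbf", gamma="scale").fit(X_train, y_train)
    recall = clf.predict(X_abn).mean()        # fraction of attacks still caught
    print(f"round {round_}: detection rate on abnormal data = {recall:.2f}")
```

Because each batch only nudges the boundary, the per-round drift stays small, which is why this "chronic" style of poisoning is harder to notice than a single large injection.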