M. A. Ambusaidi, X. He, P. Nanda, and Z. Tan, “Building an intrusion detection system using a filter-based feature selection algorithm,” IEEE Trans. Comput., vol. 65, no. 10, pp. 2986-2998, 2016.
K. Kishimoto, H. Yamaki, and H. Takakura, “Improving performance of anomaly-based IDS by combining multiple classifiers,” in Proc. of the SAINT'11, 2011, pp. 366-371.
 B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. P. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia, Misleading Learners: Co-opting Your Spam Filter, ser. Machine Learning in Cyber Trust. Springer, Boston, MA, 2009.
 B. Biggio, K. Rieck, D. Ariu, C. Wressnegger, I. Corona, G. Giacinto, and F. Roli, “Poisoning behavioral malware clustering,” in Proc. of the AISec'14. New York, NY, USA: ACM, 2014, pp. 27-36.
W. Hu and Y. Tan, “Generating adversarial malware examples for black-box attacks based on GAN,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1702.05983
 M. Kloft and P. Laskov, “Online anomaly detection under adversarial impact,” in Proc. of the AISTATS'10, 2010, pp. 405-412.
B. I. Rubinstein, B. Nelson, L. Huang, A. D. Joseph, S.-h. Lau, S. Rao, N. Taft, and J. D. Tygar, “ANTIDOTE: Understanding and defending against poisoning of anomaly detectors,” in Proc. of the IMC'09. New York, NY, USA: ACM, 2009, pp. 1-14.
 M. Barreno, B. Nelson, R. Sears, A. D. Joseph, and J. D. Tygar, “Can machine learning be secure?” in Proc. of the ASIACCS'06. New York, NY, USA: ACM, 2006, pp. 16-25.
W. Xu, Y. Qi, and D. Evans, “Automatically evading classifiers: A case study on PDF malware classifiers,” in Proc. of the NDSS'16, 2016, pp. 1-15.
 N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against machine learning,” in Proc. of the ASIACCS'17. New York, NY, USA: ACM, 2017, pp. 506-519.
S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, “DeepFool: A simple and accurate method to fool deep neural networks,” in Proc. of the CVPR'16, 2016, pp. 2574-2582.
 B. Biggio, B. Nelson, and P. Laskov, “Poisoning attacks against support vector machines,” in Proc. of the ICML'12, 2012, pp. 1467-1474.
 C. Yang, Q. Wu, H. Li, and Y. Chen, “Generative poisoning attack method against neural networks,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1703.01340
 M. Zhao, B. An, W. Gao, and T. Zhang, “Efficient label contamination attacks against black-box learning models,” in Proc. of the IJCAI'17, 2017, pp. 3945-3951.
I. Rosenberg, A. Shabtai, L. Rokach, and Y. Elovici, “Generic black-box end-to-end attack against RNNs and other API calls based malware classifiers,” arXiv.org, 2017. [Online]. Available: https://arxiv.org/abs/1707.05970