
The wide adoption of supervised-learning-based network intrusion detection systems (NIDS) is hindered by their reliance on labelled traffic, which is costly to obtain in real-world scenarios. In this work, five unsupervised domain adaptation (DA) methods are evaluated in an attempt to alleviate this problem. First, the severity of source-target divergence is quantified on two benchmark netflow datasets (CICIDS2018 and UNSW-NB15) using Maximum Mean Discrepancy, per-feature Wasserstein distances, and Kolmogorov–Smirnov tests. Next, a common neural-network backbone is trained on the source domain, and each DA method is applied in an attempt to bridge the source-target domain gap. Finally, SHAP is used to compare feature-importance patterns before and after adaptation, to assess the effect of DA on the model's decision logic. The results demonstrate that conventional DA methods fail to deliver robust cross-domain NIDS.
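As an illustration of the divergence-quantification step, the sketch below (not the authors' code) shows how the three measures named above could be computed between source and target feature matrices. The arrays X_src and X_tgt, the RBF bandwidth sigma, and the helper names are hypothetical placeholders standing in for preprocessed netflow features.

```python
import numpy as np
from scipy.stats import ks_2samp, wasserstein_distance


def rbf_mmd2(X, Y, sigma=1.0):
    """Squared Maximum Mean Discrepancy with an RBF kernel (biased estimator)."""
    def gram(A, B):
        sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()


def per_feature_divergence(X_src, X_tgt):
    """Per-feature Wasserstein distances and two-sample Kolmogorov-Smirnov tests."""
    results = []
    for j in range(X_src.shape[1]):
        w = wasserstein_distance(X_src[:, j], X_tgt[:, j])
        ks_stat, p_val = ks_2samp(X_src[:, j], X_tgt[:, j])
        results.append((j, w, ks_stat, p_val))
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_src = rng.normal(size=(500, 4))             # stand-in for source-domain features
    X_tgt = rng.normal(loc=0.5, size=(500, 4))    # stand-in for target-domain features
    print("MMD^2:", rbf_mmd2(X_src, X_tgt))
    for j, w, ks, p in per_feature_divergence(X_src, X_tgt):
        print(f"feature {j}: Wasserstein={w:.3f}, KS={ks:.3f}, p={p:.2e}")
```

Large MMD values, large per-feature Wasserstein distances, and rejected KS tests would all indicate a severe source-target shift of the kind the paper reports between the two benchmark datasets.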
