Abstract. Wide adoption of Artificial Intelligence (AI) in networking has been observed in recent years, aiming to provide zero-touch, fully autonomous services in next-generation Beyond 5G (B5G)/6G networks. However, AI-driven attacks on these services are a major concern for reaching the full potential of this vision. How resilient AI models are against attacks is an important aspect that should be carefully evaluated before adopting services that could affect the privacy and security of billions of people. Therefore, we evaluate resilience on a Machine Learning (ML)-based network traffic classification use case, with attacks launched during both the model training and testing stages. For this, we use multiple resilience metrics. Furthermore, we investigate a novel approach that uses Explainable AI (XAI) to detect attacks on network classification. Our experiments indicate that attacks can clearly affect model integrity, which is measurable with the metrics and detectable with XAI.
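The sketch below illustrates the kind of experiment the abstract describes: a classifier is trained on clean and on label-flipped (training-stage poisoned) traffic data, a simple resilience metric compares the two, and a shift in global feature attributions is used as an XAI-style tamper signal. Everything here is an assumption for illustration: the synthetic dataset, the 20% flip fraction, the accuracy-ratio metric, and the use of permutation importance as a stand-in for the paper's XAI technique are not taken from the paper.

```python
# Hedged sketch, not the authors' implementation: synthetic data stands in
# for network traffic, and the attack/metric/XAI choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a multi-class traffic classification dataset.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Clean baseline model.
clean = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc_clean = accuracy_score(y_te, clean.predict(X_te))

# Training-stage attack: flip the labels of 20% of the training samples.
flip_frac = 0.2
idx = rng.choice(len(y_tr), size=int(flip_frac * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[idx] = (y_poisoned[idx] + 1) % 3  # move each flipped label to another class

poisoned = RandomForestClassifier(random_state=0).fit(X_tr, y_poisoned)
acc_poisoned = accuracy_score(y_te, poisoned.predict(X_te))

# One simple resilience metric: accuracy retained under attack relative
# to the clean baseline (1.0 = fully resilient).
resilience = acc_poisoned / acc_clean
print(f"clean acc={acc_clean:.3f}  poisoned acc={acc_poisoned:.3f}  "
      f"resilience={resilience:.3f}")

# XAI-style detection: compare global feature attributions of the clean
# and poisoned models; permutation importance is a model-agnostic stand-in
# for whatever XAI method the paper actually uses.
imp_clean = permutation_importance(clean, X_te, y_te, n_repeats=10,
                                   random_state=0).importances_mean
imp_pois = permutation_importance(poisoned, X_te, y_te, n_repeats=10,
                                  random_state=0).importances_mean

# A large shift in attributions between the two models can flag tampering.
shift = np.abs(imp_clean - imp_pois).sum()
print(f"feature-attribution shift={shift:.4f}")
```

Under these assumptions, a poisoned model typically shows both a lower resilience ratio and a larger attribution shift than a model retrained on clean data, which is the intuition behind using XAI as a detection signal.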
Article-level indicators (based on the underlying citation network):
- citations: 3
- popularity (current attention of the article): Top 10%
- influence (overall/total impact, diachronically): Average
- impulse (initial momentum directly after publication): Average