
Cyber threats are becoming increasingly complex, causing traditional security systems to struggle to keep up and highlighting the need for advanced solutions. Large Language Models (LLMs), such as OpenAI's ChatGPT and Meta AI's LLaMA, have shown great potential to transform cybersecurity workflows through their abilities in natural language understanding, pattern recognition, and automated reasoning. These models are particularly promising for tasks like network monitoring, threat detection, and security alert triage. However, challenges related to the reliability of outputs, adversarial risks, and ethical concerns must be addressed. This paper presents a comprehensive survey of LLM-based approaches for security testing and evaluates three open-access LLMs (Mistral-7B, Qwen3-8B, and Llama3.1-8B), demonstrating their ability to enhance security alert analysis. Our findings suggest that LLMs can improve alert clarity and usability, making alerts more accessible to non-experts while providing valuable insights for developers.
