
This paper presents the Cybersecurity Psychology Framework (CPF), a novel taxonomy designed to categorize and quantify human-centric vulnerabilities in security operations. The core contribution of this work is twofold. First, we operationalize the CPF by mapping its subcategories to specific, measurable indicators derived from standard SOC tooling (e.g., Splunk, Elasticsearch, Qualys) and communication platforms (e.g., Slack, Teams), formalizing these measures through algorithmic definitions. Second, we propose and detail a lightweight, efficient architecture for a Large Language Model (LLM) that leverages Retrieval-Augmented Generation (RAG) and targeted fine-tuning on a compact, domain-specific corpus. This architecture is designed to analyze the structured and unstructured data defined by the CPF algorithms to identify latent psychological risks. We argue that this approach makes sophisticated behavioral analysis computationally feasible and accessible, moving beyond a theoretical taxonomy to provide a practical tool for proactive risk mitigation. The paper concludes with a methodology for validating the framework and its LLM component in a real-world environment.
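The pipeline the abstract describes, deriving measurable CPF indicators from SOC telemetry and retrieving relevant corpus context for an LLM, can be illustrated with a minimal sketch. All names here are hypothetical: the `alert_dismissal_rate` indicator (a proxy for an "alert fatigue" subcategory), its threshold, and the keyword-overlap retrieval standing in for the RAG component are assumptions for illustration, not definitions from the paper.

```python
from collections import Counter

# Hypothetical CPF indicator: per-analyst alert-dismissal rate, a proxy for
# an "alert fatigue" subcategory (name and threshold are illustrative only).
def alert_dismissal_rate(events):
    """events: list of dicts like {"analyst": "a1", "action": "dismissed" | "investigated"}."""
    totals, dismissed = Counter(), Counter()
    for e in events:
        totals[e["analyst"]] += 1
        if e["action"] == "dismissed":
            dismissed[e["analyst"]] += 1
    return {a: dismissed[a] / totals[a] for a in totals}

# Toy retrieval step: rank CPF corpus snippets by keyword overlap with the
# flagged indicator, standing in for the RAG component of the architecture.
def retrieve(query_terms, corpus, k=2):
    scored = sorted(corpus, key=lambda doc: -len(set(query_terms) & set(doc.split())))
    return scored[:k]

events = [
    {"analyst": "a1", "action": "dismissed"},
    {"analyst": "a1", "action": "dismissed"},
    {"analyst": "a1", "action": "investigated"},
    {"analyst": "a2", "action": "investigated"},
]
rates = alert_dismissal_rate(events)
flagged = [a for a, r in rates.items() if r > 0.5]  # illustrative threshold

corpus = [
    "alert fatigue habituation dismissal overload",
    "groupthink authority compliance",
]
context = retrieve(["alert", "fatigue", "dismissal"], corpus, k=1)
# flagged analysts plus retrieved snippets would form the LLM prompt context
print(flagged, context)
```

In a full implementation, the retrieved snippets and indicator values would be assembled into a prompt for the fine-tuned LLM; the sketch stops at the retrieval boundary to avoid assuming a particular model API.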
