Data sources: ZENODO

Characterizing AI-Enabled Social Engineering Threats to U.S. National Security: A Mixed-Methods Analysis for Developing Defensive Policy and Technical Countermeasures

Authors: Pokorny, Laszlo

Abstract

The proliferation of advanced artificial intelligence (AI), particularly large language models (LLMs) and deepfake synthesis technologies, has fundamentally transformed the cybersecurity threat landscape, enabling malicious actors to automate and scale highly sophisticated social engineering attacks against U.S. national security interests. This mixed-methods dissertation systematically characterized AI-enabled social engineering threats and evaluated both technical countermeasures and policy preparedness. The quantitative phase employed machine learning classification on two large-scale phishing email datasets (N = 18,650 and N = 871,590) to distinguish AI-generated phishing from legitimate communications and identify predictive features. Three classification approaches were evaluated: Random Forest with 20 engineered linguistic and structural features, TF-IDF with Logistic Regression, and TF-IDF with Random Forest. Results demonstrated that the TF-IDF Logistic Regression model achieved the highest performance with 92% accuracy and an AUC-ROC of 0.9869, supporting H1 that text-based representations outperform engineered feature approaches. Contrary to expectations, H2 was not supported as structural features (e.g., exclamation count, special character ratio) demonstrated greater predictive importance than linguistic features, suggesting that AI-generated phishing may exhibit distinctive formatting patterns rather than semantic indicators. The qualitative phase analyzed NIST Cybersecurity Framework 2.0, NIST SP 800-53, and CISA guidance documents using thematic content analysis, identifying five critical policy gaps: absence of operational AI-specific detection frameworks, insufficient training standards for AI-enabled social engineering recognition, lack of deepfake voice vishing countermeasures, limited guidance on AI-generated content authentication, and inadequate cross-sector coordination mechanisms for emerging AI threats. 
Integration of findings revealed a significant disconnect between demonstrated technical detection capabilities and current policy frameworks, with existing guidance inadequately addressing the unique characteristics of AI-enabled attacks. Implications for cybersecurity practitioners, policymakers, and national security stakeholders include recommendations for enhanced detection architectures, updated training curricula incorporating AI threat awareness, and policy revisions addressing the rapidly evolving AI-enabled threat environment.
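The best-performing approach reported above, TF-IDF text representations fed to a Logistic Regression classifier, can be sketched in a few lines with scikit-learn. This is a minimal illustration only: the toy corpus, n-gram range, and hyperparameters below are assumptions, not the dissertation's actual configuration or data.

```python
# Hypothetical sketch of a TF-IDF + Logistic Regression phishing classifier,
# in the spirit of the approach described in the abstract. The corpus and
# hyperparameters here are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-in corpus (the study itself used N = 18,650 and N = 871,590 emails).
emails = [
    "Urgent!!! Verify your account now or it will be suspended!!!",
    "Please find attached the quarterly budget report for review.",
    "Click here immediately to claim your prize $$$",
    "Meeting moved to 3pm tomorrow; agenda unchanged.",
]
labels = [1, 0, 1, 0]  # 1 = phishing, 0 = legitimate

# Vectorize the raw text and fit the linear classifier in one pipeline.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(emails, labels)

preds = pipeline.predict(emails)
print(list(preds))
```

In practice the model would be evaluated on a held-out split with accuracy and AUC-ROC, as the abstract reports; fitting and predicting on the same four examples here only demonstrates the pipeline's shape.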
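The abstract's H2 finding, that structural features such as exclamation count and special character ratio outperformed linguistic features, can be illustrated with simple feature extractors. The exact feature definitions used in the dissertation are not given here, so the formulas below are assumptions.

```python
# Hypothetical implementations of two structural features the abstract reports
# as highly predictive. The precise definitions used in the study are assumed.

def exclamation_count(text: str) -> int:
    """Number of '!' characters in the message body."""
    return text.count("!")

def special_char_ratio(text: str) -> float:
    """Fraction of characters that are neither alphanumeric nor whitespace."""
    if not text:
        return 0.0
    special = sum(1 for c in text if not c.isalnum() and not c.isspace())
    return special / len(text)

msg = "Urgent!!! Verify your account now!!!"
print(exclamation_count(msg))            # 6
print(round(special_char_ratio(msg), 3))  # 0.167
```

Features like these would be computed per email and stacked into the 20-dimensional engineered feature vector consumed by the Random Forest baseline described above.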
