
Context: The widespread adoption of Artificial Intelligence (AI) and, more recently, Generative AI (GenAI) systems has intensified privacy and security concerns in organizational settings. Beyond technical vulnerabilities, risks increasingly emerge from socio-technical factors involving user behavior, organizational governance, regulatory uncertainty, and the evolving threat landscape enabled by AI-based attacks.
Goal: This study aims to investigate privacy and security risks associated with the use of AI and GenAI systems, examining how these risks are perceived by practitioners, which mitigation strategies are adopted in practice, and what challenges limit their effectiveness in real-world organizational contexts.
Method: We adopted a mixed-method research design combining a literature review with an empirical survey of 101 IT professionals working primarily in the Brazilian public sector. Quantitative data were analyzed using descriptive statistics to assess perceived risks, mitigation strategies, and challenges, while qualitative data from open-ended questions were analyzed using inductive coding to identify recurring themes and contextual factors.
Results: The results show that practitioners perceive privacy and security risks as highly critical: data leakage, legal non-compliance, lack of transparency, prompt injection attacks, and malicious misuse of AI-generated content were rated as important or very important by most respondents. Although mitigation strategies such as avoiding sensitive data, anonymization, organizational policies, training, and technical controls are widely adopted, their effectiveness is perceived as limited. Qualitative findings reveal that risks are strongly shaped by governance gaps, low organizational maturity, insufficient training, shadow AI usage, lack of vendor transparency, and increasing AI-enabled cyber threats.
Conclusions: The findings indicate a significant gap between the adoption of mitigation strategies and practitioners' confidence in their effectiveness. Privacy and security risks in AI and GenAI systems are inherently socio-technical and cannot be adequately addressed through technical controls alone. Effective risk management requires integrated approaches that combine technical safeguards with organizational governance, regulatory alignment, training, and cultural change to support trustworthy AI adoption.\end{abstract}
