ZENODO
Dataset · 2026
License: CC BY
Data sources: ZENODO

Privacy and Security Risks in AI and Generative AI Systems: An Empirical Study of Practitioner Perceptions and Mitigation Practices in Brazil

Abstract

Context: The widespread adoption of Artificial Intelligence (AI) and, more recently, Generative AI (GenAI) systems has intensified privacy and security concerns in organizational settings. Beyond technical vulnerabilities, risks increasingly emerge from socio-technical factors involving user behavior, organizational governance, regulatory uncertainty, and the evolving threat landscape enabled by AI-based attacks.

Goal: This study aims to investigate privacy and security risks associated with the use of AI and GenAI systems, examining how these risks are perceived by practitioners, which mitigation strategies are adopted in practice, and what challenges limit their effectiveness in real-world organizational contexts.

Method: We adopted a mixed-method research design combining a literature review with an empirical survey of 101 IT professionals working primarily in the Brazilian public sector. Quantitative data were analyzed using descriptive statistics to assess perceived risks, mitigation strategies, and challenges, while qualitative data from open-ended questions were analyzed using inductive coding to identify recurring themes and contextual factors.

Results: The results show that practitioners perceive privacy and security risks as highly critical, with data leakage, legal non-compliance, lack of transparency, prompt injection attacks, and malicious misuse of AI-generated content being rated as important or very important by most respondents. Although mitigation strategies such as avoiding sensitive data, anonymization, organizational policies, training, and technical controls are widely adopted, their effectiveness is perceived as limited. Qualitative findings reveal that risks are strongly shaped by governance gaps, low organizational maturity, insufficient training, shadow AI usage, lack of vendor transparency, and increasing AI-enabled cyber threats.

Conclusions: The findings indicate a significant gap between the adoption of mitigation strategies and practitioners' confidence in their effectiveness. Privacy and security risks in AI and GenAI systems are inherently socio-technical and cannot be adequately addressed through technical controls alone. Effective risk management requires integrated approaches that combine technical safeguards with organizational governance, regulatory alignment, training, and cultural change to support trustworthy AI adoption.
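One of the mitigation strategies the abstract reports as widely adopted is anonymizing or avoiding sensitive data before it reaches a GenAI system. As a minimal, hypothetical sketch of what such a safeguard can look like in practice (the patterns and example text below are illustrative assumptions, not part of the study), a pre-processing step might redact obvious identifiers such as email addresses and Brazilian CPF numbers from a prompt:

```python
import re

# Illustrative patterns only -- a real deployment would need a far more
# complete PII detector (names, addresses, phone numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CPF": re.compile(r"\b\d{3}\.\d{3}\.\d{3}-\d{2}\b"),  # format 000.000.000-00
}

def redact(text: str) -> str:
    """Replace matched personal data with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical prompt a practitioner might otherwise paste into a GenAI tool.
prompt = "Contact Maria at maria.silva@example.gov.br, CPF 123.456.789-09."
print(redact(prompt))
# Contact Maria at [EMAIL], CPF [CPF].
```

Note that the personal name still passes through unredacted, which illustrates the study's broader point: purely technical filters have limited effectiveness on their own and need to be paired with policies, training, and governance.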
