ZENODO · Other literature type · 2024 · License: CC BY · Data sources: ZENODO, Datacite (3 versions)
https://doi.org/10.2307/jj.351... · Part of book or chapter of book · 2024 · Peer-reviewed · Data source: Crossref
TRANSPARENCIA Y RESPONSABILIDAD EN EL USO DE LA INTELIGENCIA ARTIFICIAL PARA LA GESTIÓN MIGRATORIA

Authors: Banchio, Pablo


Abstract

TRANSPARENCY AND ACCOUNTABILITY IN THE USE OF ARTIFICIAL INTELLIGENCE FOR MIGRATION MANAGEMENT

The use of artificial intelligence (AI) in migration management and border control presents significant challenges and opportunities for the protection of human rights. The accuracy and reliability of algorithmic decisions depend heavily on the quality of the data used, which demands special attention to avoid bias and discrimination. The lack of transparency in algorithmic decision-making, known as the "black box barrier," hinders the understanding and oversight of these decisions and increases the risk of human rights violations.

Establishing effective accountability and responsibility mechanisms is essential to ensure that AI systems are used responsibly and ethically. This includes providing avenues of redress for individuals affected by algorithmic decisions and establishing clear regulatory frameworks for the development and deployment of AI systems in the migration domain.

Transparency in algorithmic decision-making is crucial to preventing abuse and discrimination. Measures should be implemented to ensure that algorithms are auditable and explainable, allowing public scrutiny and the identification of potential biases or errors.

At present, there is no comprehensive legal framework governing the use of AI in migration management. A robust legal framework is needed that establishes clear principles for the development, implementation, and use of AI systems in this domain, guaranteeing respect for the human rights and fundamental freedoms of migrants. European policymakers must take a proactive role in regulating the use of AI in migration management, adopting the legislative and regulatory measures necessary to guarantee the protection of human rights and promote fair and safe migration.

Keywords

Artificial intelligence, Human migrations, International protection of human rights, Human rights

Impact indicators (BIP!)

  • Selected citations: 0 — citations derived from selected sources. An alternative to the "Influence" indicator, which also reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Popularity: Average — reflects the "current" impact/attention (the "hype") of an article in the research community at large, based on the underlying citation network.
  • Influence: Average — reflects the overall/total impact of an article in the research community at large, based on the underlying citation network (diachronically).
  • Impulse: Average — reflects the initial momentum of an article directly after its publication, based on the underlying citation network.