
Eticas Research & Consulting (Spain)

15 Projects, page 1 of 3
  • Funder: European Commission Project Code: 101189689
    Overall Budget: 9,616,260 EUR
    Funder Contribution: 8,226,280 EUR

    The rapid development and adoption of Artificial Intelligence (AI) and Machine Learning (ML) technologies have brought significant opportunities and challenges. While AI has the potential to revolutionise industries and improve lives, there are growing concerns about privacy, security, fairness, transparency and environmental footprint. The Olympic motto "Faster, Higher, Stronger" also applies to recent impressive AI advances, but it is now time to update it to "Lighter, Clearer, Safer". We propose ACHILLES to build an efficient, compliant, and trustworthy AI ecosystem. At its core is an iterative development cycle, inspired by clinical trials, encompassing four modules: it begins with human-centric methodologies, followed by data-centric operations, model-centric strategies, and deployment-centric optimisations, then returns to human-centric approaches focused on explainability and model monitoring. This iterative cycle aims to enhance AI systems' performance, robustness and efficiency while ensuring they comply with legal requirements and the highest ethical standards. Another innovation is the development of an ML-driven Integrated Development Environment (IDE). The ACHILLES IDE will facilitate seamless integration between the cycle's modules, enabling users to develop efficient, compliant, and trustworthy AI solutions more effectively and responsibly. The project aims to significantly shape European AI development in line with the region's guidelines and values. Through innovative techniques and methodologies, built on the collaboration of a multidisciplinary team of 16 partners from 10 countries, ACHILLES will foster a strong AI ecosystem that respects privacy, security, and ethical principles across various sectors. By validating its results in real use cases (including healthcare, ID verification, content creation and pharmaceuticals), ACHILLES will showcase its practical applicability and potential for widespread adoption.

  • Funder: European Commission Project Code: 101187937
    Overall Budget: 2,999,900 EUR
    Funder Contribution: 2,999,900 EUR

    AIOLIA gives a robust 3-tier response to the complex challenges posed by the need to operationally interpret the EU AI Act and global AI regulation. (1) Recognizing the gap between ethical values and their practical application in engineering, AIOLIA pioneers a bottom-up approach to operationalizing AI ethics with regard to the human condition and behaviour. Following a selection of real-world use cases, AIOLIA translates high-level principles into actionable, contextual guidelines co-created by leading academic, policy, and ethics-aware industrial partners representing diverse professional and geographic European and international contexts. (2) AIOLIA's commitment to context-sensitivity is deepened by crafting modular, inclusive training materials following the ADDIE methodology, designed to cater to diverse learning needs. Hosted on The Embassy of Good Science, AIOLIA materials will range from lectures, videos, and mock reviews to such innovative formats as podcasts, TikToks, and a chatbot teaching AI ethics. (3) AIOLIA's outreach is amplified by encompassing 7 research ethics and integrity networks and 3 prominent computer science networks. This strategic alignment enables the project to effectively recruit training participants and disseminate human-centric ethics guidelines to a wide spectrum of stakeholders, from ethics experts to early-stage researchers and policymakers worldwide. Resolutely European, AIOLIA's vision extends beyond the EU, embracing global cooperation with leading universities and think tanks in China, South Korea, Japan, and Canada. Using the UNESCO platform, with its reach into Africa and South Asia, AIOLIA's guidelines will evolve into an analytic toolbox for key international AI dialogues and processes. This global perspective ensures that AIOLIA's impact is not only significant but also sustainable, contributing to fair scientific cooperation and providing concrete, culturally informed ethics instruments to shape the next generation of AI systems.

  • Funder: European Commission Project Code: 101070212
    Overall Budget: 3,341,640 EUR
    Funder Contribution: 3,341,640 EUR

    FINDHR is an interdisciplinary project that seeks to prevent, detect, and mitigate discrimination in AI. Our research will be contextualized within the technical, legal, and ethical problems of algorithmic hiring and the domain of human resources, but will also show how to manage discrimination risks in a broad class of applications involving human recommendation. Through a context-sensitive, interdisciplinary approach, we will develop new technologies to measure discrimination risks, to create fairness-aware rankings and interventions, and to provide multi-stakeholder actionable interpretability. We will produce new technical guidance to perform impact assessment and algorithmic auditing, a protocol for equality monitoring, and a guide for fairness-aware AI software development. We will also design and deliver specialized skills training for developers and auditors of AI systems. We ground our project in EU regulation and policy. As tackling discrimination risks in AI requires processing sensitive data, we will perform a targeted legal analysis of tensions between data protection regulation (including the GDPR) and anti-discrimination regulation in Europe. We will engage with underrepresented groups through multiple mechanisms including consultation with experts and participatory action research. In our research, technology, law, and ethics are interwoven. The consortium includes leaders in algorithmic fairness and explainability research (UPF, UVA, UNIPI, MPI-SP), pioneers in the auditing of digital services (AW, ETICAS), and two industry partners that are leaders in their respective markets (ADE, RAND), complemented by experts in technology regulation (RU) and cross-cultural digital ethics (EUR), as well as worker representatives (ETUC) and two NGOs dedicated to fighting discrimination against women (WIDE+) and vulnerable populations (PRAK). All outputs will be released as open access publications, open source software, open datasets, and open courseware.

  • Funder: European Commission Project Code: 101021607
    Overall Budget: 6,994,810 EUR
    Funder Contribution: 6,994,810 EUR

    To support the fight against radicalization and thus help prevent future terrorist attacks, the CounteR project will bring data from disparate sources into an analysis and early-alert platform for data mining and the prediction of critical areas (e.g. communities), aiming to be a frontline community-policing tool that looks at the community and its related risk factors rather than targeting and surveilling individuals. This is key to ensuring the privacy of citizens and the protection of their personal data, an issue of great concern to policymakers and LEAs alike, who must balance the important work they do with the need to protect innocent individuals. The system will combine state-of-the-art NLP technologies with expert knowledge of the psychology of radicalization processes to provide a complete solution for LEAs to understand the when, where and why of radicalization in the community, helping to combat propaganda, fundraising, recruitment and mobilization, networking, information sharing, planning/coordination, data manipulation and misinformation. Information gained from the system will also allow LEAs and other community stakeholders to implement prevention programs and employ counternarratives rather than relying solely on surveillance. The CounteR solution will cover a wide range of information sources, both dynamic (e.g. social media) and offline (e.g. open data sources), combined with world-renowned expertise in radicalization processes and their psychology. It will allow LEAs to take coordinated action in real time while preserving the privacy of citizens, as the system will target "hotspots" of radicalization rather than individuals. In addition, the CounteR solution will support information sharing between European LEAs and foster collaboration between diverse agencies by providing an open platform that prioritizes harmonized information formats.

  • Funder: European Commission Project Code: 101178061
    Overall Budget: 3,000,000 EUR
    Funder Contribution: 3,000,000 EUR

    TWIN4DEM brings together scholars of the social sciences and humanities, computational social sciences (CSS) and democracy stakeholders to jointly address one of the most pressing contemporary issues: what causes democracies to backslide? Combining various advanced CSS methods, TWIN4DEM prototypes the first-ever digital twins of four European democratic systems (Czechia, France, Hungary and the Netherlands). In doing so, TWIN4DEM delivers four major breakthroughs. First, TWIN4DEM develops a new agent-based conceptual model that identifies the causal pathways leading to executive aggrandisement (the excessive concentration of power in national executives) and the threat it poses to rule-of-law institutions. This will allow the systematic identification and testing of new hypotheses on the multidimensional causes of democratic backsliding. Second, TWIN4DEM releases new cross-cutting tools to process and aggregate textual and non-textual data more efficiently and in real time, in an open, FAIR and GDPR-compliant manner. TWIN4DEM tools will allow democracy researchers not only to process the abundance of data on political life more effectively but also to enhance the transparency and legitimacy of democratic decision-making. Third, TWIN4DEM simulates, together with national policymakers and civil society organizations, policy scenarios to prevent and react against democratic backsliding. This will enhance the effectiveness of interventions aimed at shielding rule-of-law institutions against external and internal threats; as a result, European democracies will be more resilient. Fourth, by formulating guidelines for scaling up the use of CSS in democracy research in a participatory, open and ethics-driven manner, TWIN4DEM paves the way for using such methods in a way that empowers citizens and reinvigorates the quality of democratic governance.


