Following on from the publication of its Feasibility Study in December 2020, the Council of Europe's Ad Hoc Committee on Artificial Intelligence (CAHAI) and its subgroups initiated efforts to formulate and draft its Possible elements of a legal framework on artificial intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law. This document was ultimately adopted by the CAHAI plenary in December 2021. To support this effort, The Alan Turing Institute undertook a programme of research that explored the governance processes and practical tools needed to operationalise the integration of human rights due diligence with the assurance of trustworthy AI innovation practices. The resulting output, Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems: A proposal, was completed and submitted to the Council of Europe in September 2021. It presents an end-to-end approach to the assurance of AI project lifecycles that integrates context-based risk analysis and appropriate stakeholder engagement with comprehensive impact assessment and transparent risk management, impact mitigation, and innovation assurance practices. Taken together, these interlocking processes constitute a Human Rights, Democracy and the Rule of Law Assurance Framework (HUDERAF). The HUDERAF combines the procedural requirements for principles-based human rights due diligence with the governance mechanisms needed to set up technical and socio-technical guardrails for responsible and trustworthy AI innovation practices. Its purpose is to provide an accessible and user-friendly set of mechanisms for facilitating compliance with a binding legal framework on artificial intelligence, based on the Council of Europe's standards on human rights, democracy, and the rule of law, and to ensure that AI innovation projects are carried out with appropriate levels of public accountability, transparency, and democratic governance.
AI assurance, multi-stakeholder engagement, AI ethics, stakeholder analysis, Artificial Intelligence, trustworthy AI, algorithmic impact assessment, human rights impact assessment, Council of Europe, human rights due diligence, AI governance