

Open Requirements Modelling for Compliance and Conformity of Trustworthy AI
The many initiatives on trustworthy AI have produced a confusing and multipolar landscape that organizations operating within fluid and complex international value chains must navigate in pursuing trustworthy AI. The EU’s proposed Draft AI Act will now shift the focus of such organizations toward the normative requirements for regulatory compliance. Understanding the degree to which standards compliance will deliver regulatory compliance for AI remains a complex challenge. This paper offers a simple and repeatable mechanism for extracting the terms and concepts relevant to normative statements in legal and standards texts and sharing them as open knowledge graphs. This representation is used to assess the adequacy of standards conformance as a route to regulatory compliance, and thereby provides a basis for identifying areas where further technical consensus development in trustworthy AI value chains will be required to achieve regulatory compliance.
Trustworthy AI, AI Act, regulation, standards, SC42, semantic web
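As a rough illustration of the kind of representation the abstract describes (normative statements from legal and standards texts captured as an open knowledge graph, then checked for coverage), the sketch below uses Python with rdflib. All vocabulary terms (ex:RegulatoryRequirement, ex:StandardClause, ex:concernsConcept) and the example statements are hypothetical placeholders, not the ontology or extraction mechanism used in the paper.

# Minimal sketch, under assumed placeholder vocabulary, of representing
# regulatory requirements and standard clauses in RDF and querying for gaps.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/trustworthy-ai#")

g = Graph()
g.bind("ex", EX)

# An illustrative normative requirement extracted from the draft AI Act.
g.add((EX.AIA_Art10_DataGovernance, RDF.type, EX.RegulatoryRequirement))
g.add((EX.AIA_Art10_DataGovernance, RDFS.label,
       Literal("Training data shall be subject to data governance practices")))
g.add((EX.AIA_Art10_DataGovernance, EX.concernsConcept, EX.DataGovernance))

# An illustrative standard clause asserted to address the same concept.
g.add((EX.Standard_Clause_X, RDF.type, EX.StandardClause))
g.add((EX.Standard_Clause_X, EX.concernsConcept, EX.DataGovernance))

# A second requirement with no matching clause, to illustrate a coverage gap.
g.add((EX.AIA_Art13_Transparency, RDF.type, EX.RegulatoryRequirement))
g.add((EX.AIA_Art13_Transparency, EX.concernsConcept, EX.Transparency))

# SPARQL query: regulatory requirements whose concepts are not addressed
# by any standard clause, i.e. candidate compliance gaps.
gap_query = """
SELECT ?req WHERE {
  ?req a ex:RegulatoryRequirement ;
       ex:concernsConcept ?concept .
  FILTER NOT EXISTS {
    ?clause a ex:StandardClause ;
            ex:concernsConcept ?concept .
  }
}
"""

for row in g.query(gap_query, initNs={"ex": EX}):
    print("Requirement not covered by the standard:", row.req)

Running the query reports the transparency requirement as uncovered, mirroring the gap-identification step the abstract outlines.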



- Funder: Science Foundation Ireland (SFI)
- Project Code: 13/RC/2106
- Funding stream: SFI Research Centres ; SFI Research Centres Programme | Phase 1
- Funder: European Commission (EC)
- Project Code: 813497
- Funding stream: H2020 | MSCA-ITN-ETN