Abstract
The question of how to design a fair and efficient governance framework that ensures responsible technology development and deployment concerns not only high-risk artificial intelligence systems; it also extends to everyday applications with a limited capacity to inflict harm. This article examines the European Union's approach to regulating these non-high-risk systems, focusing on the governance model established for them by the Artificial Intelligence Act. Based on a doctrinal legal reconstruction of the rules for codes of conduct, and in light of the European Union's stated goal of achieving a market-oriented balance between innovation, fundamental rights, and the public interest, we explore the topic from three perspectives: an analysis of specific regulatory components of the governance mechanism, a reflection on the ethics and trustworthiness implications of the EU's approach, and a case study of an NLP-based, language-simplifying artificial intelligence application for assistive purposes.