Abstract

The extensive and frequently severe impact of AI systems on society cannot be fully addressed by the human rights legal framework. Many issues involve community choices or individual autonomy, requiring a contextual analysis focused on societal and ethical values. The social and ethical consequences of AI represent a complementary dimension, alongside that of human rights, which must be properly investigated in AI assessment to capture the holistic dimension of the relationship between humans and machines. This assessment is more complicated than that of human rights, as it involves a variety of theoretical inputs on the underlying values, as well as a proliferation of guidelines. It therefore requires a contextualised and, as far as possible, participative analysis of the values of the community in which the AI solutions are expected to be implemented. Here experts play a crucial role in detecting, contextualising and evaluating AI solutions against existing ethical and social values. Ethics committees in scientific research, bioethics and clinical trials, as well as corporate AI ethics boards, can provide inputs for future AI expert committees within the HRESIA model. Based on the experience of these committees, the assessment cannot be entrusted entirely to experts, but should also include a participatory dimension, which is essential to an effective democratic decision-making process concerning AI.
