Abstract

This paper explores the role of transnational private regulation, such as certification, in addressing the human rights challenges posed by artificial intelligence (AI). It first examines the human rights challenges AI may pose. It then analyses to what extent and under which conditions transnational private regulation may be effective in addressing human rights risks in general, before turning to the role of transnational private regulation in connection with AI. It emerges that compliance with human rights should partly be embedded in AI itself and partly be shaped through external risk management when developing and deploying AI. Transnational private regulation may play a role in both: for example, it may indicate which aspects of human rights compliance should be embedded (and provide tools to realize this) and which aspects should be implemented through external risk management. In connection with the latter, it is observed that human rights due diligence, as incorporated in the United Nations Guiding Principles on Business and Human Rights (UNGP) or the OECD Guidelines for Multinational Enterprises, may provide guidance to support external risk management.
