Abstract
Efforts to set standards for artificial intelligence (AI) reveal a striking pattern: technical experts from geopolitical rivals, such as the United States and China, readily collaborate on technical AI standards within transnational standard-setting organizations, whereas governments are far less willing to collaborate on global ethical AI standards within international organizations. Whether competition or cooperation prevails can be explained by three variables: the actors that make up the membership of the standard-setting organization, the issues on which the organization's standard-setting efforts focus, and the "games" actors play when trying to set standards within a particular type of organization. A preliminary empirical analysis supports the contention that actors, issues, and games affect the prospects for cooperation on global AI standards. This cooperation matters because shared standards are vital for achieving truly global frameworks for the governance of AI. Such global frameworks, in turn, lower transaction costs and reduce the probability that the world will witness the emergence of AI systems that threaten human rights and fundamental freedoms.