Abstract

Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided between those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-community collaboration on AI policy. We propose to make use of the principle of an ‘incompletely theorized agreement’ to bridge some underlying disagreements, in the name of important cooperation on addressing AI’s urgent challenges. We propose that on certain issue areas, scholars working with near-term and long-term perspectives can converge and cooperate on selected mutually beneficial AI policy projects, while maintaining their distinct perspectives.

Highlights

  • The prevailing uncertainty around the trajectory and impact of artificial intelligence (AI) makes it clear that appropriate technology policy approaches are urgently needed

  • The responsible AI policy community currently appears fragmented between clusters of scholars focusing on ‘near-term’ AI risks and those focusing on ‘longer-term’ risks

  • This paper maps the practical space for inter-community collaboration, with a view towards the practical development of AI policy



Introduction

The prevailing uncertainty around the trajectory and impact of artificial intelligence (AI) makes it clear that appropriate technology policy approaches are urgently needed. Some forecast that continuing progress in AI capabilities will eventually make AI systems a ‘general-purpose technology’ [1], or may even enable the development of ‘high-level machine intelligence’ (HLMI) [2] or other ‘transformative’ capabilities [3, 4]. Debate on these latter scenarios is diverse. This paper suggests that the differences between the ‘near-term’ and ‘long-term’ communities may be overstated, and proposes that even if one assumes such differences, they are not practically insurmountable. Rather, it argues that the principle of an ‘incompletely theorized agreement’, originally derived from constitutional law, provides both philosophical foundations and historical precedent for a form of cooperation between divergent communities that enables progress on urgent shared issues, without compromising their respective goals. For the purposes of this paper, our focus is on this specific gap.

AI policy: a house divided?
The pitfalls of fragmented efforts in AI Policy
Recombinant DNA
Ballistic missile defense arms control
Examining potential grounds for divergence
Towards ‘incompletely theorized agreements’ for AI policy
Incompletely theorized agreements in AI policy: examples and sketches
Limitations of incompletely theorized agreements
Conclusion
Compliance with ethical standards