Abstract

We must reduce harms from current AI systems while also looking ahead to harms that may occur soon. Experts worry that runaway AI could cause extreme harm in the next five to twenty years. The risk is that we develop superhuman AI systems that surpass humans in domains like persuasion, strategy, hacking, and research and development; that we design these systems to pursue goals autonomously; that we accidentally give them unintended goals; and that humans lose control of these superhuman systems. Without regulation, the actions of a small number of elite AI developers could pose massive risks to the rest of society. The risk is not specific to any particular deployment context, but is inherent to the technology itself. So, in addition to regulating specific AI products, we should also regulate the development of frontier AI systems. We should develop safety standards and empower a regulatory authority to enforce them. These regulations would apply only to a small number of frontier AI developers. The risk from runaway AI could emerge very suddenly, especially if advanced AI itself has accelerated the pace of AI progress. If we wait to see the problem before responding, the regulations may come into force too late. So we should regulate proactively, requiring a government license for frontier AI developers.

