Abstract

While generative AI shares some similarities with previous technological breakthroughs, it also raises unique challenges for containing social and economic harms. State approaches to AI governance vary; some lay a foundation for transnational governance whereas others do not. We consider several technical dimensions of AI safety in both open and closed systems, as well as the ideas currently emerging to safeguard their future development. Examining initiatives aimed at the global community and at the coalition of open societies, we argue for a dual-track, interactive strategy for containing AI's potentially nightmarish unintended consequences. We conclude that AI safety is AI governance, which means that pluralist efforts to bridge the gaps between theory and practice and across the STEM–humanities divide are critical for democratic sustainability.
