Abstract

Artificial Intelligence (AI) poses interwoven challenges. Defined as technology that uses advanced computation to perform at human cognitive capacity in some task area, AI must be regulated in the context of its broader toolbox - algorithms, data, and platforms - and its regulation must be sector-specific. Establishing national and community priorities for how to reap AI's benefits, while managing its social and economic risks, is an evolving debate. Digital platform firms are a fundamental driver of AI tools: they dominate the playing field and often pursue priorities outside the frames of the public sector and of civil society. While its governance is critical to national success, AI pries open a Pandora's box of questions that sweep across the economy and society, engaging diverse communities. Rather than seeking a single, global ethical framework, one must consider how to pursue interoperability among nations with quite different political economies.
