Abstract

The answers each political community finds to the law-reform questions posed by artificial intelligence (AI) may differ, but a near-term threat is that AI systems capable of causing harm will not be confined to one jurisdiction; indeed, it may be impossible to link them to a specific jurisdiction at all. This is not a new problem in cybersecurity, though divergent national approaches will pose barriers to effective regulation, exacerbated by the speed, autonomy and opacity of AI systems. For that reason, some measure of collective action is needed. Lessons may be learned from efforts to regulate the global commons, as well as moves to outlaw certain products (weapons and drugs, for example) and activities (such as slavery and child sex tourism). The argument advanced here is that regulation, in the sense of public control, requires the active involvement of States. To coordinate those activities and enforce global ‘red lines’, this paper posits a hypothetical International Artificial Intelligence Agency, modelled on the agency created after the Second World War to promote peaceful uses of nuclear energy, while deterring or containing its weaponisation and other harmful effects.
