Abstract

Artificial intelligence (AI) regulation is in vogue, with proposals around the world to regulate AI as an activity separate from other types of data processing. This article argues that this approach is problematic, given the difficulties in defining AI. It notes that the more laissez-faire approach of the United Kingdom (UK) risks subsequent hasty legislation being introduced when innovative applications of AI cause moral panic. The article proposes a way forward, utilizing the UK’s existing data protection framework to accelerate the shift to meaningful regulation. This approach leverages the substantial overlap between data protection regulation and the risks of AI and enables greater regulatory certainty and effectiveness by expanding the scope and powers of an existing regulator, the Information Commissioner’s Office, rather than creating something from scratch. Doing so mitigates the challenges of defining AI by focusing instead on the risks presented to individuals, organizations and society by all automated decision-making. Finally, the article notes that the speed of change in this area will require ongoing agility from all the bodies involved in digital regulation in the UK and outlines the potential for the Digital Regulation Cooperation Forum to support its member regulators.

Keywords: artificial intelligence; data protection; innovation; technology.
