Abstract

Artificial Intelligence (AI) and Extended Reality (XR) technologies present regulators with powerful tools to manipulate human behaviour and, more perniciously, human desire and indeed human being. This is because AI applications can interfere with our internal decision-making processes, and XR applications can affect our sensations by creating and mediating our experiences of the external world. When these technological affordances are read through contemporary cognitive science research grounding the notion that perceptions are processes of prediction error minimisation, such interferences can amount to designing and engineering the regulatee herself. Effectively, regulation no longer needs to be signalled in a normative regulatory environment, nor must it be hardcoded into the architecture or technologically managed. Instead, AI and XR make it possible to design and create regulatees who embody the desired regulatory outcome. Paradoxically, however, such regulatory incorporation looks very much like the exercise of an agent's own agency, and therefore is not recognised as a problem by contemporary legal principles and processes, which seek to push back against obviously external influences or pressures. As a result, it is immensely difficult to articulate the legal or regulatory challenges posed by these developments, or to identify the resulting harms, through these doctrinal lenses.
