Extended Reality (XR) platforms can expose users to novel attacks, including embodied abuse and AI attacks-at-scale. The expanded attack surfaces of XR technologies may expose users of shared online platforms to psychological, social, and physiological harms via embodied interactions with potentially millions of other humans or artificial humans, causing what we define as an inter-reality attack. The past twenty years have demonstrated how social and other harms (e.g. bullying, assault, and stalking) can and do shift to digital social media and gaming platforms. As XR technologies become more mainstream, the ethical and technical consequences of these expanded attack surfaces have come under investigation. However, there is limited literature that investigates social attacks, particularly those targeting vulnerable communities, and how AI technologies may accelerate generative attacks-at-scale. This paper employs human-centred research methods and a harms-centred cybersecurity framework to co-design a testbed of socio-technical attack scenarios in XR social gaming platforms. It uses speculative fiction to extrapolate how these scenarios could reach attacks-at-scale through the application of generative AI techniques. It develops an Inter-Reality Threat Model to outline how actions in virtual environments can impact the real world. As AI capability continues to develop rapidly, this paper articulates the urgent need to consider a future where XR-AI attacks-at-scale could become commonplace.