Abstract
Extended Reality (XR) platforms can expose users to novel attacks, including embodied abuse and AI-driven attacks-at-scale. The expanded attack surfaces of XR technologies may expose users of shared online platforms to psychological, social, and physiological harms via embodied interactions with potentially millions of other humans or artificial humans, causing what we define as an inter-reality attack. The past twenty years have demonstrated how social and other harms (e.g. bullying, assault, and stalking) can and do shift to digital social media and gaming platforms. As XR technologies become more mainstream, researchers have begun to investigate the ethical and technical consequences of these expanded input surfaces. However, there is limited literature that investigates social attacks, particularly those targeting vulnerable communities, or how AI technologies may accelerate generative attacks-at-scale. This paper employs human-centred research methods and a harms-centred cybersecurity framework to co-design a testbed of socio-technical attack scenarios in XR social gaming platforms. It uses speculative fiction to extrapolate how these scenarios could reach attacks-at-scale through the application of generative AI techniques. It develops an Inter-Reality Threat Model to outline how actions in virtual environments can impact the real world. As AI capability continues to develop rapidly, this paper articulates the urgent need to consider a future where XR-AI attacks-at-scale could become commonplace.