Abstract

In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation have been brought to the fore, leading to the construction of descriptive labels such as “post-truth” for the supposedly new emerging era. The (mis)use of technologies such as AI and VR has been argued to potentially fuel this loss of “ground truth”, for instance via the ethically relevant deepfake phenomenon and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, unethical and malicious actors could harness tools at the intersection of AI and VR (AIVR) to craft what we call immersive falsehood: fake immersive reality landscapes deliberately constructed for malicious ends. This short paper analyzes the ethically relevant background against which such malicious designs in AIVR could exacerbate the intentional proliferation of deceptions and falsities. We offer a reappraisal expounding that while immersive falsehood could manipulate and severely jeopardize the inherently affective constructions of social reality and considerably complicate falsification processes, humans may inhabit neither a post-truth nor a post-falsification age. Finally, we provide incentives for future AIVR safety work, ideally contributing to a future era of technology-augmented critical thinking.

Highlights

  • It has been stated that the deployment of AI deepfakes may foster the acquisition of false memories [8]. This could conceivably be exacerbated within future extensions of technically already feasible “VR deepfakes” [9,10,11] by the particular aptness of VR to facilitate durable memories [12]. While such issues would already play a role regarding unintentional failure modes elicited by ethically aware actors in AIVR, recent research related to the security and safety of AI [13,14,15,16] and VR [17,18,19,20] respectively emphasizes the need to consider the presence of unethical malicious actors.

  • We focus on immersive falsehood in AIVR [22], the deliberate construction of fake immersive reality landscapes for malicious ends.

  • In the light of affective realism and the perceiver-dependent nature of social reality, we deconstruct the nature of the term “ground truth”.

Summary

Motivation

In the last few years, the information ecosystem has been permeated by falsehood-related concepts such as fake news [1], deepfakes [2], fake realities [3], and digital fakery [4], as well as, more globally, fake science [5] and post-truth [6]. It has been stated that the deployment of AI deepfakes may foster the acquisition of false memories [8]. This could conceivably be exacerbated within future extensions of technically already feasible “VR deepfakes” [9,10,11] by the particular aptness of VR to facilitate durable memories [12]. While such issues would already play a role regarding unintentional failure modes elicited by ethically aware actors in AIVR, recent research related to the security and safety of AI [13,14,15,16] and VR [17,18,19,20] respectively emphasizes the need to consider the presence of unethical malicious actors.

Nested Affective VR Worlds
Future Work
Conclusions
