In the artificial intelligence (AI) ecosystem, generative models enable the creation of hyper-realistic manipulations whose audiovisual precision makes them extremely plausible. The realism of their components makes these deepfakes very difficult to detect, heightening concerns about the distortion of reality in the information ecosystem and about how the inability to distinguish real from fake audiovisual content erodes public trust and democratic systems. This poses a major challenge for media and information literacy if it is to combat misinformation effectively. In this context, this study presents the results of a quasi-experiment conducted with 80 young people from the Community of Madrid (Spain) to assess their ability to detect deepfakes in immersive environments and to establish whether the contextual identifiers that reveal the reputation of the media source shape the credibility of the images. The results show that the images take precedence over the context identifiers, preventing the critical reading of the information that would make it possible to detect visual forgeries, an effect reinforced by their exceptional verisimilitude. It is concluded that the new post-humanist biome of virtual reality and artificial intelligence requires a reorientation of media and information literacy to raise public awareness and educate audiences so that they are less susceptible to disinformation based on deepfakes created with generative models.