ABSTRACT Among its many applications, artificial intelligence (AI) can be used to manipulate audiovisual content with malicious intent. Such manipulations – commonly known as deepfakes – present a significant obstacle to maintaining truth in sectors that serve the public interest, and proactive fact-checking is necessary to respond to the threat they pose. The current study conducted an online experiment focusing on the topic of cancer prevention. In this experiment, we further examined the influence of individual differences, such as issue relevance and motivations, on health information consumption. Compared to textual disinformation, health-related audiovisual deepfakes were found to have a significant effect on increasing misperceptions but no such effect on fact-checking intentions. We also found that exposure to such deepfakes discouraged individuals with high issue relevance from engaging in fact-checking. Deepfakes were also shown to have a particularly potent effect on increasing misperceptions among individuals with high (illusory) accuracy motivations. These findings underscore the need for increased awareness of the detrimental effects of health deepfakes in particular, and the urgent importance of further elucidating individual variations to ultimately develop more comprehensive approaches to combating deepfakes.