Abstract

Deepfake videos are becoming more pervasive. In this preregistered online experiment, participants (N = 454, Mage = 37.19, SDage = 13.25, 57.5% male) categorized a series of 20 videos as either real or deepfake. All participants saw 10 real and 10 deepfake videos. Participants were randomly assigned either to receive a list of strategies for detecting deepfakes based on visual cues (e.g., looking for common artifacts such as skin smoothness) or to a control group. Participants were also asked how confident they were that they had categorized each video correctly (per-video confidence) and to estimate how many of the 20 videos they had categorized correctly (overall confidence). The sample performed above chance on the detection task, correctly categorizing 60.70% of videos on average (SD = 13.00). The detection-strategies intervention did not affect detection accuracy or detection confidence: the intervention and control groups performed similarly on the detection task and reported similar levels of confidence. Contrary to previous research, participants showed no bias toward categorizing videos as real. Participants overestimated their ability to detect deepfakes at the individual-video level but tended to underestimate their ability on the overall confidence question.
