With recent advances in machine learning and big data, it is now possible to create synthetic images that look real. Face generation is of particular interest, as faces can serve many purposes. However, misuse of such content can spread false information, such as fake news, and thus poses a threat to society. This work studies whether people can correctly judge the truthfulness of faces, using eye tracking and self-reports, including free-form textual explanations, collected while participants view real and computer-generated faces. We evaluated three different datasets. Our experimental results show that people are relatively better at identifying the truthfulness of real faces and of faces generated by earlier machine learning algorithms, exhibiting different gazing behaviors in the viewing and rating phases, but they are less accurate when judging the truthfulness of synthetic face images generated by newer algorithms. Our findings provide important insights for society and policymakers.