Abstract

With recent advances in machine learning and big data, it is now possible to create synthetic images that look real. Face generation is of particular interest, as faces can be used for many purposes. However, improper use of such content can lead to the dissemination of false information, such as fake news, and thus pose a threat to society. This work studies how people judge the truthfulness of real and computer-generated faces, using eye tracking and self-reports that include free-form textual explanations. We evaluated three different datasets. Our experimental results show that people are relatively accurate at judging the truthfulness of real faces and of faces generated by earlier machine learning algorithms, exhibiting different gaze behaviors in the viewing and rating phases, but they perform less accurately when judging synthetic face images generated by newer algorithms. Our findings provide important insights for society and policymakers.
