Abstract

The term deepfake was first used in a Reddit post in 2017 to refer to videos manipulated using artificial intelligence techniques, and since then it has become increasingly easy to create such fake videos. An investigation published by the cybersecurity company Deeptrace in September 2019 indicated that the number of deepfake videos had doubled over the previous nine months and that most were pornographic videos used as revenge to harm women. The report also highlighted the potential of this technology to be used in political campaigns, as in Gabon and Malaysia. The deepfake phenomenon has therefore become a concern for governments, because it poses a short-term threat not only to politics but also in the form of fraud and cyberbullying. The starting point of this research was Twitter’s announcement of a change in its protocols to fight fake news and deepfakes. We used the Social Network Analysis technique, with visualization as a key component, to analyze the conversation on Twitter about the deepfake phenomenon. NodeXL was used to identify the main actors and the network of connections between their accounts. In addition, the semantic networks of the tweets were analyzed to discover hidden patterns of meaning. The results show that half of the actors who function as bridges in the interactions that shape the network are journalists and media outlets, a sign of the concern that this sophisticated form of manipulation generates in this collective.
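The study itself relies on NodeXL. As an illustration only, the following minimal Python sketch (using networkx, with a hypothetical edge list and invented account names) shows one common way to operationalize the "bridge" role mentioned above: ranking accounts in a reply/mention network by betweenness centrality.

```python
# Illustrative sketch, not the paper's actual NodeXL workflow: identifying
# "bridge" accounts in a Twitter reply/mention network via betweenness centrality.
# The edge list and account names below are hypothetical; in the study, the nodes
# were accounts that posted, replied to, or were mentioned in tweets containing
# the term "deepfake".
import networkx as nx

# Hypothetical directed edges: (source account, target account) for each reply/mention.
edges = [
    ("@user_1", "@journalist_a"),
    ("@user_2", "@journalist_a"),
    ("@journalist_a", "@media_outlet"),
    ("@media_outlet", "@researcher_x"),
    ("@user_3", "@media_outlet"),
]

G = nx.DiGraph()
G.add_edges_from(edges)

# Betweenness centrality measures how often an account lies on shortest paths
# between other accounts; high values indicate "bridge" actors with an advantage
# in controlling the spread of messages across otherwise separate parts of the network.
betweenness = nx.betweenness_centrality(G)

# Rank accounts by their bridging role.
for account, score in sorted(betweenness.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{account}: {score:.3f}")
```

NodeXL reports the same per-vertex metric, so a comparable ranking can be read directly from its output, although the exact workflow used in the paper may differ.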

Highlights

  • The recent upsurge in artificial intelligence (AI), along with image processing and machine learning, has made deepfake production possible

  • This article may help people understand how different actors try to shape and ‘crystallize’ our understanding of this emerging issue, as well as mapping the most important actors in the debate. It has two specific objectives: 1) identify the main actors and determine which ones hold the greatest advantage in controlling the spread of messages (all actors who posted messages containing the term deepfake, or who were replied to or mentioned in those messages, were examined); and 2) analyze the semantic network arising around the search term deepfake and discover which content predominates in the messages

  • This study shows the network woven around the term deepfake after Twitter’s announcement that it was tightening its protocols to fight fake news and videos


Summary

Introduction

The recent upsurge in artificial intelligence (AI), along with image processing and machine learning, has made deepfake production possible. A well-known example is the video in which former US president Barack Obama appears to deliver remarks he never made: the former president had said nothing, it was only his image that appeared in the video. The person who made it was actor Jordan Peele, who sought to sound the alarm on how dangerously easy it has become to use new technologies to manipulate and falsify someone’s identity. Deepfakes can potentially undermine truth, confuse citizens and falsify reality. They may worsen problems related to disinformation and conspiracy theories (Hasan & Salah, 2019) and could even be weaponized to unleash national or international crises (Stover, 2018).

