Abstract

The rise of social media has democratized content creation and has made it easy for anybody to share and spread information online. On the positive side, this has given rise to citizen journalism, enabling much faster dissemination of information than was possible with newspapers, radio, and TV. On the negative side, stripping traditional media of their gate-keeping role has left the public unprotected against the spread of misinformation, which can now travel at breaking-news speed over the same democratic channels. This has led to the proliferation of false information created specifically to affect individual people’s beliefs, and ultimately to influence major events such as political elections. There are strong indications that false information was weaponized at an unprecedented scale during Brexit and the 2016 U.S. presidential election. “Fake news,” which can be defined as fabricated information that mimics news media content in form but not in organizational process or intent, became Collins Dictionary’s Word of the Year for 2017. Thus, limiting the spread of “fake news” and its impact has become a major focus for computer scientists, journalists, social media companies, and regulatory authorities. The tutorial offers an overview of the broad and emerging research area of disinformation, with a focus on the latest developments and research directions.

Highlights

  • The rise of social media has democratized content creation and has made it easy for anybody to share and spread information online. This has given rise to citizen journalism, enabling much faster dissemination of information than was possible with newspapers, radio, and TV.

  • Limiting the impact of these negative developments has become a major focus for journalists, social media companies, and regulatory authorities.

  • Other recent surveys focus on stance detection (Kucuk and Can, 2020), on propaganda (Da San Martino et al., 2020b), on social bots (Ferrara et al., 2016), on false information (Zannettou et al., 2019b), and on bias on the Web (Baeza-Yates, 2018).


Summary

Outline of the Tutorial

The rise of social media has democratized content creation and has made it easy for anybody to share and spread information online. The tutorial offers an overview of the emerging and inter-connected research areas of fact-checking, misinformation, disinformation, “fake news”, propaganda, and media bias detection, with a focus on text and on computational approaches. It further explores the general fact-checking pipeline and important elements thereof, such as check-worthiness estimation, spotting previously fact-checked claims, stance detection, source reliability estimation, and detecting malicious users in social media.

2.6 Stance Detection
  (i) Task definitions and examples
  (ii) Datasets
  (iii) Stance detection as a key element of fact-checking
  (iv) Information sources: text, social context, user profile
  (v) Tasks and approaches: (a) neural methods for stance detection; (b) cross-language stance detection
  (vi) Shared tasks at SemEval and the Fake News Challenge

2.10 Recent Developments and Future Challenges
  (i) Deep fakes: images, voice, video, text
  (ii) Text generation: GPT-2, GPT-3, GROVER
  (iii) Defending against neural fake news
  (iv) Fighting the COVID-19 Infodemic
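To make the stance detection task concrete, the sketch below labels a (claim, text) pair with one of the four stance labels used by the Fake News Challenge: agree, disagree, discuss, or unrelated. This is only a naive lexical baseline for illustration; the cue-word list and overlap thresholds are invented for this example, and real systems from the tutorial's scope would use neural methods trained on the datasets mentioned above.

```python
# Toy stance detection baseline (illustrative only, not a real system).
# Labels follow the Fake News Challenge scheme:
#   agree / disagree / discuss / unrelated

# Hypothetical negation cues used to flag a disagreeing stance.
NEGATION_CUES = {"not", "no", "never", "false", "fake", "hoax", "deny", "denies"}

def tokens(text):
    """Lowercase, lightly de-punctuated bag of words."""
    return {t.strip(".,!?\"'").lower() for t in text.split()}

def stance(claim, snippet):
    c, s = tokens(claim), tokens(snippet)
    overlap = len(c & s) / max(len(c), 1)  # fraction of claim words covered
    if overlap < 0.3:
        return "unrelated"   # snippet barely mentions the claim
    if s & NEGATION_CUES:
        return "disagree"    # topical overlap plus a negation cue
    if overlap > 0.7:
        return "agree"       # snippet restates most of the claim
    return "discuss"         # on topic, but no clear position

print(stance("Vaccines cause autism",
             "Scientists deny that vaccines cause autism"))  # disagree
```

A real pipeline would replace this heuristic with a trained classifier, but the interface (claim and evidence text in, stance label out) is the key element the fact-checking pipeline builds on.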

Reading List
Preslav Nakov
Giovanni Da San Martino

