Abstract

In recent years, social media disinformation has had a significant impact on real-world events. Consequently, a large number of fake news detection models have been proposed to fight disinformation. However, the theory behind these models has become increasingly sophisticated and complex. Thus, despite their high precision, most of these systems classify text without explaining why, since they rely on advanced, complex techniques that are not understandable to humans. In the particular case of disinformation, users are already susceptible to their prior beliefs (i.e., preconceived biases). Without a proper aid to understand why a given text was classified in a certain way, users' trust in these models is therefore likely to be low. We thus propose a reliability detection application for Twitter messages that not only produces a classification but also attempts to explain it by providing a set of graphical cues commonly used to differentiate between reliable and unreliable content.
