Abstract
Fact-checking initiatives on news and social media aim to counter misinformation, but they remain insufficient to promptly address the wide array of misleading information disseminated by both types of outlet. Rather than attempting to automatically identify or filter misleading information, this work advocates new tools that assist online readers in recognizing misinformation within the massive volume of content pushed to them every day across multiple platforms. We introduce DOMAIN, an article assessment resource bundle comprising a multidimensional indicator that categorizes articles into five types (hard news, soft news, opinion, satire, and conspiracy), a set of explanatory metrics that help users understand the results, a tool for verifying the reliability of the article’s source, and a text summary of the assessment. This work also studies how DOMAIN tools impact online readers, focusing on i) understanding the extent to which computer-generated assessments influence human perceptions of credibility; ii) evaluating the effectiveness of automatic article categorization in human assessment of credibility; and iii) identifying the most relevant explanatory metrics for promoting informed and critical consumption of information.