Abstract
While fact-checking has received much attention as a tool to fight misinformation online, it has yielded limited success in combating political misinformation due to partisans’ biased information processing. The efficacy of fact-checking often decreases, and can even backfire, when fact-checking messages contradict audiences’ political stances. To explore ways to minimize such politically biased processing of fact-checking messages, an online experiment (N = 645) examined how different source labels attached to fact-checking messages (human experts vs. AI vs. crowdsourcing vs. a human experts–AI hybrid) influence partisans’ processing of those messages. Results showed that the AI and crowdsourcing source labels significantly reduced motivated reasoning in evaluations of message credibility, whereas partisan bias remained evident under the human experts and human experts–AI hybrid labels.