Abstract

Purpose

Learning how to identify and avoid inaccurate information, especially disinformation, is essential for any information consumer. Many information literacy tools specify criteria that can help users evaluate information more efficiently and effectively. However, the authors of these tools do not always agree on which criteria should be emphasized, what they mean or why they should be included in the tool. This study aims to clarify two such criteria (source credibility and soundness of content), which research in evolutionary cognitive psychology emphasizes. This paper uses them as the basis for building a question-based evaluation tool and draws implications for information literacy programs.

Design/methodology/approach

This paper draws on cross-disciplinary scholarship (in library and information science, evolutionary cognitive psychology and rhetoric studies) to explore 15 approaches to information evaluation that conceptualize source credibility and content soundness, two markers of information accuracy. This paper clarifies these two concepts, builds two sets of questions meant to elicit empirical indicators of information accuracy and deploys them against a recent piece of journalism that embeds a conspiracy theory about the origins of the COVID-19 pandemic. This paper shows how the two standards can help us determine that the article is misleading, and it draws implications for information literacy programs.

Findings

The meanings of and relationships between source credibility and content soundness often diverge across the 15 approaches to information evaluation this paper analyzed. Conceptual analysis allowed the authors to articulate source credibility in terms of authority and trustworthiness, and content soundness in terms of plausibility and evidential support. These conceptualizations allow the authors to formulate two corresponding sets of questions, the answers to which are meant to function as empirical indicators for the two standards. Deploying this instrument makes it possible to understand why a certain article discussing COVID-19 is misleading.

Originality/value

By articulating source credibility and content soundness as the two key criteria for evaluating information, together with guiding questions meant to elicit empirical indicators for them, this paper streamlines the process through which information users can judge the likelihood that a piece of information they encounter is accurate.
