Concern over the impact of fake news on major socio-political events is growing. Deliberate misinformation is thought to have played a role in the outcome of the UK EU referendum, in the 2016 US presidential election, and in the effectiveness of COVID-19 public health messaging. As a result, recent research has tended to focus on hyper-partisan (e.g., US politics; Democrat/Republican), person-specific (e.g., Hillary Clinton/Donald Trump) content that incorporates emotive and hyperbolic language. In this study, however, we focus on an alternative form of fake news that spans a variety of topics (e.g., Crime, Immigration, and Health), avoids these characteristics, and may therefore be more pervasive and harder to detect. In a three-part study, we examined participants' intentions to share fake news (including platform preference: Facebook, Twitter, Instagram, and WhatsApp), their ability to explicitly detect fake news, and whether individual differences on psychological measures of critical thinking ability, rational thinking, and emotional stability predict sharing behavior and detection ability. The results show that even our well-informed sample (political science students) was not immune to the effects of fake news, that items on some topics (e.g., health and crime) were more likely to be shared than others (e.g., immigration), and that sharing was more likely on specific platforms (e.g., Twitter, Facebook). In addition, individual differences in emotional stability appear to be a key factor in sharing behavior, while rational thinking aptitude was key to fake news detection. Taken together, this study provides novel data that can support targeted fake news interventions, offering topic-, sharing-behavior-, and platform-specific insights. Such interventions, and their implications for government policy, education, and social media companies, are discussed.