Abstract
Social media is increasingly being used as a source of information during crises, such as natural disasters and civil unrest. However, the quality and truthfulness of user-generated information on social media have been a cause of concern. Many users find it difficult to distinguish between true and false information on social media. Drawing on the elaboration likelihood model and the motivation, opportunity, and ability framework, this study proposes and empirically tests a model that identifies the information processing routes through which users develop trust, as well as the factors that influence the use of these routes. The findings from a survey of Twitter users seeking information about the Fukushima Daiichi nuclear crisis indicate that individuals evaluate information quality more when the crisis information has strong personal relevance or when they have low anxiety about the crisis. By contrast, they rely on majority influence more when the crisis information has less personal relevance or when they have high anxiety about the crisis. Prior knowledge does not have significant moderating effects on the use of information quality and majority influence in forming trust. This study extends the theorization of trust in user-generated information by focusing on the process through which users form trust. The findings also highlight the need to alleviate anxiety and to manage non-victims in order to control the spread of false information on social media during crises.