Abstract

This study examines how users process and respond to misinformation in generative artificial intelligence (GenAI) contexts. Drawing on the heuristic–systematic model and the concept of diagnosticity, we propose a cognitive model of misinformation processing in GenAI. The findings reveal that users with a high-heuristic processing mechanism, which fosters positive diagnostic perception, were more likely to proactively discern misinformation than users with low-heuristic processing and low perceived diagnosticity. When exposed to misinformation from GenAI, users' perceived diagnosticity of that misinformation can be accurately predicted by how they perform heuristic–systematic evaluations. By focusing on misinformation processing, this study offers theoretical insights and practical recommendations to help firms become more resilient in protecting users from the detrimental impacts of misinformation.
