Abstract
The evolution of artificial intelligence (AI) facilitates the creation of multimodal information of mixed quality, intensifying the challenges individuals face when assessing information credibility. Through in-depth interviews with users of generative AI platforms, this study investigates the underlying motivations and multidimensional approaches people use to assess the credibility of AI-generated information. Four major motivations driving users to authenticate information are identified: expectancy violation, task features, personal involvement, and pre-existing attitudes. Users evaluate the credibility of AI-generated information through both internal approaches (e.g. relying on AI affordances, content integrity, and subjective expertise) and external approaches (e.g. iterative interaction, cross-validation, and practical testing). Theoretical and practical implications are discussed in the context of AI-generated content assessment.