Abstract

ChatGPT, which launched only a year ago, is the fastest-growing website in the world today. When generative AI software such as ChatGPT generates ideas for people, it often produces false ones, an occurrence known as ‘AI Hallucination’. Hallucinated output can range from false text that is extremely believable to complete gibberish. This source of misinformation has significant implications for the travel and tourism industry. Using survey responses from 900 consumers, this empirical study contributes to theorizing and examining how consumers’ awareness of the potential for AI Hallucination combines with existing constructs from the Technology Acceptance Model (TAM) and the Theory of Planned Behaviour (TPB) in the decision to use generative AI platforms such as ChatGPT for tourism planning. The study also examines whether consumers can actually discern AI Hallucination and why they choose AI technologies over other tourism information sources, such as aggregated peer-review websites like TripAdvisor, government tourism websites, or social media influencers. The results indicate that many consumers chose error-filled AI tourism itineraries over other options because they trust the AI to be more impartial and customized than the alternative sources.
