Abstract
Augmented Reality (AR) is transforming the shopping experience by allowing consumers to interact with virtual products in real time. Intent prediction – the task of inferring a consumer's intention from their behavioral patterns and actions – is crucial for personalizing AR shopping environments. This paper explores how multimodal interactions, including voice commands, gesture recognition, and eye tracking, can be integrated into AR shopping experiences to predict user intent more effectively. We review current advancements in multimodal interaction systems, discuss the importance of intent prediction in AR, and assess the impact of combining multiple input modalities on prediction accuracy. Our research identifies challenges and future directions for intent prediction in AR shopping, aiming to improve user engagement, personalization, and the overall shopping experience.