Abstract

This chapter presents an AI-based approach to the systematic collection of user experience data for further analysis. This is an important task because user feedback is essential in many use cases, such as serious games, tourist and museum applications, food recognition applications, and other software based on Augmented Reality (AR). In AR game-based learning environments, user feedback can be provided through Multimodal Learning Analytics (MMLA), a field that has emerged in recent years and exploits the fusion of sensor data with data mining techniques. A wide range of sensors have been used in MMLA experiments, from those collecting students’ motoric (relating to muscular movement) and physiological (heart, brain, skin, etc.) behaviour to those capturing the social (proximity), situational, and environmental (location, noise) contexts in which learners are placed. Recent research achievements in this area have resulted in several techniques for gathering user experience data, including eye-movement tracking, mood tracking, and facial expression recognition. Monitoring a user’s activity during the use of AR-based software yields temporal multimodal data that requires rectification, fusion, and analysis. These procedures can be based on artificial intelligence, fuzzy logic, algebraic systems of aggregates, and other approaches. This chapter covers theoretical and practical aspects of handling AR user experience data, in particular MMLA data. It gives an overview of sensors, tools, and techniques for MMLA data gathering and presents several approaches and methods for user experience data processing and analysis.
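To make the rectification and fusion step concrete, the following is a minimal sketch, assuming two hypothetical timestamped streams (eye-tracking gaze samples at roughly 60 Hz and heart-rate readings at 1 Hz); the data, column names, and the one-second matching tolerance are illustrative assumptions, not details taken from the chapter.

    # Minimal sketch: rectifying and fusing two hypothetical multimodal
    # streams on a shared timeline. All data and column names are
    # illustrative assumptions, not the chapter's own method.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Eye-tracking stream: ~60 Hz gaze coordinates with timestamps.
    gaze = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 10:00", periods=300, freq="16ms"),
        "gaze_x": rng.uniform(0, 1920, 300),
        "gaze_y": rng.uniform(0, 1080, 300),
    })

    # Physiological stream: ~1 Hz heart-rate readings.
    hr = pd.DataFrame({
        "timestamp": pd.date_range("2024-01-01 10:00", periods=5, freq="1s"),
        "heart_rate": rng.normal(75, 5, 5),
    })

    # Rectification: sort each stream by time and drop duplicate timestamps.
    gaze = gaze.sort_values("timestamp").drop_duplicates("timestamp")
    hr = hr.sort_values("timestamp").drop_duplicates("timestamp")

    # Fusion: annotate each gaze sample with the most recent heart-rate
    # reading, provided it falls within a one-second tolerance window.
    fused = pd.merge_asof(
        gaze, hr, on="timestamp",
        direction="backward", tolerance=pd.Timedelta("1s"),
    )

    print(fused.head())

Nearest-timestamp alignment (here via pandas’ merge_asof) is only one possible fusion strategy; resampling all streams to a common fixed rate is a frequent alternative when downstream analysis expects uniformly sampled input.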
