Abstract

This paper presents UXmood, a tool that provides quantitative and qualitative information to assist researchers and practitioners in evaluating user experience and usability. The tool combines data from video, audio, interaction logs and eye trackers, presenting them in a configurable web-based dashboard. UXmood works analogously to a media player: evaluators can review the entire user interaction process, fast-forwarding through irrelevant sections and rewinding to replay specific interactions when necessary. In addition, sentiment analysis techniques are applied to video, audio and transcribed text to obtain insights into the participants' user experience. The main motivations for developing UXmood are to support joint analysis of usability and user experience, to use sentiment analysis to support qualitative analysis, to synchronize different types of data in the same dashboard, and to allow the analysis of user interactions from any device with a web browser. We conducted a user study to assess how efficiently the visualizations communicate data, which provided insights on how to improve the dashboard.
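
To illustrate the media-player-like synchronization described above, the sketch below aligns timestamped eye-tracking samples and interaction-log events with a video playback position. It is a minimal sketch only: the data structures and names (Sample, latest_before) are assumptions made for this example and do not reflect UXmood's actual implementation.

    from bisect import bisect_right
    from dataclasses import dataclass

    @dataclass
    class Sample:
        t: float       # seconds since the start of the recorded session
        payload: dict  # e.g. {"x": 512, "y": 300} for gaze, {"event": "click"} for logs

    def latest_before(stream, playback_t):
        """Return the most recent sample at or before the current playback time."""
        i = bisect_right([s.t for s in stream], playback_t)
        return stream[i - 1] if i else None

    # While the evaluator scrubs the video, look up the gaze point and the last
    # logged interaction for the current playback position.
    gaze = [Sample(0.0, {"x": 100, "y": 80}), Sample(0.5, {"x": 512, "y": 300})]
    logs = [Sample(0.4, {"event": "click", "target": "#submit"})]

    playback_t = 0.6  # seconds into the video
    print(latest_before(gaze, playback_t).payload)  # {'x': 512, 'y': 300}
    print(latest_before(logs, playback_t).payload)  # {'event': 'click', 'target': '#submit'}

In a dashboard of this kind, the same lookup would drive each linked visualization (gaze overlay, event timeline, sentiment track) as the evaluator scrubs, fast-forwards or rewinds the recording.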


Summary

Introduction

User acceptance of certain products, services or techniques is vital for their adoption [1], and one way to understand users' opinions is to conduct tests that measure acceptance with quantitative or qualitative metrics. Current technologies allow UX evaluators to collect a large set of data during system tests, such as videos, audio recordings and interaction logs, to find patterns and gain insights into user experience. Given these data, several automatic extraction methods can be used to generate data about users' emotions [6], classifying them as anger, sadness, happiness, fear, surprise, disgust, contempt, or neutrality [7]. This paper is organized as follows: Section 2 presents background information on some fundamental concepts of this research, namely usability, user experience, multimodal sentiment analysis, eye tracking and information visualization (InfoVis) techniques; Section 3 reviews related work, comparing UXmood with existing tools and methods; Section 4 describes the architecture of the tool and explains how the data is uploaded, processed and viewed through various InfoVis techniques and interactions (such as filters and on-demand details); Section 5 presents the methodology of a user study to assess the efficiency of UXmood in communicating information; Section 6 discusses the findings of the user study; and Section 7 concludes the paper by discussing the UXmood features and providing directions for future work.
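
The emotion labels above can be produced by classifiers applied to facial expressions, speech or transcribed text. As a minimal, hypothetical sketch of the text modality only, the example below maps a transcribed utterance to one of the eight labels using a naive keyword lexicon; the lexicon and function name are invented for illustration and are not the classifiers used by UXmood.

    # Naive lexicon-based sketch for transcribed text only; the lexicon and the
    # function name are hypothetical, not UXmood's actual classifiers.
    EMOTION_LEXICON = {
        "anger":     {"angry", "furious", "annoying"},
        "sadness":   {"sad", "disappointed", "unhappy"},
        "happiness": {"happy", "great", "love"},
        "fear":      {"afraid", "scared", "worried"},
        "surprise":  {"surprised", "unexpected", "wow"},
        "disgust":   {"disgusting", "gross", "awful"},
        "contempt":  {"ridiculous", "pathetic"},
    }

    def classify_utterance(text):
        """Return the emotion whose keywords occur most often, or 'neutral'."""
        tokens = text.lower().split()
        scores = {label: sum(tok in words for tok in tokens)
                  for label, words in EMOTION_LEXICON.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "neutral"

    print(classify_utterance("I love this menu, it looks great"))  # happiness
    print(classify_utterance("the checkout form was confusing"))   # neutral

In practice, a multimodal approach would combine such per-modality predictions (face, voice, text) over time, which is what motivates synchronizing the sentiment track with the video and interaction logs in the dashboard.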

Theoretical Foundation
Usability
User Experience
Multimodal Sentiment Analysis
Information Visualization Techniques
Related Works
Evaluation Type
UXmood
Architecture
Technologies
Functionalities
Visualization Dashboard
Media and Log Synchronization
Sentiment Classification
Usage Example
Evaluation Methodology
Results and Discussion
Final Remarks and Future Works