Abstract

Mobile devices have become an increasingly ubiquitous part of our everyday life. We use mobile services to perform a broad range of tasks (e.g., booking travel or conducting remote office work), leading to often lengthy interactions with several distinct apps and services. Existing mobile systems handle mostly simple user needs, where a single app is taken as the unit of interaction. To understand users’ expectations and to provide context-aware services, it is important to model users’ interactions with the underlying task in mind. To provide a comprehensive picture of common mobile tasks, we first conduct a small-scale user study to understand annotated mobile tasks in depth, and demonstrate that, using a set of features (temporal, similarity, and log-sequence), we can effectively identify whether a pair of app usages belongs to the same task. Second, the best-performing task detection model is applied to a large-scale data set of commercial mobile app usage logs to infer characteristics of complex (multi-app) mobile tasks in the wild. By applying an unsupervised learning framework, we discover common mobile task types that span multiple apps based on various extracted characteristics. We observe that users generally perform 17 common tasks with 47 sub-tasks, ranging from “social media browsing” to “dining out” and “family entertainments”. Finally, we demonstrate that we can predict the next complex mobile task that users are likely to perform by leveraging features from the historically inferred mobile tasks and user contexts. Our work facilitates an in-depth understanding of mobile tasks at scale, enabling applications for promoting task-aware services.
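To make the pairwise task-detection step concrete, the following is a minimal, hypothetical sketch of classifying whether two app usage records belong to the same task from a temporal feature (time gap) and a similarity feature (app-name similarity). The `AppUsage` record, the thresholds, and the simple rule standing in for the paper's learned classifier are all illustrative assumptions, not the authors' actual model or feature definitions.

```python
from dataclasses import dataclass
from datetime import datetime
from difflib import SequenceMatcher


@dataclass
class AppUsage:
    app: str          # app identifier (hypothetical field)
    start: datetime   # when the usage session began


def same_task_features(a: AppUsage, b: AppUsage) -> dict:
    """Toy feature extractor: temporal gap and app-name similarity.

    The paper's feature set (temporal, similarity, log-sequence) is
    richer; these two features only illustrate the idea.
    """
    gap_seconds = abs((b.start - a.start).total_seconds())
    name_similarity = SequenceMatcher(None, a.app, b.app).ratio()
    return {"gap_seconds": gap_seconds, "name_similarity": name_similarity}


def same_task(a: AppUsage, b: AppUsage,
              max_gap_s: float = 300.0, min_sim: float = 0.3) -> bool:
    """Placeholder threshold rule standing in for a trained classifier."""
    f = same_task_features(a, b)
    return f["gap_seconds"] <= max_gap_s or f["name_similarity"] >= min_sim
```

For example, a maps session followed two minutes later by a ride-hailing session would be grouped into one task by the temporal rule, while a maps session and an e-book session six hours apart would not.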
