Abstract

Context modeling and recognition are complex tasks that allow mobile and ubiquitous computing applications to adapt to the user's situation. The real advantage of context awareness in mobile environments lies mainly in how promptly the system and its applications react to context changes. Current solutions mainly focus on limited context information, generally processed on centralized architectures, which potentially exposes users' personal data to privacy leakage and lacks personalization features. For these reasons, on-device context modeling and recognition represent the current research trend in this area. Among the different types of information characterizing the user's context in mobile environments, social interactions and visited locations contribute substantially to the characterization of daily-life scenarios. In this paper we propose a novel, unsupervised, and lightweight approach to model the user's social context and her locations based on ego networks, directly on the user's mobile device. Relying on this model, the system can extract high-level, semantically rich context features from smartphone-embedded sensor data. Specifically, for the social context it exploits data related to both physical and cyber social interactions among users and their devices. As far as the location context is concerned, we assume that modeling the user's degree of familiarity with a specific location is more relevant than the raw location data, both in terms of GPS coordinates and proximity devices. We demonstrate the effectiveness of the proposed approach with 3 different sets of experiments, using 5 real-world datasets collected from a total of 956 personal mobile devices. Specifically, we assess the structure of the social and location ego networks, provide a semantic evaluation of the proposed models, and evaluate their complexity in terms of mobile computing performance. Finally, we demonstrate the relevance of the extracted features by showing the performance of 3 different machine learning algorithms in recognizing daily-life situations, obtaining improvements of 3% in AUROC, 9% in Precision, and 5% in Recall with respect to using only features related to the physical context.
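Although the abstract does not detail the construction procedure, ego-network models of this kind typically rank alters (contacted people or, here, familiar places and proximate devices) by interaction frequency and partition them into concentric rings of increasing size and decreasing intimacy. The minimal Python sketch below illustrates that general idea only; the function name, the cumulative ring bounds, and the toy interaction log are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import Counter
from typing import Dict, List, Tuple

def build_ego_network(interactions: List[str],
                      ring_bounds: Tuple[int, ...] = (5, 15, 50)) -> Dict[int, List[str]]:
    """Partition alters into concentric rings by descending interaction
    frequency. ring_bounds are cumulative upper bounds (top 5, top 15,
    top 50, then everyone else), in the style of Dunbar-like ego networks.
    This is a hypothetical sketch, not the paper's implementation."""
    freq = Counter(interactions)                        # interactions per alter
    ranked = [alter for alter, _ in freq.most_common()]
    rings: Dict[int, List[str]] = {}
    start = 0
    for i, bound in enumerate(ring_bounds):
        rings[i] = ranked[start:bound]                  # alters ranked start..bound-1
        start = bound
    rings[len(ring_bounds)] = ranked[start:]            # outermost ring: all remaining alters
    return rings

# Toy log mixing cyber alters (calls/messages) and a physical one seen via
# Bluetooth proximity; all identifiers are made up for illustration.
log = ["alice", "bob", "alice", "cafe_via_bt", "alice", "bob", "dana"]
print(build_ego_network(log, ring_bounds=(1, 3)))
# -> {0: ['alice'], 1: ['bob', 'cafe_via_bt'], 2: ['dana']}
```

Per-ring statistics over such a structure (ring sizes, membership churn, frequency of the innermost alters) are the kind of high-level, semantically rich features that could feed the downstream recognition of daily-life situations described above.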
