Abstract

We at Carnegie Mellon University have pioneered context-aware mobile computing and built its first prototypes, including context-aware mobile phones and a context-aware personal communicator. These prototypes use machine learning and cognitive modeling techniques to derive user state and intent from the device's sensors. Context-aware computing describes the situation in which a mobile computer is aware of its user's state and surroundings and modifies its behavior based on this information. We have demonstrated the power of our method to automatically derive a meaningful user context model and have performed experimental measurements and evaluation. We have employed unsupervised machine learning techniques to combine real-time data from multiple sensors into a model of behavior that is individualized to the user. We observe that context does not require a descriptive label to be used for adaptivity and contextually sensitive response, which makes a completely unsupervised machine learning approach feasible. By unsupervised learning we mean the identification of the user's context without requiring manual annotation of current user states. We use unsupervised machine learning techniques to independently cluster sensor quantities and associate user interactions with these clusters. This discretization enables learning from observations of the user: each time a user interaction is observed, it is interpreted as a labeled example that can be used to construct a statistical model of context-dependent preferences. Example context parameters include location, nearby people and devices, calendar entries and other cyber-sensor information, movement patterns and characteristics, and user preferences, interests, and behavior patterns. By mapping observable parameters into cognitive states, the computing system can estimate the form of interaction that minimizes user distraction and the risk of cognitive overload. The capabilities proposed herein significantly extend the state of the art, sometimes radically and other times more incrementally. Our approach produces enriched observations by combining machine learning, instrumentation in software applications, sensors describing the user state, and task context information. Such diverse sensor fusion (symbolic and signal sensors) for inferring context and state goes well beyond the situation sensing currently practiced, even in experimental settings.
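
The following is a minimal sketch of the pipeline the abstract describes: cluster unlabeled multi-sensor readings, then treat each observed user interaction as a labeled example for its cluster to build a count-based model of context-dependent preferences. The sensor values, cluster count, interaction names, and the choice of k-means and a frequency-count preference model are illustrative assumptions, not the specific techniques used in the prototypes.

```python
# Sketch of unsupervised context clustering plus interaction-driven preference learning.
# All data and parameter choices below are hypothetical.

import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stream of sensor vectors (e.g., accelerometer magnitude, ambient light, noise level).
sensor_stream = rng.normal(size=(500, 3))

# Step 1: discretize the sensor space into unlabeled context clusters.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(sensor_stream)

# Step 2: observed user interactions (hypothetical), each paired with the sensor reading at that moment.
interactions = [
    ("silence_ringer", sensor_stream[10]),
    ("answer_call", sensor_stream[120]),
    ("silence_ringer", sensor_stream[130]),
    ("read_message", sensor_stream[400]),
]

# Step 3: count interactions per cluster to estimate context-dependent preferences.
preference_counts = defaultdict(Counter)
for action, reading in interactions:
    cluster = int(kmeans.predict(reading.reshape(1, -1))[0])
    preference_counts[cluster][action] += 1

def preferred_action(reading):
    """Return the most frequent past action for the cluster of a new reading, if any."""
    cluster = int(kmeans.predict(np.asarray(reading).reshape(1, -1))[0])
    counts = preference_counts.get(cluster)
    return counts.most_common(1)[0][0] if counts else None

# A new reading falling in the same cluster as earlier "silence_ringer" examples
# would yield that action as the learned preference.
print(preferred_action(sensor_stream[11]))
```

Because the clusters themselves never need human-readable labels, the adaptation step only relies on which cluster a new reading falls into, which is what keeps the approach fully unsupervised.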
