Abstract

Mobile technologies present new opportunities to develop personalized health interventions that respond to changes in behaviors and states. Such interventions are powered by computational models that need information on health behaviors in the real world. Ideally, sensors embedded in mobile devices could measure these behaviors, but self-report is still needed to capture subjective experiences that sensors cannot measure directly (e.g., fatigue and pain). Ecological momentary assessment (EMA) is one such self-report approach that enables in-situ data collection using smartphones: participants are prompted several times a day on their phones to answer sets of multiple-choice questions. While the repeated nature of EMA reduces recall bias, it may induce user burden. Thus, there is a need to explore methods complementary to EMA that are less burdensome yet provide comprehensive information on an individual's experiences. In this work, I present microinteraction ecological momentary assessment (μEMA). The μEMA method restricts EMA interruptions to single, cognitively simple questions that can be answered on a smartwatch with a single tap - a quick, glanceable microinteraction. Because all interactions are limited to this microinteraction, μEMA permits a substantially higher interruption frequency than EMA without as much burden. This work is motivated by the overarching question: can we rethink how we capture self-report at a high frequency without burdening users, yet still gain a comprehensive understanding of an individual's behavior?

To that end, my work evaluates the sustainability, compliance biases, and data validity of μEMA using data from a longitudinal study. In this study, participants answered μEMA questions (on a smartwatch) and EMA questions (on a smartphone) on different days for a period of 12 months. In addition to the self-report surveys, sensor data such as raw acceleration, location, and device use were collected from both the smartwatches and smartphones. At the end of the study, participants self-reported their perceived burden of answering questions using μEMA and participated in a semi-structured interview describing their experience in the study.

Properties of μEMA were explored using the data from this longitudinal study. First, when compared with EMA, μEMA yielded significantly higher response rates and lower perceived burden at the end of 12 months, suggesting μEMA's sustainability. Qualitative data revealed that participants experienced both interruption burden and response burden, yet underlying motivators enabled them to keep answering questions on μEMA and EMA. Second, we identified that contextual factors such as temporal variables (e.g., time of day, day of the week, and days in study), device usage variables (e.g., media usage, battery state, and charging state), and activity/mobility variables (e.g., wrist motion and location) had a statistically significant association with momentary μEMA non-response, highlighting μEMA's compliance biases. Third, qualitative evaluation of exit interviews revealed the mechanisms participants use to answer short-yet-frequent μEMA questions in the moment, providing information on μEMA's data validity and threats to its ecological validity. Finally, design opportunities were identified for observation and intervention studies that might use μEMA for intensive longitudinal data collection on smartwatches.
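To give a rough sense of the kind of non-response analysis summarized above, the sketch below fits a simple pooled logistic regression of momentary non-response on contextual factors. It is illustrative only: the variable names, the synthetic data, and the model family are assumptions, and the dissertation's actual analysis (including how repeated prompts per participant are handled) is not reproduced here.

```python
# Illustrative sketch only: a pooled logistic model of momentary non-response
# against hypothetical contextual factors, using synthetic data. It does not
# reproduce the study's variables or its repeated-measures modeling.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000  # hypothetical number of μEMA prompts

# Hypothetical contextual features observed at each prompt
hour_of_day = rng.integers(8, 22, n)      # temporal
is_weekend = rng.integers(0, 2, n)        # temporal
battery_low = rng.integers(0, 2, n)       # device usage
wrist_motion = rng.gamma(2.0, 1.0, n)     # activity/mobility proxy

# Synthetic outcome: 1 = prompt not answered (non-response)
linpred = -1.5 + 0.4 * battery_low + 0.2 * is_weekend + 0.1 * wrist_motion
nonresponse = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

# Fit the logistic regression and report coefficient estimates
X = sm.add_constant(
    np.column_stack([hour_of_day, is_weekend, battery_low, wrist_motion])
)
result = sm.Logit(nonresponse, X).fit(disp=False)
print(result.summary(
    xname=["const", "hour_of_day", "is_weekend", "battery_low", "wrist_motion"]
))
```

In practice, an analysis of this kind would also need to account for the clustering of prompts within participants (e.g., with mixed-effects models), which this sketch omits for brevity.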
