Abstract

Various populations with chronic conditions are at risk for decreased cognitive performance, making assessment of their cognition important. Formal mobile cognitive assessments measure cognitive performance with greater ecological validity than traditional laboratory-based testing but add to participant task demands. Given that responding to a survey is considered a cognitively demanding task itself, information that is passively collected as a by-product of ecological momentary assessment (EMA) may be a means through which people's cognitive performance in their natural environment can be estimated when formal ambulatory cognitive assessment is not feasible. We specifically examined whether the item response times (RTs) to EMA questions (eg, mood) can serve as approximations of cognitive processing speed.

This study aims to investigate whether the RTs from noncognitive EMA surveys can serve as approximate indicators of between-person (BP) differences and momentary within-person (WP) variability in cognitive processing speed.

Data from a 2-week EMA study investigating the relationships among glucose, emotion, and functioning in adults with type 1 diabetes were analyzed. Validated mobile cognitive tests assessing processing speed (Symbol Search task) and sustained attention (Go-No Go task) were administered together with noncognitive EMA surveys 5 to 6 times per day via smartphones. Multilevel modeling was used to examine the reliability of EMA RTs, their convergent validity with the Symbol Search task, and their divergent validity with the Go-No Go task. Other tests of the validity of EMA RTs included the examination of their associations with age, depression, fatigue, and the time of day.

Overall, in BP analyses, evidence was found supporting the reliability and convergent validity of EMA question RTs from even a single repeatedly administered EMA item as a measure of average processing speed. BP correlations between the Symbol Search task and EMA RTs ranged from 0.43 to 0.58 (P<.001). EMA RTs had significant BP associations with age (P<.001), as expected, but not with depression (P=.20) or average fatigue (P=.18). In WP analyses, the RTs to 16 slider items and all 22 EMA items (including the 16 slider items) had acceptable (>0.70) WP reliability. After correcting for unreliability in multilevel models, EMA RTs from most combinations of items showed moderate WP correlations with the Symbol Search task (ranging from 0.29 to 0.58; P<.001) and demonstrated theoretically expected relationships with momentary fatigue and the time of day. The associations between EMA RTs and the Symbol Search task were greater than those between EMA RTs and the Go-No Go task at both the BP and WP levels, providing evidence of divergent validity.

Assessing the RTs to EMA items (eg, mood) may be a method of approximating people's average levels of and momentary fluctuations in processing speed without adding tasks beyond the survey questions.
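The BP/WP decomposition underlying these analyses can be illustrated with a minimal sketch. The study itself used multilevel models with corrections for unreliability; the simpler person-mean-centering approach below, run on simulated data (all variable names and numbers are hypothetical, not the study's), shows the basic idea: person means carry the between-person association, and deviations from each person's mean carry the within-person association.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_obs = 60, 25  # hypothetical: 60 participants, 25 EMA prompts each

# Simulate a shared person-level (trait) speed factor and a shared
# momentary (state) factor, so EMA RTs and Symbol Search RTs correlate
# at both the between-person and within-person levels.
trait = rng.normal(0, 1, n_persons)
state = rng.normal(0, 1, (n_persons, n_obs))
ema_rt = trait[:, None] + 0.6 * state + rng.normal(0, 1, (n_persons, n_obs))
symbol_rt = trait[:, None] + 0.6 * state + rng.normal(0, 1, (n_persons, n_obs))

# Between-person (BP) correlation: correlate each person's mean RT.
ema_means = ema_rt.mean(axis=1)
sym_means = symbol_rt.mean(axis=1)
bp_r = np.corrcoef(ema_means, sym_means)[0, 1]

# Within-person (WP) correlation: person-mean-center each series,
# then pool the momentary deviations across all prompts.
ema_dev = (ema_rt - ema_means[:, None]).ravel()
sym_dev = (symbol_rt - sym_means[:, None]).ravel()
wp_r = np.corrcoef(ema_dev, sym_dev)[0, 1]

print(f"BP r = {bp_r:.2f}, WP r = {wp_r:.2f}")
```

Because measurement noise attenuates the pooled within-person correlation, the multilevel models reported in the abstract additionally correct for unreliability, which this sketch does not.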
