Background: Mobile cognitive testing is growing in popularity and offers numerous advantages over traditional cognitive testing; however, the field lacks studies that deeply examine mobile cognitive test data from general adult samples.

Objective: This study characterized performance on a suite of 8 mobile cognitive tests from the NeuroUX platform in a sample of US adults across the adult lifespan.

Methods: Overall, 393 participants completed 8 NeuroUX cognitive tests and a brief ecological momentary assessment survey once per day on their smartphones for 10 consecutive days; each test was administered 5 times over the testing period. The tests tapped the domains of executive function, processing speed, reaction time, recognition memory, and working memory. Participants also completed a poststudy usability feedback survey. We examined alternate-form test-retest reliability; practice effects; and associations between scores (averages and intraindividual variability) and demographics as well as test-taking context (ie, smartphone type, being at home vs not at home, and being alone vs not alone).

Results: Our final sample consisted of 393 English-speaking US residents (aged 20-79 y; female: n=198, 50.4%). Of the 367 participants who provided responses about their race and ethnicity, 258 (70.3%) were White. Of the 393 participants, 181 (46.1%) were iOS users, and 212 (53.9%) were Android users. Of the 12 test scores derived from the 8 tests, 9 (75%) showed good to excellent test-retest reliability (intraclass correlation coefficients >0.76). Practice effects (ie, improvements in performance) were observed for 4 (33%) of the 12 scores. Older age was associated with worse performance on most of the test scores (9/12, 75%) and with greater within-person variability for nearly all reaction time scores (3/4, 75%). Regarding smartphone type, better performance was observed among iOS users and among those with newer Android software versions compared to those with older software. Being at home (vs not at home) was associated with better performance on tests of processing speed. Being alone (vs not alone) was associated with better performance on tests of recognition and working memory. Poststudy feedback indicated that participants found NeuroUX easy to learn and use, enjoyable, and potentially helpful for understanding their thinking skills. Only 4.2% (16/379) endorsed privacy concerns, and 77.3% (293/379) reported that they would be willing to share their results with their health care provider. Older age, but not other demographic characteristics, was associated with finding the tests more challenging.

Conclusions: In a sample of adults across a wide age range, this study characterized features that are particularly important for interpreting remote, repeated mobile cognitive testing performance, including test-retest reliability, practice effects, smartphone type, and test-taking context. These data enhance the understanding and application of mobile cognitive testing, paving the way for improved clinical decision-making, personalized interventions, and advancements in cognitive research.
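
To illustrate the reliability and intraindividual-variability metrics referred to in the Methods, below is a minimal Python sketch of how such values are commonly computed from repeated session scores. The example data, column names, and use of the pingouin package are assumptions for illustration only and are not the authors' analysis code.

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one row per participant per session for one
    # test (the study administered each test 5 times; 3 sessions shown for brevity).
    df = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2, 3, 3, 3],
        "session":     [1, 2, 3, 1, 2, 3, 1, 2, 3],
        "score":       [42.0, 44.5, 43.0, 38.0, 39.5, 40.0, 50.0, 49.0, 51.5],
    })

    # Test-retest reliability: a two-way intraclass correlation that treats
    # repeated sessions as "raters" of each participant.
    icc = pg.intraclass_corr(data=df, targets="participant",
                             raters="session", ratings="score")
    print(icc[["Type", "ICC", "CI95%"]])

    # Intraindividual variability: within-person standard deviation across sessions.
    within_person_sd = df.groupby("participant")["score"].std()
    print(within_person_sd)

Averaging scores within person (eg, df.groupby("participant")["score"].mean()) would yield the per-participant average performance outcome also described in the Methods.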