Abstract

Clinical practice still relies heavily on traditional paper-and-pencil testing to assess a patient’s cognitive functions. Digital technology has the potential to be an efficient and powerful alternative, but for many of the existing digital tests and test batteries the psychometric properties have not been properly established. We validated a newly developed digital test battery consisting of digitized versions of conventional neuropsychological tests. Two confirmatory factor analysis models were specified: a model based on traditional neuropsychological theory and expert consensus and one based on the Cattell-Horn-Carroll (CHC) taxonomy. For both models, the outcome measures of the digital tests loaded on the cognitive domains in the same way as established in the neuropsychological literature. Interestingly, no clear distinction could be made between the CHC model and traditional neuropsychological model in terms of model fit. Taken together, these findings provide preliminary evidence for the structural validity of the digital cognitive test battery.

Highlights

  • Neuropsychological tests are an invaluable part of the clinician’s assessment toolbox when there is reason to suspect an impairment in someone’s cognitive functioning

  • Transformations were necessary for the task completion times of the Trail Making Test (TMT), Stroop, Star Cancellation Test (SCT) and O-Cancellation Test (OCT)

  • Modification indices for both models indicated that the inclusion of a covariance specification between OCT and SCT would improve the models

Introduction

Neuropsychological tests are an invaluable part of the clinician’s assessment toolbox when there is reason to suspect an impairment in someone’s cognitive functioning. Digital cognitive testing effectively addresses some of these issues (Bauer et al., 2012; Riordan et al., 2013; Zygouris and Tsolaki, 2015; Feenstra et al., 2017; Galindo-Aldana et al., 2018; Germine et al., 2019; Kessels, 2019). It has often been argued that the psychometric properties of many digital tests have not been properly established (e.g., Schlegel and Gilliland, 2007; Wild et al., 2008; Bauer et al., 2012). The evidence for agreement between paper and digital tests is mixed at best, with some studies showing no performance differences between paper-and-pencil
