Abstract

Background

Remote and decentralized approaches to clinical outcome assessments in clinical trials can reduce patient and trial burden, improve recruitment and retention, and lower barriers to trial participation. Remote assessment may be facilitated through smartphone‐based cognitive assessment, especially in Bring‐Your‐Own‐Device (BYOD) trials, and can also enable novel designs, for example, high‐frequency 'burst' assessments. Applying such digital assessments in remote or unsupervised settings requires an understanding of the potential error arising from test administration and delivery platform (e.g., smartphone vs. computer).

Method

A series of studies explored the modification of the well‐validated Cogstate Brief Battery (CBB) to include practice trials with dynamic feedback to support unsupervised cognitive assessment, along with adaptations for smartphone delivery. Data were drawn from healthy populations in (1) the Healthy Brain Project, where middle‐aged adults (n = 1,594) completed unsupervised cognitive assessments on a computer; (2) a pilot study of young adults (n = 60) who completed both smartphone and computer administration; and (3) a large (n = 35,000) study of adults who completed unsupervised smartphone‐based cognitive assessments in a BYOD context.

Result

Data from middle‐aged adults enrolled in the Healthy Brain Project indicated that adaptation of the CBB for unsupervised computer‐based assessment had high levels of acceptability (98% complete data) and usability (95%), with data meeting criteria for low error rates and ease of understanding. In young adults, rates of test completion were high and comparable (>98%), and performance accuracy (d's < 0.2) was equivalent between smartphone and computer administration. Performance was systematically slower (d's > 0.4) on smartphone than on computer. Smartphone usability was high (e.g., >85% found text and button size to be 'just right'), but there was not a strong preference for smartphone over computer (56% preferred smartphone), and different issues of fatigue, distraction, and keyboard input were raised for the two platforms. Similar consistency of outcomes across platforms was observed in the large BYOD sample (N > 35,000).

Conclusion

These results indicate that smartphone assessment has high acceptability and good reliability, and that accuracy of performance is equivalent between smartphone and computer versions. The data also raise important issues for the successful adaptation and modification of cognitive tasks for smartphone administration.
