Abstract

Verbal Paired Associates (VPA) is a widely used memory task, typically administered by a trained rater, in which lists of word pairs are learned over up to three attempts, followed by delayed recall. Automated administration and scoring could enable remote testing to monitor change over time, but this requires parallel forms of the test with equivalent psychometric properties. Here we describe the validation process and memorability characteristics of a set of 12 parallel forms of automated VPA. We generated 15 candidate VPA word-pair lists based on prior modelling. Piloting and simulation-based power calculations indicated that a sample size of 375 would provide power of 0.8 to detect lists whose mean number of errors at the first attempt differed from the grand mean by more than one error. We recruited 375 healthy participants aged 50-90 years via the Prolific online platform, using a mixed counterbalanced design to evaluate the memorability characteristics of the candidate word-pair lists. Participants were tested twice on the VPA task using different word-pair lists, separated by a distractor task. All testing was delivered via the NeuroVocalix platform on participants' own devices at home and was administered and scored automatically using automatic speech recognition; automated scoring was reviewed by a human experimenter to ensure accuracy. Memory performance on individual word-pair lists conformed broadly to our model-based expectations, with significant correlations between predicted and observed performance (p < 0.05). We observed the expected significant improvement in recall with repeated administration, indicating learning of the word pairs over the course of the task. In this healthy sample there was only modest forgetting at delayed recall. Although performance was broadly equivalent across lists, the three lists with the largest numeric deviation from the overall grand mean were excluded, yielding a final set of 12 lists for repeated administration. Qualitative feedback indicated that participants found the task acceptable and engaging. We now have a set of 12 parallel-form word-pair lists for use with the automated NeuroVocalix system, with consistent performance characteristics that enable long-term repeat testing remotely and at scale.
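
The abstract reports that simulation-based power calculations supported the sample size of 375 but does not give the details of that simulation. As a rough illustration only, the sketch below shows one way such a calculation could be framed in Python; the normal error distribution, the standard deviation, the baseline error rate, the number of observations per list, and the per-list test against the grand mean are all illustrative assumptions and are not taken from the study.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N_LISTS = 15      # candidate word-pair lists (from the abstract)
N_TOTAL = 375     # total participants (from the abstract)
EFFECT = 1.0      # one list shifted by 1 error from the grand mean
SD = 2.5          # assumed between-participant SD of first-attempt errors
BASELINE = 5.0    # assumed grand-mean error count
N_SIMS = 2000
ALPHA = 0.05

# Each participant completes two lists, so each list is seen roughly
# 2 * N_TOTAL / N_LISTS = 50 times (an assumption about the counterbalancing).
n_per_list = 2 * N_TOTAL // N_LISTS

detections = 0
for _ in range(N_SIMS):
    # Simulate first-attempt error counts: 14 "typical" lists and 1 shifted list.
    scores = [rng.normal(BASELINE, SD, n_per_list) for _ in range(N_LISTS - 1)]
    scores.append(rng.normal(BASELINE + EFFECT, SD, n_per_list))
    grand = np.concatenate(scores)

    # Flag the shifted list if its mean differs significantly from the observed
    # grand mean (a one-sample t-test; a deliberate simplification).
    t, p = stats.ttest_1samp(scores[-1], grand.mean())
    if p < ALPHA:
        detections += 1

print(f"Estimated power: {detections / N_SIMS:.2f}")

With these illustrative parameters the estimated power is the proportion of simulated datasets in which the shifted list is flagged; the published calculation may have used a different distribution, test, or design.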
