Abstract

Background
Use of telephone-based cognitive assessments became more widespread during the COVID-19 pandemic. The National Alzheimer's Coordinating Center (NACC) developed the telephone-based cognitive battery (T-Cog) to ensure continued data collection and participant retention. The objectives of this study were to (1) assess the feasibility and reliability of T-Cog data collected during the pandemic, (2) evaluate these data in the context of prior in-person data, and (3) inform future use of remote data collection.

Method
The Wisconsin ADRC Clinical Core staff administered T-Cog batteries to participants with normal cognition, mild cognitive impairment (MCI), or dementia (n = 380; mean age = 68.7 ± 8.9; 66.1% female; 78.4% white; mean education = 16.2 ± 2.48 years) (Table 1). Examiners rated the validity of each testing session and noted reasons for an invalid session. Reliability was assessed by comparing individual neuropsychological test performances on the T-Cog to the most recent in-person evaluation using intraclass correlation coefficients (ICCs). Chi-square tests were used to evaluate whether research diagnoses (normal cognition, MCI, dementia) varied by modality (T-Cog, in-person).

Results
Most T-Cog sessions were rated as valid by examiners (87.8%). The most common reasons for a questionable or invalid session included hearing impairments, distractions, and interruptions (Figure 1). Measures of verbal fluency (Animal Naming [ICC = 0.69], F+L Fluency [ICC = 0.73]), list-learning and memory (RAVLT Trials 1-5 [ICC = 0.75], RAVLT Long Delay Recall [ICC = 0.70], RAVLT Recognition [ICC = 0.71]), and story memory (Craft Story-Immediate [ICC = 0.65], Craft Story-Delayed [ICC = 0.73]) showed moderate to excellent reliability, and all ICCs were significant (p < 0.01), suggesting good agreement between T-Cog and in-person assessments (Table 2). Measures of attention (Digit Span Forward [ICC = 0.66], Digit Span Backward [ICC = 0.15]) ranged from moderately reliable to not reliable, respectively. Lastly, research diagnoses did not differ by testing modality.

Conclusion
These results suggest that T-Cog assessments are feasible for participants who cannot complete in-person visits, provided participants have adequate hearing and testing occurs with limited distractions and interruptions to ensure validity. Preliminary comparisons of T-Cog versus in-person performances suggest good reliability for verbal fluency, story memory, and list-learning measures. Although remote assessments can expand access and diversify the population involved in AD research, identifying assessments that are both feasible and reliable across varied administration modalities will be essential for tracking longitudinal change in cognitive status.
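
As a rough illustration of the statistics named in the Method section, the sketch below shows how ICC agreement between T-Cog and in-person scores, and a chi-square test of diagnosis by modality, might be computed in Python. The pingouin and scipy calls are real library functions, but the column names, scores, and diagnosis counts are hypothetical and not drawn from the study data; the abstract also does not specify which ICC form (e.g., ICC2 vs. ICC3) was used.

```python
# Minimal sketch (not the study's actual pipeline) of the two analyses
# described in the Method section. All data values below are invented
# for illustration only.
import pandas as pd
import pingouin as pg
from scipy.stats import chi2_contingency

# Long-format data: one row per participant per modality for a single
# test (e.g., a RAVLT Trials 1-5 total). The real study had n = 380.
scores = pd.DataFrame({
    "participant": [1, 1, 2, 2, 3, 3, 4, 4],
    "modality":    ["t_cog", "in_person"] * 4,
    "score":       [41, 44, 52, 50, 33, 36, 47, 45],
})

# Two-way ICC treating modality as the "rater"; pingouin reports several
# ICC forms (ICC1 through ICC3k) in one table.
icc = pg.intraclass_corr(
    data=scores, targets="participant", raters="modality", ratings="score"
)
print(icc[["Type", "ICC", "pval", "CI95%"]])

# Chi-square test of research diagnosis (rows) by testing modality
# (columns); the contingency counts here are hypothetical.
diagnosis_by_modality = pd.DataFrame(
    {"t_cog": [250, 90, 40], "in_person": [245, 95, 40]},
    index=["normal", "MCI", "dementia"],
)
chi2, p, dof, _ = chi2_contingency(diagnosis_by_modality)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```

A nonsignificant chi-square result, as reported in the abstract, would indicate that the distribution of research diagnoses did not differ between T-Cog and in-person testing.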