Abstract

Background
Frequent, remote cognitive assessments may improve sensitivity to subtle cognitive decline associated with preclinical Alzheimer’s disease (AD). To gain insight into how use of this technology can be optimized, the objective of this study was to determine whether feasibility, satisfaction, or test‐retest reliability (TRR) vary depending on task frequency schedule or demographic characteristics.

Method
Participants were recruited from a longitudinal aging cohort to complete three memory tasks via smartphone or tablet using the neotiv application: Object‐In‐Room Recall (ORR), Mnemonic Discrimination for Objects and Scenes (MDT‐OS), and Complex Scene Recognition (CSR). Participants were randomly assigned to complete 24 remote sessions over one year either (1) every other week or (2) on multiple days within one week every other month. Each session lasted approximately 10 minutes and comprised one memory task followed by a delayed retrieval session 90 minutes later. Chi‐square tests were used to assess differences in feasibility, in acceptability among participants who completed at least 3 sessions, and in satisfaction by task schedule, gender, age, or education. Intraclass correlation coefficients (ICCs) were used to determine TRR between visits 2 and 3 (“V2‐V3”) and between the averages of visits 1+2 and 3+4 (“V1+2‐V3+4”).

Result
115 participants (mean age = 68.4 years; 66% women) enrolled; 49 have completed all remote sessions to date (Table 1). Willingness to enroll differed by education (χ2 = 4.2, p = .04), such that participants with at least a bachelor’s degree were more likely to enroll. Withdrawal from the study occurred more often among those assigned to the every-other-week condition than the every-other-month condition (χ2 = 6.2, p = .01). Measures of acceptability and satisfaction did not differ by task schedule or demographics.
TRR was adequate for both V2‐V3 and V1+2‐V3+4 (CSR: ICC = 0.56 and 0.64; ORR: ICC = 0.58 and 0.66; MDT‐OS: ICC = 0.38 and 0.52).

Conclusion
Participants with a bachelor’s degree or higher were more likely to enroll. Reported satisfaction was similar for both task frequency schedules; however, dropout occurred more frequently among those assigned to complete tasks every other week, which may indicate a less optimal task schedule. TRR was moderate for all three tasks. Altogether, these results suggest that repeated mobile cognitive testing can be used to track cognitive performance remotely.
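For readers unfamiliar with how TRR is quantified, an ICC can be computed from an n-subjects × k-visits score matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single measures) as one common formulation; the abstract does not specify which ICC model was used, so the model choice and function name here are assumptions for illustration only.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    scores: (n_subjects, k_visits) array of test scores, one row per
    participant and one column per visit (e.g. V2 and V3).
    NOTE: illustrative only; the study's exact ICC model is not stated.
    """
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-visit means
    # Partition total sum of squares into subject, visit, and error terms
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    # Mean squares
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )
```

With identical scores across visits the function returns 1.0; values near the abstract's 0.38–0.66 range would indicate poor-to-moderate agreement under conventional ICC benchmarks.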