Purpose: Spaced repeated testing over time results in better long-term knowledge retention than repeated study of the same material. It is particularly effective when feedback is provided, initial repetitions occur early, and answering questions requires application of knowledge through short-answer or context-rich multiple-choice questions.1–4 American Board of Medical Specialties boards incorporate longitudinal knowledge assessments in their continuing certification programs,5 but most have not yet systematically incorporated spaced repetition. The goal of this study is to compare the effect of 5 different spaced repetition strategies on American Board of Family Medicine (ABFM) Diplomate knowledge retention and transfer of knowledge from one clinical scenario to another. We describe the study methods and will report preliminary results from this ongoing work.

Approach: Physicians participating in the ABFM Continuing Knowledge Self-Assessment (CKSA) receive 25 questions per quarter. After answering each question, they rate their confidence in their answer on a 6-point scale before receiving feedback, an educational critique, and a list of appropriate references. Twenty-six thousand two hundred fifty-five family physicians who completed the CKSA in the fourth quarter of 2020 (baseline period) were eligible for study inclusion. Participants were randomized to a control group or 1 of 5 spaced repetition conditions over the subsequent 5 calendar quarters (January 1, 2021, to March 31, 2022). Control group participants received no repeated questions during this period. Participants in the other 5 groups received 6 questions repeated either once or twice; the interval between repetitions differed across groups. Incorrectly answered baseline questions were prioritized for spaced repetition: those answered with extreme confidence received the highest priority, with decreasing priority for questions answered with lesser degrees of confidence.
If necessary, correctly answered questions could then be selected for spaced repetition, with higher priority given to questions answered without confidence (e.g., “guesses”) than to those answered more confidently. Physicians remain in the study unless they opt out or fail to complete all 25 CKSA questions in a quarter in which they were scheduled to receive repeated questions. In quarter 6 of the study, all remaining participants will receive their 6 repeated questions. The primary analysis will compare enduring learning (the percentage of incorrectly answered baseline questions subsequently answered correctly during quarter 6) between physicians who received any spaced repetition questions in quarters 1–5 and the control group, who will have gone 18 months without seeing repeated questions. Subgroup analyses will compare learning between physicians receiving 1 repetition and those receiving 2, between the 2 single-repetition strategies, and between the 3 double-repetition strategies. Cloned questions emphasize the same learning points as the original (base) questions and are designed to measure the transfer of knowledge from one clinical situation to another. In quarter 8 of the study, physicians in all groups will receive questions cloned from their selected baseline questions. We will then examine the effects of different numbers and intervals of spaced repetition on the transfer of knowledge from one clinical scenario to another.

Outcomes: We will present participant retention rates through the first 6 study quarters and a preliminary analysis of learning in the different groups.

Significance: To date, spaced repetition has not been systematically evaluated in large cohorts of practicing physicians. This study can inform the development and use of spaced repetition strategies by health professions educators across the continuum.
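The question-selection rule and the primary outcome described above can be sketched in code. This is a minimal hypothetical illustration only: the function names, data layout, and the encoding of the 6-point confidence scale as 1 (guess) through 6 (extremely confident) are our assumptions, not the ABFM's actual implementation.

```python
# Hypothetical sketch of the spaced-repetition selection rule and the
# enduring-learning outcome described in the abstract. All names and the
# 1 (guess) .. 6 (extremely confident) confidence encoding are assumptions.

def selection_priority(answered_correctly, confidence):
    """Return a sort key; lower keys are selected first.

    Incorrect answers outrank correct ones. Among incorrect answers,
    higher confidence means higher priority; among correct answers,
    guesses outrank confident responses.
    """
    if not answered_correctly:
        return (0, -confidence)
    return (1, confidence)

def pick_repetition_questions(responses, n=6):
    """responses: iterable of (question_id, answered_correctly, confidence)."""
    ranked = sorted(responses, key=lambda r: selection_priority(r[1], r[2]))
    return [question_id for question_id, _, _ in ranked[:n]]

def enduring_learning(baseline_incorrect_ids, quarter6_correct_ids):
    """Percentage of baseline-incorrect questions answered correctly in quarter 6."""
    missed = set(baseline_incorrect_ids)
    if not missed:
        return 0.0
    return 100.0 * len(missed & set(quarter6_correct_ids)) / len(missed)
```

For example, a confidently wrong answer would be selected for repetition before a hesitantly wrong one, and any wrong answer before a correct guess, matching the priority ordering described above.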
The results of this study will help the ABFM determine a strategy for, and the potential added value of, spaced repetition in the lifelong learning and self-assessment component of continuing certification.

Acknowledgments: The authors wish to thank Zachary Morgan, Emily Banik, Matt Wilhoite, and Prasad Chodavarapu for their assistance with data acquisition and analysis.