Your residency program has shifted to a virtual interview process. This shift necessitates numerous changes, ranging from virtually introducing applicants to your program and current residents to preparing faculty for the new interview format. Your program is proud of its track record of recruiting residents who are a good fit for the program and decides to keep the same overall method for ranking applicants. In preparing for recruitment, stakeholders raised strong concerns about whether the virtual interview process will continue to yield high-quality residents who fit your program, as evidenced by aligning their interests with the program's mission, earning robust examination scores, embracing the values of the profession, being a good team member, and demonstrating a capacity to learn. Given this evaluation focus (ie, fit-for-program) and the stakeholder-identified evidence (eg, aligned interests), what sources of data and methods of data collection do you need to evaluate your new virtual interview process?

Data should serve the information needs of stakeholders (utility) while also being accurate, feasible, and fair/ethical.1 Consider an adopt, adapt, and/or author (3A's) approach to data collection. Can you adopt existing data collection methods, such as surveys, performance data, and examination scores? Can you adapt an available tool to include items specific to your evaluation focus and the evidence sought (eg, add an item on teamwork or alignment with mission to an existing rating tool)? Or can you adapt an available data set with a new analysis? For example, will you rate applicants' medical school leadership roles (because leadership experience has previously been found to correlate with a good “fit” with your residency mission), or will you analyze comments on interview rating forms? Your last resort is to author a new data collection strategy, provided it is feasible (eg, realistic, cost-effective) and accurate (eg, valid, reliable). If you must author a new strategy, it should provide evidence that is not otherwise available.
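If you pursue the comment-analysis option, a simple keyword tally can serve as a first pass before formal qualitative coding. The Python sketch below illustrates one such pass under stated assumptions: the MISSION_KEYWORDS themes, the tag_comment helper, and the sample comments are all hypothetical placeholders, and a real analysis would substitute terms drawn from your program's own mission statement and a validated coding scheme.

```python
# Minimal sketch: keyword-based tagging of interview comments for
# mission-aligned language. All themes, keywords, and comments are
# hypothetical placeholders, not a validated instrument.

MISSION_KEYWORDS = {
    "teamwork": ["team", "collaborat", "colleague"],
    "mission_alignment": ["mission", "underserved", "community"],
    "capacity_to_learn": ["curious", "feedback", "growth"],
}

def tag_comment(comment: str) -> list[str]:
    """Return the themes whose keyword stems appear in a comment."""
    text = comment.lower()
    return [
        theme
        for theme, stems in MISSION_KEYWORDS.items()
        if any(stem in text for stem in stems)
    ]

if __name__ == "__main__":
    # Hypothetical comments from interview rating forms.
    comments = [
        "Strong team player; sought feedback after every rotation.",
        "Interests align closely with our community outreach mission.",
    ]
    for comment in comments:
        print(tag_comment(comment), "-", comment)
```

Even as a sketch, a tally like this only flags comments for human review; it does not replace the accuracy checks (eg, validity, reliability) that any adapted analysis still requires.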