Medical simulation offers a tantalizing breadth and depth of potential for training and assessment in interventional radiology. It promises to provide solutions to many of the shortcomings of our traditional "apprenticeship" training. Mandatory restrictions on the in-hospital work hours of resident trainees limit both the time available for training and the breadth of case material to which the individual trainee is exposed. At the same time, advances in noninvasive imaging have reduced trainee exposure to invasive procedures. Thus, today's resident/trainee has limited opportunity to acquire the basic gateway skills (eg, selective diagnostic catheter angiography) upon which more advanced interventional skills are based (1–3). Medical simulators engineered for interventional radiology training have the potential to address these gaps.

Simulators introduce a novel capability not only to train but also to establish objective evidence of technical competence during and after training (4–6). Although there is growing evidence for their effectiveness, few medical procedural simulations have demonstrated predictive validity. In other words, in very few instances has proficiency with a medical procedural simulator been proved to transfer to the clinical situation. This transfer of trained skills to patients has now been shown for simulations of laparoscopic surgery, colonoscopy, and anesthesia (7–9), but at the time of writing, similar evidence is still being sought for endovascular simulators.

In the future, it is likely that simulations will be incorporated into certification examinations for interventional radiology (6). Although it may seem intuitive that skills learned on simulators should transfer effectively to the clinical interventional radiology environment, intuition is not evidence. A great deal of work on the development and validation of interventional radiology procedural simulations must be completed before the inclusion of simulations in board and other statutory certification examinations can be endorsed. Ideally, the development and validation of the critical measures of performance (metrics) and the test items to be used in simulators should be accomplished through a joint effort of professional societies and certifying bodies. Only in this way will we ensure that the test instrument is compatible with the educational curriculum and that the desired competencies are being assessed. Input will be required from psychologists and subject-matter experts, who will analyze knowledge and task performance, breaking them down into their key components (10,11). Metrics must be identified and used specifically for assessment of the learner; by design, this can be made to occur automatically (by the simulator) within the context of a simulation. The subject experts involved in test development must be appointed with complete transparency by the certifying authorities, and they must faithfully represent a robust interventional radiology curriculum in all facets of content, skill, and even geographic diversity.

Ideally, a single, comprehensive set of procedures, skill sets, and metrics should be defined and provided as open source for incorporation into academic and commercial simulator models. Each set of metrics based on the test items must in turn be validated for its stated purpose. The diversity of training environments represented within and across radiology societies provides an excellent opportunity for careful test validation. The use of trainees in transfer-of-training studies will show whether skills acquired through simulation indeed translate into performance in patients, and subsequent review would demonstrate whether that performance is maintained. Funding and support for this work should derive jointly from industry, the various specialty societies, and government agencies in the form of public-private partnerships.