We compared intensive care trainees' results on a written assessment of procedural skills with their results on one of two live assessment formats, in order to assess the concurrent validity of the different test methods. Forty-five Australasian senior trainees in intensive care medicine completed a written test on a procedural skill, as well as either a simulation format or oral viva assessment of the same skill. We analysed the correlation between written exam results and results from the simulation format or oral viva assessment. For those who completed the simulation format examination, we also maintained a narrative of actions and identified critical errors. There was limited correlation between written exam results and live (simulation or viva) procedure station results (r = 0.31). Correlation with written exam results was very low for simulation format assessments (r = 0.08) but moderate for oral viva format assessments (r = 0.58). Participants who passed a written exam based on management of a blocked tracheostomy made a number of dangerous errors when managing a simulated patient in the same scenario. The lack of correlation between exam formats supports multi-modal assessment, as it is not currently known which format best represents workplace performance. The moderate correlation between written and oral viva results may indicate redundancy between those test formats, whereas the limited correlation between simulation and written exams may support the use of both formats as part of an integrated assessment strategy. We hypothesise that the identification, in a simulation format exam, of critical candidate errors that a written exam did not expose may indicate better predictive validity for simulation format examination of procedural skills.