Although our traditional baccalaureate nursing program had been in existence for a number of years and graduates had been successful as RNs in the local community, faculty began receiving reports that clinical staff and employers were concerned about the ability of nursing students to accurately assess and interpret vital signs. Noting that many students tended to rely on electronic blood pressure machines, faculty explored the issue with clinicians in local hospitals and learned that even experienced nurses rely heavily on technology to report vital signs. Educators made the decision to use simulation to evaluate clinical learning outcomes among sophomore nursing students in the assessment and interpretation of vital signs.

This article describes how our program assessment, undertaken in response to an identified weakness, has led to change in our nursing program and to dramatic improvement in students' skills. The public views nursing as one of the most trusted professions. As educators, we owe it to the public, and to our profession, to ensure that graduates are clinically competent.

USING SIMULATION TO EVALUATE LEARNING OUTCOMES

While simulation cannot replace real patient care experiences, it serves as a valuable adjunct for an interactive, formative learning experience (Feingold, Calaluce, & Kallen, 2004; McCausland, Curran, & Cataldi, 2004). However, evidence of the usefulness of simulation for evaluation has been sparse. Much of the nursing education literature has explored issues of self-efficacy, confidence, and learner perceptions of simulation, but research is lacking on the use of simulation for the evaluation of learning outcomes (Kaakinen & Arwood, 2009; Nehring & Lashley, 2004; Radhakrishnan, Roche, & Cunningham, 2007). McCausland and colleagues (2004) explored the potential for a reduction in errors when they used high-fidelity simulation in a heart failure scenario.
Seventy-two nursing students reported that participation in the simulation was a valuable learning experience, rated the simulated experience as having a high level of realism, and perceived that the simulation provided knowledge that would transfer to real clinical practice; 93 percent reported that simulation could be useful in preventing errors in their practice. Radhakrishnan and colleagues (2007) also reported safety improvements resulting from simulation.

A systematic analysis of the nursing simulation literature for the years 2000-2007 by Kaakinen and Arwood (2009) showed that many nurse educators were using simulation to teach rather than to evaluate. A few studies, however, have examined the use of simulations for evaluation. Lisko and O'Dell (2010) used Kolb's theory of experiential learning to evaluate how well learners integrated critical thinking and laboratory skill performance. They reported that students' evaluations frequently mentioned that scenario-based simulations helped them integrate theoretical knowledge with the psychomotor skills used in the lab. Lasater (2007) conducted an exploratory study that provides support for simulation as an evaluation tool. Rather than evaluating psychomotor measurements to infer student understanding, she developed a clinical assessment rubric aimed at measuring what students need to know. Decker, Sportsman, Puetz, and Billings (2008) state that the potential use of simulation in competency testing "cannot be achieved until educators and researchers acquire the knowledge and skills needed to use this education strategy, develop realistic case scenarios, and design and validate standardized and reliable testing methods" (p. 74).

MEASURING CLINICAL LEARNING OUTCOMES

To accomplish the task of measuring clinical learning outcomes in our program, faculty collaborated with simulation staff to develop a competency tool and scenario that objectively measure learners' ability to assess and interpret vital signs.
Faculty defined competency as the ability to accurately take vital signs on the high-fidelity manikin and to interpret the meaning of those vital signs within the context of a clinical scenario. …