In this issue, Whelan et al. (1) address the question of who (standardized patients or physicians) should grade the performance of students taking Objective Structured Clinical Examinations (OSCEs). This is an interesting issue that has educational as well as socioeconomic dimensions.

The last three decades of the 20th century were characterized by a significant shift in the way physician competence in Anglo-Saxon countries was defined and assessed. The adoption of performance-based frameworks, such as the influential “Miller’s Pyramid” (2), placed more emphasis on what physicians could do than on what they knew. During the same period, the influence of medical educators with training in psychometrics led to a much greater emphasis on standardization, reliability, and validity in assessment. Together, the adoption of performance and psychometric discourses created fertile ground for new assessment technologies such as the OSCE. No longer a novelty at the end of the first decade of the 21st century, OSCEs have been widely implemented by health professions around the world, including psychiatry (3).

Mental health professionals need no convincing that one of the core competencies tested in an OSCE, or in any performance-based examination for that matter, is communication skills. However, whether communication skills constitute a unified construct is less clear. Our group (4) has reported that the appropriateness of specific communication skills (e.g., open-ended versus directed questioning) varies greatly according to the clinical problem encountered. For example, the often-taught communication style that gives priority to open-ended questions and listening is appropriate for a passive and withdrawn patient but entirely inadequate for an agitated manic patient. And although the first item of most communication scales is “makes eye contact,” we know that in some cultures direct eye contact is considered intrusive and uncomfortable. Therefore, to some degree, what constitutes “appropriate” competence in a performance-based examination is a matter of perspective.

So who is best positioned to assess the adequacy of student competence in a performance-based examination? Whelan et al. (1) follow the tradition of addressing this question from a psychometric perspective of “accuracy” (5–7); that is, the rater who is best able to reliably and consistently (psychometrically) score performances is considered the most appropriate examiner.

This raises the interesting issue of what it actually means to be an examiner. At one extreme, we have interviewed individuals who argued, “There are no evaluators in the room, there are merely observers” (8). The implication is that the markers are “merely identifying behaviors that individuals perform and [that] it is the responsibility of the test administrators to compile those records into evaluations and numbers.” This idea arises from the often-made, but seldom-explicated, distinction between “assessment” and “evaluation” (9). Assessment is a process by which information is obtained relative to some known objective or goal. Assessment of skill attainment is rather straightforward: either the skill exists at some acceptable level or it does not. Skills are readily demonstrable. Inherent in the idea of evaluation, by contrast, is “value.” When we evaluate, we engage in a process designed to provide information that will help us make a judgment about a given situation.
When we evaluate, we are saying that the process will yield information regarding the worthiness, appropriateness, goodness, validity, legality, etc., of something for which a reliable measurement or assessment has been made. From this perspective, the “veracity” of the recording of a dispassionate and neutral observer is all that matters for “reliable assessment.” This view is congruent with a positivist conception that there is a reality/truth that can be objectively observed and recorded.