Clinical education is a vital element of training for physical therapy (PT) students, providing opportunities both to integrate classroom theory with the realities of clinical practice and to foster professional socialization and growth.2 However, the role of educator poses challenges for clinicians, and finding adequate capacity for students in clinical settings can be problematic: busy clinical caseloads with increasingly high-acuity patients and a focus on rapid discharge reduce the time clinicians can spend with students, and in the private sector time taken away from direct patient care can also mean loss of income.2,3 Placement requirements such as formal student evaluations should not deter clinicians from taking on students by making excessive demands on their time; evaluation tools must be user-friendly for educators, must measure student performance in a valid and reliable way, and must be applicable to client populations ranging from neonates to the frail elderly. The challenge of developing an appropriate tool for this purpose is not limited to the Canadian context; in Australia, for example, the Assessment of Physiotherapy Practice Instrument (APP) was recently developed to address issues similar to those experienced by Canadian clinicians: lack of time; the need for a competency-based, reliable, and valid tool; and the need for a format that is more universally used and accepted.4(p.5–56) Anderson and colleagues address three important elements of creating a new clinical evaluation tool for Canadian PT students: the rating scale, the format for the new tool, and training in the use of the tool.1 Their article is timely: development of a new Canadian evaluation tool is well underway, and a pilot study has just been completed.
While the US-based Physical Therapist Clinical Performance Instrument (PT-CPI)5 has been considered the “gold standard” evaluation tool for students completing clinical placements in most Canadian PT programmes for more than a decade, clinical and academic educators' growing dissatisfaction with the PT-CPI has prompted the development of a new tool more closely aligned with Canadian needs. The three criteria examined by Anderson and colleagues are key areas of tool development. First, the format must be usable by clinicians in a wide range of clinical settings, from tertiary critical-care centres in urban contexts to rural locations in both Canada and developing nations, where Internet access may be unreliable. Second, the rating scale needs to be clear, easily understood, and informative for both clinicians and students. I have observed that many clinical educators (CEs) are confused by the visual analogue scale (VAS) used in the PT-CPI, ranking the majority of students as “near entry-level” for most competencies, even on junior placements. The scale's two anchors—“novice clinical performance” and “entry-level performance”—provide little guidance for clinicians;6 without adequate cues, CEs may instinctively mark a “passing” student at the far right of the scale, regardless of whether or not that “pass” represents entry-level performance. CEs, particularly those with years of experience, may also struggle to identify what basic entry-level performance actually is and may therefore rely to some degree on “gut feel,”7,8 compounding inaccurate or inconsistent performance measurement. Third, training for use of evaluation tools must be easily accessible and address the competing demands on clinicians' time; a newer and possibly more acceptable version of the PT-CPI has not been implemented in Canada, partly because of the lengthy orientation requirement for CEs.
One aspect of tool development not addressed by Anderson and colleagues is the student perspective. For students, clinical education courses are high stakes, not only because a failed placement can have serious consequences for their academic progression but also because students often perceive clinical placements as “where real learning occurs.” Evaluation tools therefore need to be “educationally informative”9 as well as to provide a valid rating of competence, and in most programmes the tool is used for formative feedback during placements as well as for summative feedback at the end. The recent pilot study of the draft of the new Canadian tool included both student and preceptor feedback, and it will be interesting to compare and contrast clinicians' and students' perspectives on the tool.