The study reported here investigates the reliability and validity of a standardized evaluation form used to assess students' knowledge, clinical skills, interpersonal skills, and professionalism during fourth-year clinical rotations in a distributed model of veterinary education. A form designed to assess veterinary knowledge (5 items), clinical skills (7 items), interpersonal skills (3 items), and professionalism (6 items) was used by clinical preceptors to evaluate student performance across different rotations. For the period January--May 2007, 218 evaluations were completed for 81 students; each student was assessed in at least two rotations. Mean scores across the 21 items ranged from 3.42 (SD = 0.61) to 3.87 (SD = 0.37). Construct validity was assessed using exploratory factor analysis. The 21 items loaded on three underlying factors (professionalism, knowledge, and clinical skills), which together accounted for 70.35% of the variance. Internal consistency (Cronbach's alpha) was high, ranging from 0.88 for the clinical skills subscale to 0.94 for the professionalism subscale, and reached 0.96 for the entire tool. Correlations between subscales were significant (p < 0.01), ranging from r = 0.62 to r = 0.76. Preliminary analysis suggests that the evaluation tool has good internal reliability. The construct validity analysis suggests that certain items intended to measure interpersonal skills and clinical skills were instead assessing knowledge or professionalism. Clinical preceptors were able to differentiate between levels of student performance in knowledge and clinical skills. Challenges associated with the assessment of professionalism are discussed.
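The internal-consistency statistic reported above can be illustrated with a minimal sketch. The snippet below computes Cronbach's alpha from a rater-by-item score matrix using the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total score). The `ratings` matrix is entirely hypothetical (it is not the study's data) and stands in for a small 4-item subscale scored on a 1--4 scale.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_evaluations x n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    where k is the number of items. Sample variances use ddof=1.
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 evaluations on a 4-item subscale (1-4 rating scale).
ratings = np.array([
    [4, 4, 3, 4],
    [3, 3, 3, 3],
    [4, 3, 4, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 4],
    [3, 3, 2, 3],
])
print(round(cronbach_alpha(ratings), 2))
```

Values near the 0.88--0.96 range reported in the abstract indicate that items within a subscale vary together across evaluations, i.e. they appear to measure a common construct.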