Abstract

Physical ability tests have undergone intense scrutiny in the courts since the 1970s. A recent survey of court-disputed police and fire physical ability tests showed a successful defense rate of less than 10%.1 Faced with such odds, public sector agencies have focused on the development, validation, and use of physical ability tests. A physical ability test supported by a thorough validity study but poorly used is just as likely to lose in court as a test that was poorly developed and validated. Numerous researchers have thoroughly examined performance differences between men and women on physical ability tests.2,3 Since job-related physical ability tests are likely to reflect such differences, setting pass/fail cutoffs that accurately reflect the physical ability levels required for successful job performance is a key consideration for any protective service agency involved in physical ability testing. Public sector agencies follow a variety of practices in using physical ability test scores: pass/fail cutoffs, top-down ranking, banding or grouping passing applicants, and weighting or combining the physical ability test results with other pre-employment tests. This article limits its discussion to evaluating the use of physical ability test scores independently of other selection devices, although the principles described here may also be applied when combining physical ability test scores with other pre-employment tests.
