Abstract

As a testing method, the efficacy of situational judgment tests (SJTs) is a function of several design features. One such design feature is the response format. However, despite the considerable interest in SJT design features, the extant literature offers little guidance as to which response format is superior or the conditions under which one might be preferable to others. Using an integrity-based SJT measure administered to 31,194 job applicants, we present a comparative evaluation of 3 response formats (rate, rank, and most/least) in terms of construct-related validity, subgroup differences, and score reliability. The results indicate that the rate-SJT displayed stronger correlations with the hypothesized personality traits; weaker correlations with general mental ability and, consequently, lower levels of subgroup differences; and higher levels of internal consistency reliability. A follow-up study with 492 college students (Study 2; details of which are presented in the online supplemental materials) also indicates that the rate response format displayed higher levels of internal consistency and retest reliability as well as favorable reactions from test takers. However, it displayed the strongest relationships with a measure of response distortion, suggesting that it is more susceptible to this threat. Although there were a few exceptions, the rank and most/least response formats were generally quite similar across several of the study outcomes. The results suggest that in the context of SJTs designed to measure noncognitive constructs, the rate response format appears to be the preferred response format, with its main drawback being its susceptibility to response distortion, although it is no more susceptible than the rank response format.
