Abstract

We examined the occurrence of faking on a rating situational judgment test (SJT) by comparing SJT scores and response styles of the same individuals across two naturally occurring situations. An SJT for medical school selection was administered twice to the same group of applicants (N = 317) under low‐stakes (T1) and high‐stakes (T2) circumstances. The SJT was scored using three different methods that were differentially affected by response tendencies. Applicants used significantly more extreme responding on T2 than T1. Faking (higher SJT score on T2) was only observed for scoring methods that controlled for response tendencies. Scoring methods that do not control for response tendencies introduce systematic error into the SJT score, which may lead to inaccurate conclusions about the existence of faking.

Highlights

  • The predictive validity evidence on situational judgment tests (SJTs) in personnel selection stimulated the introduction of SJTs in educational selection settings

  • In contrast to previous research, this study examined faking on an SJT that uses a rating response format, enabling the examination of faking through extreme responding

  • More extreme responding relates to a T1–T2 increase in the SJT score for the scoring methods that controlled for response tendencies


Summary

Results

Due to their higher susceptibility to faking, behavioral tendency instructions are of limited practical value in high-stakes medical school selection, and examining faking effects on SJTs using these instructions would have little ecological validity. Prior research has demonstrated that these scoring methods may be affected by response tendencies (e.g., extreme response style), introducing a source of systematic error that may decrease the criterion-related validity of an SJT (McDaniel et al., 2011; Weng, Yang, Lievens, & McDaniel, 2018). We examined how faking (i.e., a higher SJT score in a high-stakes setting than in a low-stakes setting) is influenced by three different scoring methods that are differentially affected by response tendencies. Hypothesis 2b: More extreme responding is related to a larger score difference between the low-stakes and high-stakes settings for a scoring method that is more strongly affected by response tendencies (i.e., a scoring method that does not control for response tendencies). We expect that the systematic error introduced by response tendencies will lower the construct validity of scoring methods that do not control for response tendencies.
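To make the role of the scoring method concrete, the minimal sketch below contrasts two hypothetical ways of scoring a single rating-format SJT item: a raw distance-from-expert-key score that does not control for response tendencies, and a within-person standardized score that does. The expert key, the applicant ratings, and the function names are invented for illustration only; they are not the three scoring methods or the data used in the study.

```python
import numpy as np

# Hypothetical expert-consensus key: mean expert effectiveness rating per
# response option of one SJT item, on a 1-9 scale (illustrative values).
expert_key = np.array([7, 3, 8, 4, 5, 2], dtype=float)

def raw_distance_score(ratings, key=expert_key):
    """Scoring that does NOT control for response tendencies:
    negated mean absolute distance between applicant ratings and the key."""
    return -np.mean(np.abs(np.asarray(ratings, float) - key))

def standardized_distance_score(ratings, key=expert_key):
    """Scoring that controls for response tendencies:
    ratings and key are z-standardized within person before computing the
    distance, so elevation and extremity of scale use drop out."""
    z = lambda x: (x - x.mean()) / x.std()
    return -np.mean(np.abs(z(np.asarray(ratings, float)) - z(key)))

# Same judgment pattern at both time points, but the T2 ratings are a more
# extreme version of the T1 ratings (t2 = 2 * t1 - 5, stretched toward the
# scale endpoints).
t1 = np.array([6, 4, 6, 4, 5, 3], dtype=float)  # low-stakes, moderate scale use
t2 = np.array([7, 3, 7, 3, 5, 1], dtype=float)  # high-stakes, extreme responding

for name, score in [("raw distance", raw_distance_score),
                    ("within-person standardized", standardized_distance_score)]:
    print(f"{name:28s} T1={score(t1):+.2f}  T2={score(t2):+.2f}")

# The raw-distance score shifts from T1 to T2 purely because of the more
# extreme scale use, while the standardized score is identical at both points:
# a toy illustration of how uncorrected scoring methods confound response
# tendencies with real score change.
```

In this toy example the applicant's relative judgments are identical at T1 and T2; only the extremity of scale use differs, yet the uncorrected score changes while the within-person standardized score does not, which is the kind of systematic error the hypotheses above are concerned with.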

METHODS
Participants
DISCUSSION
Limitations

