Abstract
Background
Researchers have used within-subjects designs to assess personality faking in real-world contexts. However, no research is available to (a) characterize the typical finding from these studies and (b) examine variability across study results.

Aims
The current study aimed to fill these gaps by meta-analyzing actual applicants' responses to personality measures in high-stakes versus low-stakes contexts, as reported in within-subjects studies.

Materials & Methods
This meta-analysis examined 20 within-subjects applicant–honest studies (where individuals completed an assessment once as applicants and again in a low-stakes setting).

Results
Applicants had moderately higher (more socially desirable) means, slightly reduced variability, and stronger rank-order consistency in high-stakes settings. Assessment order moderated the findings: studies with a high-to-low order (where the high-stakes setting came first) showed a stronger faking effect, with higher means and weaker rank-order consistencies, than those with a low-to-high order.

Discussion and Conclusion
These findings are consistent with expectations that, relative to low-stakes situations, individuals tend to exaggerate their personality descriptions in a positive direction as job applicants. In addition, assessment order matters when interpreting the magnitudes of faking effects.