Abstract

Unproctored assessments are widely used in pre‐employment testing. However, widely accessible large language models (LLMs) pose challenges for unproctored personnel assessments, given that applicants may use them to artificially inflate their scores beyond their true abilities. This may be particularly concerning for cognitive ability tests, which are widely used and traditionally considered less fakeable by humans than personality tests. Thus, this study compares the performance of LLMs on two common types of cognitive tests: quantitative ability (number series completion) and verbal ability (using a passage of text to determine whether a statement is true). The tests investigated are used in real‐world, high‐stakes selection. We also examine the performance of the LLMs across different test formats (i.e., open‐ended vs. multiple choice). Further, we contrast the performance of two LLMs (Generative Pretrained Transformers, GPT‐3.5 and GPT‐4) across multiple prompt approaches and “temperature” settings (i.e., a parameter that determines the amount of randomness in the model's output). We found that the LLMs performed well on the verbal ability test but extremely poorly on the quantitative ability test, even when accounting for the test format. GPT‐4 outperformed GPT‐3.5 across both types of tests. Notably, although prompt approaches and temperature settings did affect LLM test performance, those effects were mostly minor relative to differences across tests and language models. We provide recommendations for securing pre‐employment testing against LLM influence. Additionally, we call for rigorous research investigating both the prevalence of LLM usage in pre‐employment testing and how such usage affects selection test validity.
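For readers unfamiliar with the temperature parameter mentioned above, the following minimal Python sketch illustrates how an LLM might be queried with a test item at different temperature settings via the OpenAI API. The model names, example prompt, and settings shown are illustrative assumptions, not the study's actual materials or procedure.

```python
# Hypothetical sketch (not the authors' code): querying GPT-3.5 and GPT-4
# with a sample item at different temperature settings via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative number-series item; the real test items are proprietary.
prompt = "Complete the number series: 2, 4, 8, 16, ?"

for model in ("gpt-3.5-turbo", "gpt-4"):
    for temperature in (0.0, 1.0):  # 0 = near-deterministic, higher = more random
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        print(model, temperature, response.choices[0].message.content)
```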
