Abstract

Purpose: This study sought to replicate a previous investigation of construct-irrelevant variance in the Association of Social Work Boards (ASWB) clinical licensing exam, completed by Albright and Thyer over a decade ago. Method: The performance of ChatGPT was assessed on a modified version of 50 newly developed clinical exam questions currently distributed by the ASWB, in which only the four multiple-choice response options for each item were presented, without the accompanying question stem. Results: ChatGPT achieved an average accuracy rate of 73.3% across three rounds of testing, providing strong evidence of construct-irrelevant variance. Discussion: These results raise concerns about the construct validity of the clinical exam and underscore the need to reassess its structure and content to ensure fairness and accuracy. Based on the findings, state legislators and regulators are encouraged to temporarily discontinue the use of the ASWB exam in the clinical licensure process until its validity flaws are resolved.
