Abstract
Objective: This study aimed to develop a prompt engineering procedure for test question mapping and then determine the effectiveness of test question mapping using Chat Generative Pre-Trained Transformer (ChatGPT) compared to human faculty mapping.

Methods: We conducted a cross-sectional study comparing ChatGPT and human mapping using a sample of 139 test questions from modules within the Integrated Pharmacotherapeutics course series. Three faculty members mapped the test questions to both module objectives and the Accreditation Council for Pharmacy Education Standards 2016 (Standards 2016) to establish the "correct answer". Prompt engineering procedures were created to facilitate mapping with ChatGPT, and ChatGPT mapping results were compared with human mapping.

Results: ChatGPT mapped test questions directly to the "correct answer" established by human consensus in 68.0% of cases and matched at least one individual human response in a further 20.1% of cases, for a total of 88.1% agreement with human mappers. When the human mappers fully agreed on a mapping decision, ChatGPT was more likely to map correctly.

Conclusion: This study presents a practical use case, with prompt engineering tailored for college assessment or curriculum committees, to facilitate efficient mapping of test questions to educational outcomes.
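The abstract does not reproduce the mapping prompt itself. As an illustration only, the minimal sketch below shows one way a single-question mapping prompt could be issued programmatically through the OpenAI Python API; the model name, objective list, and prompt wording are placeholders and are not the procedure reported in the study.

```python
from openai import OpenAI  # assumes the openai Python package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical module objectives; the study's actual objective list and
# prompt wording are not given in the abstract.
objectives = [
    "1. Recommend first-line pharmacotherapy for hypertension.",
    "2. Identify clinically significant drug-drug interactions.",
    "3. Design a monitoring plan for anticoagulant therapy.",
]

question = (
    "A 58-year-old patient with newly diagnosed hypertension and no "
    "comorbidities should be started on which first-line agent?"
)

# A single-question mapping prompt: ask the model to choose the
# best-matching objective and reply with its number only, so the output
# is easy to score against a faculty consensus mapping.
prompt = (
    "You are mapping pharmacy exam questions to course objectives.\n"
    "Objectives:\n" + "\n".join(objectives) + "\n\n"
    f"Question: {question}\n\n"
    "Reply with the number of the single objective this question assesses."
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; the abstract does not state the model version
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output supports reproducible mapping
)
print(response.choices[0].message.content)
```

Constraining the reply to an objective number is one simple design choice for comparing model output against human mappers; the study's actual prompt engineering procedure may differ.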