Abstract

The widespread use of computer-based assessments and individualized learning platforms has increased demand for the rapid production of high-quality items. Automated item generation (AIG), the process of using item models to generate new items with the help of computer technology, was proposed to reduce reliance on human subject-matter experts. While AIG has been used in test development, recent advances in machine learning algorithms offer the potential to further enhance its efficiency. This paper presents an innovative approach that uses OpenAI's transformer-based language model GPT-3 to generate reading passages. Existing reading passages were embedded in carefully engineered prompts so that the AI-generated text would match the content and structure of a fourth-grade reading passage. Multiple passages were generated for each prompt, and the final passage was selected based on Lexile score agreement with the original passage. To ensure accuracy, a human editor then lightly revised the chosen passage, correcting any grammatical and factual errors. To evaluate the effectiveness of the AI-generated passages, human judges assessed their coherence and appropriateness for fourth-grade readers. The results indicated that the GPT-3-produced passages closely resembled human-authored passages in coherence, appropriateness, and readability for the target audience. By combining GPT-3's capabilities with carefully designed prompts and human editing, this study demonstrates an efficient and effective method for generating reading passages. The findings highlight the potential of incorporating large language models into automated item generation, contributing to improved scalability and quality in educational assessment development.
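
The generate-multiple-candidates-and-select-by-Lexile step described above can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the legacy OpenAI Completions API of the GPT-3 era, a hypothetical `estimate_lexile` helper (Lexile scoring is proprietary, so a licensed analyzer or a readability proxy would stand in for it), and an arbitrary choice of model and sampling parameters.

```python
# Sketch of a generate-and-select loop for passage generation.
# Assumptions: legacy OpenAI Python library (< 1.0), a GPT-3-family model,
# and a placeholder Lexile estimator.
import openai

openai.api_key = "YOUR_API_KEY"  # assumed to be supplied by the caller


def estimate_lexile(text: str) -> float:
    """Placeholder: return an estimated Lexile measure for `text`.

    A real pipeline would call a licensed Lexile analyzer or substitute
    a readability proxy (e.g., a grade-level formula).
    """
    raise NotImplementedError


def generate_candidate(prompt: str) -> str:
    """Generate one candidate passage from an engineered prompt."""
    response = openai.Completion.create(
        engine="text-davinci-003",  # GPT-3-family model; exact choice is an assumption
        prompt=prompt,
        max_tokens=600,
        temperature=0.7,
    )
    return response.choices[0].text.strip()


def select_passage(prompt: str, original_lexile: float, n_candidates: int = 5) -> str:
    """Generate several passages and keep the one whose estimated Lexile
    score is closest to the original passage's score."""
    candidates = [generate_candidate(prompt) for _ in range(n_candidates)]
    return min(candidates, key=lambda p: abs(estimate_lexile(p) - original_lexile))
```

In this sketch, the selected passage would then go to a human editor for light revision, mirroring the workflow summarized in the abstract.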
