Abstract
This study explores vocabulary assessment practices in Saudi Arabia's hybrid EFL ecosystem, where platforms such as Blackboard and Google Forms are widely used. The focus is on identifying prevalent test formats and evaluating their alignment with modern pedagogical goals. The study aims to classify vocabulary assessment formats in hybridized EFL contexts and to recommend the integration of AI-enhanced adaptive testing to improve assessment effectiveness and learner outcomes. A mixed-methods approach was employed, combining an analysis of 161 online test samples with semi-structured interviews with test designers. A taxonomy was developed that classifies tests into multiple-choice and open-response paradigms and assesses their cognitive demands and contextual usage. Results highlighted the predominance of multiple-choice and cloze tasks, which emphasize recognition over retrieval. Digital platforms facilitated test administration, but adaptive, AI-driven assessments were notably absent. The findings advocate integrating AI technologies into vocabulary assessment to create adaptive, personalized evaluations.