Abstract

This study presents a modified version of the Korean Elicited Imitation (EI) test, designed to resemble natural spoken language, and validates its reliability as a measure of proficiency. The study assesses the correlation between average test scores and Test of Proficiency in Korean (TOPIK) levels, examining score distributions across beginner, intermediate, and advanced learner groups. Using item response theory (IRT), the study explores the influence of four key facets—learners, items, raters, and constructs—on performance evaluation. An explanatory item response modeling (EIRM) analysis identified linguistic factors affecting performance on the EI test. Notably, the study found a strong positive correlation between EI test scores and TOPIK levels. Significant score differences were observed between the beginner and intermediate groups, as well as between the beginner and advanced groups. The IRT-based examination of each facet revealed that items were relatively easy given learners' strong overall performance, and that raters scored with a high degree of consistency. The EIRM analysis underscores the influence of variables such as the number of syllables, vocabulary score, and content word density on test performance.
