Abstract

This study presents a modified version of the Korean Elicited Imitation (EI) test, designed to resemble natural spoken language, and validates its reliability as a measure of proficiency. The study assesses the correlation between average EI test scores and Test of Proficiency in Korean (TOPIK) levels and examines score distributions across beginner, intermediate, and advanced learner groups. Using item response theory (IRT), it explores the influence of four key facets (learners, items, raters, and constructs) on performance evaluation, and an explanatory item response modeling (EIRM) analysis identifies linguistic factors affecting EI test performance. The study found a strong positive correlation between EI test scores and TOPIK levels, with significant score differences between the beginner and intermediate groups and between the beginner and advanced groups. The IRT analysis of each facet showed that item difficulty was relatively low compared with learners' ability, and that raters scored with a high degree of consistency. The EIRM analysis highlighted the number of syllables, vocabulary score, and content word density as significant influences on test performance.
