Abstract

Measurement of language learners’ development in speaking proficiency is important for practicing language teachers, not only for assessment purposes, but also for evaluating the effectiveness of the materials and approaches used. However, doing so effectively and efficiently presents challenges. Commercial speaking tests are often costly and beyond the budget of many schools, and the use of frameworks such as Complexity, Accuracy, Lexis, and Fluency places great demands on teachers’ limited time. This article reports on two tests that potentially offer a practical solution to these problems. The speaking proficiency of 75 students in an oral English course at a university in Japan was measured three times over the course of an academic year using short spoken narratives assessed by human raters with a specially designed rubric, and a completely automated computer-based test. The many-facet Rasch measurement model was used to analyse the human raters and the rubric, and to provide scores for the subsequent analyses. Multilevel modeling was used to model growth in speaking proficiency over the academic year, and correlation was used to assess the relationship between the two test types. The results show that both tests were able to measure growth in speaking proficiency, but only in the first half of the year. There was a moderate correlation between concurrent administrations of the two tests, suggesting that they measure the same construct. Results of the narrative test suggest that the rubric was reliable and effective in measuring speaking proficiency in this context, and they also provide strong evidence supporting the use of FACETS when human raters are used to evaluate speaking.
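
The analysis described above can be sketched in code. The following is a minimal illustration, not the authors' actual analysis: it assumes a long-format table of Rasch-derived speaking scores with hypothetical column names (student_id, time, score) and fits a random-intercept growth model with statsmodels, analogous to modelling proficiency growth across the three test administrations; a simple Pearson correlation stands in for the comparison of the two test types.

```python
# Illustrative sketch only; column names and data layout are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr


def fit_growth_model(scores: pd.DataFrame):
    """Random-intercept growth model of speaking proficiency over time.

    Expects a long-format DataFrame with columns:
      student_id -- identifier for each student
      time       -- test occasion coded 0, 1, 2 (start, middle, end of year)
      score      -- proficiency measure for that student at that occasion
    The fixed effect for `time` estimates average growth per occasion;
    the random intercept captures between-student differences at baseline.
    """
    model = smf.mixedlm("score ~ time", data=scores, groups="student_id")
    return model.fit()


def test_correlation(narrative: pd.Series, automated: pd.Series):
    """Pearson correlation between paired scores from concurrent
    administrations of the narrative test and the automated test."""
    return pearsonr(narrative, automated)
```

A piecewise or quadratic term for `time` would be one way to capture the pattern reported in the abstract, where growth appears only in the first half of the year; the linear specification above is kept deliberately simple.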
