Abstract

Assessing speaking skills is regarded as a more complex and demanding process than assessing the other language skills. Considering the idiosyncratic characteristics of EFL learners, the assessment of oral proficiency becomes even more important. With this in mind, raters’ judgements need to be consistent and reliable. This study aims to compare native and non-native English language teachers’ evaluation of EFL learners’ speaking skills. Based on the oral proficiency scores in the final exam conducted at a state university in Turkey, the study analysed the scores given by native and non-native English language teachers to 80 EFL students attending preparatory classes in the 2014-2015 academic year. Three native and three non-native English language teachers participated in the study. Data were collected through an analytic rating scale and analysed with an independent samples t-test and the Pearson product-moment correlation test. The Pearson product-moment correlation coefficient (calculated as 0.763) indicated that the raters had high inter-rater reliability. T-test results revealed no statistically significant difference in the total scores given by the two groups of teachers. The study also investigated whether significant differences existed in the individual components of speaking skill, namely fluency, pronunciation, accuracy, vocabulary, and communication strategies. The only component that produced a statistically significant difference was pronunciation, which was expected prior to the research. In line with the overall findings, it can be concluded that native and non-native English language teachers display almost identical rating behaviour in assessing EFL students’ oral proficiency.
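
As a purely illustrative sketch of the kind of analysis described above (not the authors’ actual procedure or data), the following Python snippet computes a Pearson product-moment correlation between two rater groups and an independent samples t-test on their total scores using SciPy; the score arrays and the 0-100 scale are hypothetical.

```python
# Illustrative only: hypothetical scores, not the study's data.
import numpy as np
from scipy import stats

# Hypothetical total speaking scores for the same ten students,
# averaged within each rater group (0-100 scale assumed).
nest_scores = np.array([78, 85, 62, 90, 71, 83, 67, 74, 88, 80])   # native raters
nnest_scores = np.array([75, 88, 60, 92, 70, 80, 70, 72, 85, 78])  # non-native raters

# Inter-rater reliability between the two groups of raters.
r, r_p = stats.pearsonr(nest_scores, nnest_scores)
print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")

# Independent samples t-test on the total scores given by each group.
t, t_p = stats.ttest_ind(nest_scores, nnest_scores)
print(f"t = {t:.3f}, p = {t_p:.3f}")
```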

Highlights

  • Assessing language skills occupies a substantial place in the language learning process, since learners need to demonstrate what they have gained so far

  • Based on the oral proficiency scores in the final exam conducted at a state university in Turkey, the study analysed the scores given by native and non-native English language teachers to 80 EFL students attending preparatory classes in the 2014-2015 academic year. Three native and three non-native English language teachers participated in the study

  • Findings for the first research question reveal that NESTs and NNESTs have acceptable inter-rater reliability, since the total correlation coefficient was calculated as 0.763, indicating a positive correlation between the raters


Introduction

Assessing language skills occupies a substantial place in the language learning process, since learners need to demonstrate what they have gained so far. Assessment of speaking skill has always been a thorny component of language testing, as it is not easy to specify the traits of oral proficiency and to determine reliable, valid, and practical methods to assess those traits (Brown, 2001). For this reason, special attention is needed to provide a reliable and valid speaking assessment together with the raters’ internal consistency. As is known, speaking tests throughout the world are conducted by experienced native English speaking language teachers (‘NESTs’). This fact also brings about the inclusion of non-native English speaking language teachers (NNESTs) in the assessment process.
