Abstract

Fluency is a common objective in English language learning and teaching. However, researchers have noted the absence of a widely accepted definition of the construct, and this uncertainty may hinder efforts to measure fluency for research or assessment purposes. To date, the extent to which rating instruments measure fluency independently of other areas of speech production, such as complexity and accuracy, has been under-explored. This is a significant gap because the literature broadly suggests that rater scores are susceptible to halo effects that distort the measurement of speaking skills and blur boundaries between assessment criteria. To investigate this issue, the current study examines a data set of scores assigned to 77 English language learners on two speaking tasks using an analytic rating scale featuring criteria for speech complexity, accuracy and fluency (CAF). Task performances were transcribed and analysed using measures of CAF. Rater scores were analysed using many-facet Rasch measurement and multiple regression. Results revealed that rated fluency was influenced by lexical complexity, indicating that fluency scores represented more than the fluency construct outlined in the analytic scale. Measures of speech speed, phonation time ratio, length of utterance, lexical complexity, total speaking time and repair fluency explained the largest amount of variance in the fluency scores. Implications for research, language teaching and assessment are discussed.
