Abstract

A 15-minute computer-based test of spoken Spanish was designed to measure candidate proficiency in Spanish. The test presents seven tasks: reading, elicited imitation, word opposites, short-answer questions, sentence constructions, opinion questions, and story retellings. The tests were presented to 579 adult non-native Spanish learners and to 552 native Spanish speakers. Expert human judgments of the non-native responses showed that the spoken response material carried sufficient information for highly reliable judgments of proficiency. In the development and validation process, 57,000 responses were transcribed and 21,000 human judgments were analyzed. The paper describes the validation of the automatic scoring system with reference to concurrent oral proficiency interviews conducted by professional raters certified by the US Government or by ACTFL. Comparisons of the machine-scored tests with interactive human interviews and with human ratings of recorded speech indicate that the test produces scores carrying virtually the same information as oral proficiency interviews. Almost all the assessments correlate highly with one another, with coefficients in the range 0.86–0.96. The test's correlation with the combined interview scores (r = 0.92) is higher than the inter-rater reliability of the professional interviewers themselves.
