Abstract

The development and validation of a fully automatic computer-based test of spoken Spanish is described. The test takes between 15 and 20 min to complete and comprises readings, elicited imitations, opposites, short-answer questions, sentence constructions, and story retellings. The only response modality is spoken Spanish. Sets of vetted items were assembled into tests presented to over 1000 adult non-native Spanish learners and over 1000 native Spanish speakers from several countries. Expert human judgments of the non-native responses showed that the spoken response material carried sufficient information for highly reliable judgments of proficiency. In the development and validation process, over 60,000 responses were transcribed and over 21,000 human judgments were analyzed. The automatic scoring system is described. Its validation includes comparisons of machine scores with interactive human interviews and with human ratings of speech recordings. Results suggest that machine scores carry virtually the same information as Oral Proficiency Interviews conducted by U.S. Government raters or by official ACTFL interviewers. The correlation between machine scores and combined interview scores (r=0.92) is higher than the interrater reliability of the professional interviewers themselves.
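The validity comparison above rests on two quantities: the Pearson correlation between machine scores and the combined (averaged) human interview scores, and the interrater reliability, i.e. the correlation between the human raters themselves. A minimal sketch of that comparison is shown below; all score values are invented for illustration and are not data from the study.

```python
# Hypothetical illustration of the validity comparison described above.
# All score values are made-up example data, not results from the study.

from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Made-up scores for six test takers (arbitrary 20-80 scale).
machine = [35, 48, 52, 60, 67, 74]
rater_a = [33, 50, 55, 58, 70, 72]
rater_b = [36, 46, 50, 62, 65, 75]

# Validity: machine score vs. the combined (averaged) interview score.
combined = [(a + b) / 2 for a, b in zip(rater_a, rater_b)]
validity = pearson(machine, combined)

# Interrater reliability: one human rater vs. the other.
reliability = pearson(rater_a, rater_b)

print(f"machine vs. combined human: r = {validity:.2f}")
print(f"rater A vs. rater B:        r = {reliability:.2f}")
```

Averaging the two raters before correlating is one common way to form a "combined interview score"; the study's exact aggregation procedure is described in the full text, so this sketch should be read only as an outline of the comparison.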
