Abstract

This report describes the initial automated scoring results obtained using the constructed responses from the Writing and Speaking sections of the pilot forms of the TOEFL Junior® Comprehensive test administered in late 2011. For all of the items except one (the edit item in the Writing section), existing automated scoring capabilities were used with only minor modifications to obtain a baseline benchmark for automated scoring performance on the TOEFL Junior task types; for the edit item in the Writing section, a new automated scoring capability based on string matching was developed. A generic scoring model from the e-rater® automated essay scoring engine was used to score the email, opinion, and listen-write items in the Writing section, and the form-level results based on the five responses in the Writing section from each test taker showed a human–machine correlation of r = .83 (compared to a human–human correlation of r = .90). For scoring the Speaking section, new automated speech recognition models were first trained, and then item-specific scoring models were built for the read-aloud, picture narration, and listen-speak items using preexisting features from the SpeechRater℠ automated speech scoring engine (with the addition of a new content feature for the listen-speak items). The form-level results based on the five items in the Speaking section from each test taker showed a human–machine correlation of r = .81 (compared to a human–human correlation of r = .89).
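The abstract does not reproduce the string-matching scorer developed for the edit item. As a minimal sketch of how such a capability might operate, the following Python fragment normalizes a test taker's response and checks it against a set of keyed corrections; the answer key, normalization steps, and function names here are illustrative assumptions, not the implementation described in the report.

```python
import re

# Hypothetical answer key: a set of corrections accepted for one edit item.
ANSWER_KEY = {
    "she walks to school every day",
    "every day she walks to school",
}

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace before matching."""
    text = re.sub(r"[^\w\s]", "", text.lower())
    return " ".join(text.split())

def score_edit_response(response: str) -> int:
    """Return 1 if the normalized response matches any keyed correction, else 0."""
    keyed = {normalize(k) for k in ANSWER_KEY}
    return int(normalize(response) in keyed)

print(score_edit_response("She walks to school every day."))  # -> 1
print(score_edit_response("She walk to school every day."))   # -> 0
```

Normalizing both sides before comparison is one plausible way a string-matching scorer can tolerate superficial differences in casing, punctuation, and spacing while still requiring an exact match on the corrected wording.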
