Abstract

In the post-pandemic era, oral test apps on mobile learning terminals have become valuable auxiliary tools in pre-class preparation, in-class task-based teaching, and after-class consolidation and extension, particularly for making oral testing more intelligent and for ensuring its fairness. In this study, FACETS, a many-facet Rasch model measurement program, was used to examine rating consistency and rater severity between the automatic scoring system of a mobile intelligent app and five expert raters, based on the speaking records of 197 examinees from a mobile terminal-assisted mock oral English test. The study found that the severity of the mobile app's automatic scoring differs significantly from that of the expert raters, and that this difference has a decisive influence on the students' score distribution. The low bias rate of the app's automatic scoring implies that, with respect to internal consistency, automatic scoring is more suitable and more standardized than human rating.
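
For context, FACETS estimates the many-facet Rasch model. A minimal sketch of that model, assuming the facets used here are examinee ability, rater severity, and rating-scale thresholds (the abstract does not state the exact facet specification, which may also include a task facet), is:

\log\left(\frac{P_{njk}}{P_{nj(k-1)}}\right) = \theta_n - \alpha_j - \tau_k

where \theta_n is the ability of examinee n, \alpha_j is the severity of rater j (with the app's automatic scoring system treated as an additional "rater" alongside the five experts), and \tau_k is the threshold for moving from score category k-1 to category k. In this parameterization, a larger \alpha_j indicates a more severe rater, so the severity comparison reported in the abstract corresponds to comparing the \alpha estimate for the automatic scoring system with those of the expert raters.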
