Abstract

In this paper, we address the need to measure the intelligibility of English pronunciation with an automated speech system. The system used in this feasibility study was tested with 18 speakers from six countries representing six varieties of English (China, Vietnam, Egypt, India, South Africa, and the Philippines). The speakers were selected to represent a range of intelligibility levels, and two measures, transcription and nonsense, were used to assess their intelligibility. An automated computer model originally developed to score speaking proficiency from suprasegmental measures was applied to predict the intelligibility scores. Pearson's correlations between the human-assessed and computer-predicted scores were 0.743 for the transcription measure and 0.819 for the nonsense measure. Inter-rater reliability (Cronbach's alpha) was 0.943 for the transcription scores and 0.945 for the nonsense intelligibility scores. Depending on the type of intelligibility measure, the model drew on different suprasegmental measures to predict the score: 11 measures for the nonsense intelligibility score and eight for the transcription score, with only two features common to both. The analyses and results of this experimental model offer L2 researchers additional perspectives on measuring intelligibility in future research.
