Abstract

This paper investigates the use of multi-distribution deep neural networks (MD-DNNs) for automatic intonation classification in second-language (L2) English speech. If the classified intonation differs from the target one, a mispronunciation is considered detected and appropriate diagnostic feedback can then be provided. To transcribe speech data for intonation classification, we propose the RULF labels, which transcribe an intonation as rising, upper, lower, or falling. These four types of labels can be further merged into two groups: rising and falling. Based on the annotated data from 100 Mandarin and 100 Cantonese learners, we develop an intonation classifier that considers only 8 frames (i.e., 80 ms) of pitch values prior to the end of the pitch contour over an intonational phrase (IP). This classifier determines the intonation of L2 English speech as either rising or falling with an accuracy of 93.0%.
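
The sketch below illustrates the kind of feature the abstract describes: the last 8 voiced pitch frames (80 ms at an assumed 10 ms frame shift) before the end of an intonational phrase, fed to a small feed-forward classifier for a rising/falling decision. The pitch normalization, the synthetic contours, and the classifier configuration are illustrative assumptions, not the authors' MD-DNN setup.

```python
# Hypothetical sketch: extract the last 8 pitch frames of an IP and
# classify the contour as rising (1) or falling (0).
import numpy as np
from sklearn.neural_network import MLPClassifier

N_FRAMES = 8  # 8 frames x 10 ms = 80 ms before the end of the pitch contour


def tail_pitch_features(pitch_contour):
    """Last N_FRAMES voiced pitch values of an IP, in semitones relative to
    the phrase mean (unvoiced frames, marked as 0, are dropped)."""
    voiced = pitch_contour[pitch_contour > 0]
    tail = voiced[-N_FRAMES:]
    if len(tail) < N_FRAMES:                      # pad very short contours
        tail = np.pad(tail, (N_FRAMES - len(tail), 0), mode="edge")
    return 12 * np.log2(tail / np.mean(voiced))


# Synthetic rising and falling contours (Hz), purely for illustration.
rng = np.random.default_rng(0)

def synth(direction, n=200):
    base = rng.uniform(120, 220, size=(n, 1))     # speaker baseline F0
    slope = direction * rng.uniform(5, 30, size=(n, 1))
    t = np.linspace(0, 1, 50)
    return base + slope * t + rng.normal(0, 2, size=(n, 50))

X = np.vstack([np.apply_along_axis(tail_pitch_features, 1, synth(+1)),
               np.apply_along_axis(tail_pitch_features, 1, synth(-1))])
y = np.array([1] * 200 + [0] * 200)               # 1 = rising, 0 = falling

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

On real L2 speech the features would come from an F0 tracker over each annotated IP rather than from synthetic contours, and the binary labels would be obtained by merging the four RULF labels into the rising and falling groups.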
