Abstract

Acoustic characteristics and articulatory movements are known to vary with speaking rate. This study investigates the effect of speaking rate on acoustic-to-articulatory inversion (AAI) performance using deep neural networks (DNNs). Since a fast speaking rate causes fast articulatory motion as well as changes in the spectro-temporal characteristics of the speech signal, the articulatory-acoustic map at a fast speaking rate could differ from that at a slow speaking rate. We examine how these differences alter the accuracy with which different articulatory positions can be recovered from the acoustics. AAI experiments are performed in both matched and mismatched train-test conditions using data from five subjects at three different rates – normal, fast and slow (the fast and slow rates are at least 1.3 times faster and slower, respectively, than the normal rate). Experiments in the matched condition reveal that the errors in estimating the vertical motion of sensors on the tongue from acoustics are significantly higher at the fast speaking rate than at the slow speaking rate. Experiments in the mismatched conditions reveal a consistent drop in AAI performance compared to the matched condition. Further experiments, in which AAI is trained on acoustic-articulatory data pooled from different speaking rates, reveal that a single DNN-based AAI model is capable of learning multiple rate-specific mappings.
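To make the AAI setup concrete, the sketch below shows the kind of regression such a model performs: a small feed-forward network mapping per-frame acoustic features to articulatory sensor positions, trained by gradient descent. This is an illustrative sketch only, not the authors' implementation; the feature dimension (39, as for typical MFCC-plus-delta features), hidden size, output dimension, and the synthetic training data are all assumptions introduced here for demonstration.

```python
import numpy as np

# Hypothetical dimensions: 39 acoustic features per frame, 12 articulatory
# targets (e.g., horizontal and vertical positions of 6 EMA sensors).
N, D_ACOUSTIC, D_HIDDEN, D_ARTIC = 512, 39, 64, 12

rng = np.random.default_rng(0)

# Synthetic stand-in for paired acoustic-articulatory frames.
X = rng.standard_normal((N, D_ACOUSTIC))
true_map = rng.standard_normal((D_ACOUSTIC, D_ARTIC)) * 0.1
Y = np.tanh(X @ true_map)  # fake articulatory trajectories

# One hidden layer with tanh activation (a minimal DNN regressor).
W1 = rng.standard_normal((D_ACOUSTIC, D_HIDDEN)) * 0.1
b1 = np.zeros(D_HIDDEN)
W2 = rng.standard_normal((D_HIDDEN, D_ARTIC)) * 0.1
b2 = np.zeros(D_ARTIC)

lr = 0.1
losses = []
for step in range(300):
    H = np.tanh(X @ W1 + b1)       # hidden activations
    pred = H @ W2 + b2             # predicted articulatory positions
    err = pred - Y
    losses.append(float(np.mean(err ** 2)))  # mean squared error
    # Backpropagation of the MSE loss (averaged over frames).
    g_pred = 2.0 * err / N
    g_W2 = H.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_H = (g_pred @ W2.T) * (1.0 - H ** 2)
    g_W1 = X.T @ g_H
    g_b1 = g_H.sum(axis=0)
    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2
```

In the pooled-rate experiment described above, the same network would simply be trained on frames drawn from all three speaking rates, letting one set of weights absorb the rate-specific acoustic-articulatory maps.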
