Abstract

We present a multi-stream Dynamic Bayesian Network model with articulatory features (AF_AV_DBN) for audio-visual speech recognition. The conditional probability distributions of the nodes are defined to account for asynchronies between the articulatory features (AFs). Speech recognition experiments are carried out on an audio-visual connected-digit database. Results show that, compared with the state-synchronous DBN model (SS_DBN) and the state-asynchronous DBN model (SA_DBN), the AF_AV_DBN model achieves the highest recognition rates when the asynchrony constraint between the AFs is set appropriately, improving the average recognition rate to 89.38% from 87.02% (SS_DBN) and 88.32% (SA_DBN). Moreover, the audio-visual multi-stream AF_AV_DBN model greatly improves on the robustness of the audio-only AF_A_DBN model: at −10 dB noise, for example, the recognition rate rises from 20.75% to 76.24%.
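The two ingredients named above, weighted combination of the audio and visual streams and a bounded asynchrony between AF streams, can be illustrated with a minimal sketch. This is a generic stream-weighting illustration under stated assumptions, not the paper's actual CPD definitions; the function names, the `audio_weight` parameter, and the `max_lag` constraint are illustrative choices of our own.

```python
def combined_log_likelihood(audio_ll, visual_ll, audio_weight=0.7):
    """Generic multi-stream fusion: a convex combination of per-stream
    log-likelihoods. The weight would normally be tuned to the noise
    condition (more weight on the visual stream at low SNR)."""
    return audio_weight * audio_ll + (1.0 - audio_weight) * visual_ll


def within_async_limit(state_a, state_b, max_lag=1):
    """Illustrative asynchrony constraint between two AF streams:
    their state indices may differ by at most `max_lag` frames/states.
    Transitions violating this bound would receive zero probability."""
    return abs(state_a - state_b) <= max_lag


# Example: at low SNR the audio stream is unreliable, so down-weighting
# it lets the visual stream dominate the combined score.
noisy_score = combined_log_likelihood(-40.0, -12.0, audio_weight=0.2)
clean_score = combined_log_likelihood(-8.0, -12.0, audio_weight=0.7)
```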
