Abstract
This paper presents a method for describing the effect of articulatory trajectories on phoneme recognition. The proposed method comprises three stages. The first stage embeds three multilayer neural networks (MLNs): MLN(LF-DPF), which maps acoustic features, or local features (LFs), onto articulatory features, or distinctive phonetic features (DPFs); MLN(cntxt), which reduces misclassifications at phoneme boundaries; and MLN(Dyn), which controls the dynamics of the DPFs. The second stage incorporates an inhibition/enhancement (In/En) network that varies the trajectories of the articulators to achieve categorical DPF movement by enhancing DPF peak patterns and inhibiting DPF dip patterns. The third stage decorrelates the continuous DPF vectors using the Gram-Schmidt algorithm before feeding them into a hidden Markov model (HMM)-based classifier. In experiments on the Japanese Newspaper Article Sentences (JNAS) database, the proposed feature extractor demonstrates how phoneme recognition performance varies with different articulatory trajectories.
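The abstract does not detail the decorrelation step, but the Gram-Schmidt procedure it names is standard. A minimal sketch, assuming the DPF vectors are stacked as rows of a matrix (the variable names and the toy 3-dimensional vectors below are illustrative, not from the paper):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize the rows of `vectors` via classical Gram-Schmidt.

    `vectors` is an (n, d) array; returns an array whose rows are
    orthonormal and span the same subspace (near-dependent rows are
    dropped), i.e. the decorrelated feature directions.
    """
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for b in basis:
            w -= np.dot(w, b) * b      # remove the component along b
        norm = np.linalg.norm(w)
        if norm > 1e-10:               # skip (near-)linearly-dependent rows
            basis.append(w / norm)
    return np.array(basis)

# Hypothetical batch of correlated 3-dimensional DPF vectors
dpf = np.array([[1.0, 1.0, 0.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])
ortho = gram_schmidt(dpf)
# rows of `ortho` are mutually orthogonal unit vectors
```

In practice the classical variant shown here can lose orthogonality for ill-conditioned inputs; the modified Gram-Schmidt algorithm (or a QR decomposition) is numerically safer for the same purpose.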