Abstract

Alphabet recognition is needed in many applications for retrieving information associated with the spelling of a name, such as telephone numbers, addresses, etc. This is a difficult recognition task due to the acoustic similarities between letters of the alphabet (e.g., the E-set letters). This paper presents the development of a high-performance alphabet recognizer that has been evaluated on studio-quality as well as telephone-bandwidth speech. Unlike previously proposed systems, the alphabet recognizer presented is based on context-dependent phoneme hidden Markov models (HMMs), which were found to outperform whole-word models by as much as 8%. The proposed recognizer incorporates a series of new approaches to tackle the confusions occurring between the stop consonants in the E-set and the confusions between the nasals (i.e., the letters M and N). First, a new feature representation is proposed for improved stop-consonant discrimination, and second, two subspace approaches are proposed for improved nasal discrimination. The subspace approach was found to yield a 45% error-rate reduction in nasal discrimination. Various other techniques are also proposed, yielding 97.3% speaker-independent performance on alphabet recognition and 95% speaker-independent performance on E-set recognition. A telephone alphabet recognizer was also developed using context-dependent HMMs. When tested on the recognition of 300 last names (contained in a list of 50,000 common last names) spelled by 300 speakers, the recognizer achieved 91.7% correct letter recognition with 1.1% letter insertions.

