Abstract

In human communication, a speaker's facial expression and lip-shape movements carry extremely rich linguistic information. Hearing-impaired people, besides using their residual hearing to communicate with others, can also use lip reading as a communication tool. When the hearing impaired learn lip reading with a computer-assisted lip reading system, they can do so freely, without the constraints of time, place, or situation. We therefore propose a computer-assisted lip reading system (CALRS) for recognizing whether the lip shape of a Mandarin phonetic pronunciation is correct, built with image processing methods, an object-oriented language, and a neural network. The system accurately compares lip images of Mandarin phonetic pronunciations using a Self-Organizing Map Neural Network (SOMNN) and extension theory to help the hearing impaired correct their pronunciation.

Keywords: hearing impaired, Self-Organizing Map Neural Network, extension theory, lip reading recognition
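To illustrate the kind of comparison the abstract describes, the following is a minimal sketch (not the authors' implementation) of a Self-Organizing Map trained on hypothetical lip-shape feature vectors and then used to label an unseen sample by its best-matching unit. The feature dimensions, class count, and grid size are all assumptions chosen for illustration; extracting such features from the lip images and the extension-theory matching step are outside the scope of this sketch.

```python
# Minimal SOM sketch for lip-shape classification (illustrative only; not the
# authors' CALRS implementation). Assumes lip-shape features have already been
# extracted from the images, e.g. mouth width, height, opening area, ratio.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 4-dimensional lip-shape features for 5 phonetic classes.
n_classes, samples_per_class, dim = 5, 20, 4
centers = rng.uniform(0, 1, size=(n_classes, dim))
data = np.vstack([c + 0.05 * rng.standard_normal((samples_per_class, dim))
                  for c in centers])
labels = np.repeat(np.arange(n_classes), samples_per_class)

# SOM grid of 6x6 neurons, each holding a weight vector in feature space.
grid_h, grid_w = 6, 6
weights = rng.uniform(0, 1, size=(grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                              indexing="ij"), axis=-1)

def best_matching_unit(x):
    """Grid coordinates of the neuron whose weights are closest to input x."""
    d = np.linalg.norm(weights - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)

# Standard SOM training: neighbourhood radius and learning rate decay over time.
n_iters = 2000
for t in range(n_iters):
    x = data[rng.integers(len(data))]
    bmu = np.array(best_matching_unit(x))
    lr = 0.5 * np.exp(-t / n_iters)
    sigma = 3.0 * np.exp(-t / n_iters)
    dist2 = np.sum((coords - bmu) ** 2, axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]  # neighbourhood function
    weights += lr * h * (x - weights)

# Label each neuron by majority vote of the training samples it wins, then
# classify a new lip-shape feature vector via the label of its BMU.
votes = {}
for x, y in zip(data, labels):
    votes.setdefault(best_matching_unit(x), []).append(y)
neuron_label = {k: np.bincount(v).argmax() for k, v in votes.items()}

test_vec = centers[2] + 0.05 * rng.standard_normal(dim)
print("predicted class:", neuron_label.get(best_matching_unit(test_vec)))
```

In a system like the one described, the predicted class would then be compared against the target Mandarin phonetic symbol so the learner can see whether the produced lip shape matches the correct pronunciation.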
