Abstract

People with different degrees of hearing loss face difficulties in daily communication, and hearing loss may also impair their ability to recognize facial emotions, further hindering interpersonal interaction. Using emerging information technology to address the hearing, language, regional-dialect, and emotional barriers faced by deaf people therefore has practical significance, both for smooth communication between deaf and hearing people and for improving an education system accessible to all. Against this background, this paper applies eye-movement and event-related potential (ERP) analysis to study the cognitive characteristics of deaf people in facial emotion recognition and sign language recognition tasks under different experimental paradigms. Image processing is used to extract the target region, which is then described with appropriate features; sign language video recognition, emotional speech synthesis, speech recognition, and machine learning methods refine the judgment, and sign language and emotion are finally recognized. The APP software was then tested: in experiments on 630 gesture images, the recognition rate reached 94.22% at a speed of 0.29 s/frame. The results show that the proposed algorithm effectively improves the recognition rate and meets the real-time requirements of sign language communication.
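The pipeline the abstract outlines (segment the target region from a gesture image, describe it with features, then classify) can be illustrated with a minimal sketch. This is not the paper's implementation: the thresholding segmentation, the simple region features (area and centroid), and the nearest-centroid classifier are all illustrative stand-ins for the unspecified methods.

```python
# Hypothetical sketch of the abstract's pipeline: segment the target
# (hand) region, describe it with simple region features, and classify.
# A grayscale image is represented as a list of rows of 0-255 values.

def segment(image, threshold=128):
    """Binarize a grayscale image into a target-region mask."""
    return [[1 if px >= threshold else 0 for px in row] for row in image]

def features(mask):
    """Describe the segmented region: normalized area and centroid."""
    pts = [(r, c) for r, row in enumerate(mask)
           for c, v in enumerate(row) if v]
    if not pts:
        return (0.0, 0.0, 0.0)
    h, w = len(mask), len(mask[0])
    area = len(pts) / (h * w)
    cr = sum(r for r, _ in pts) / len(pts) / h   # centroid row
    cc = sum(c for _, c in pts) / len(pts) / w   # centroid column
    return (area, cr, cc)

def classify(feat, centroids):
    """Assign the label whose feature centroid is nearest (illustrative)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feat, centroids[label]))
```

A practical system would replace the threshold with skin-color or motion segmentation, the three scalar features with a richer descriptor, and the nearest-centroid rule with a trained classifier, but the stage structure is the same.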
