Abstract

Sign language is an important means of communication for individuals with hearing and speech impairments, but communication barriers can still arise because grammatical rules differ across sign languages. To address these barriers, this study developed a real-time two-way communication device that uses image processing and recognition to translate two-handed Filipino Sign Language (FSL) gestures and facial expressions into speech; the system recognizes gestures that correspond to specific words and phrases. Specifically, the researchers used Convolutional Neural Networks (CNNs) to improve the device's processing speed and accuracy. The system also includes a speech-to-text (STT) feature that lets non-signers communicate with deaf individuals without relying on an interpreter. The results showed that the device achieved a 93% accuracy rate in recognizing facial expressions and FSL gestures using CNNs. The system also performed in real time, with overall average conversion times of 1.84 seconds for sign language to speech and 2.74 seconds for speech to text. Finally, the device was well received by both signers and non-signers, earning a total approval rating of 85.50% from participants at Manila High School, which suggests that it effectively facilitates two-way communication and can help break down communication barriers.
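The abstract does not describe the CNN architecture used, but the core operation of any CNN-based gesture recognizer is 2-D convolution: a small kernel is slid over the image to produce a feature map that highlights local patterns such as edges. As a minimal illustrative sketch (all names and values here are hypothetical, not taken from the study):

```python
def conv2d(image, kernel):
    """Slide a kernel over a 2-D image (valid padding, stride 1)
    and return the resulting feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    feature_map = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Elementwise multiply the kernel with the image patch and sum.
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        feature_map.append(row)
    return feature_map

# A toy 4x4 "image" with a vertical edge, and a 2x2 kernel that
# responds to horizontal intensity changes (an edge detector).
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
feature_map = conv2d(image, kernel)
# The edge between columns 1 and 2 produces a strong (nonzero) response.
```

In a full CNN, many such learned kernels are stacked in layers with nonlinearities and pooling, and the final feature maps feed a classifier that maps each image to a gesture label.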
