Abstract

Hand sign language is a medium of communication for people who are deaf or speech-impaired. In everyday social situations, however, they may need to communicate with people who do not understand sign language. This barrier can be overcome with the help of interpreters, or by hearing people learning sign language through existing media such as videos; unfortunately, both options are likely to cost considerable money and time. In response to this issue, the present study designed a system to detect hand gestures based on image processing, using the You Only Look Once (YOLO) algorithm. YOLO can detect and classify objects in a single pass without being strongly influenced by the light intensity or the background of the object, and is a deep learning method that is more accurate than other deep learning methods. The resulting system can detect and classify hand gestures across different backgrounds, light intensities, and distances with an accuracy above 90%.
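To illustrate the single-pass idea the abstract refers to, the sketch below decodes a YOLO-style grid prediction: every cell of an S × S grid predicts one bounding box (offsets, size, objectness) plus class scores, and detections above a confidence threshold are kept. This is a minimal, hypothetical sketch of the general YOLO output format, not the paper's actual implementation; the grid size, score layout, and threshold are assumptions for illustration.

```python
import numpy as np

def decode_yolo_grid(pred, num_classes, conf_thresh=0.5):
    """Decode a YOLO-style S x S x (5 + C) prediction grid.

    Each cell predicts one box: (x, y, w, h, objectness) followed by
    C class scores. x and y are offsets within the cell; w and h are
    fractions of the image size. Returns (box, class_id, score) tuples
    for every cell whose class-conditional confidence clears the
    threshold -- all cells are scored in a single pass over the grid.
    """
    S = pred.shape[0]
    detections = []
    for row in range(S):
        for col in range(S):
            cell = pred[row, col]
            objectness = cell[4]
            class_scores = cell[5:5 + num_classes]
            cls = int(np.argmax(class_scores))
            score = objectness * class_scores[cls]
            if score >= conf_thresh:
                # Convert cell-relative (x, y) to image-relative center.
                cx = (col + cell[0]) / S
                cy = (row + cell[1]) / S
                w, h = cell[2], cell[3]
                detections.append(((cx, cy, w, h), cls, float(score)))
    return detections

# Example: a 2x2 grid, 3 gesture classes, one confident cell.
pred = np.zeros((2, 2, 5 + 3))
pred[1, 0] = [0.5, 0.5, 0.2, 0.3, 0.9, 0.1, 0.8, 0.1]
dets = decode_yolo_grid(pred, num_classes=3)
```

In a real detector the grid would come from a convolutional network, each cell would predict several anchor boxes, and overlapping detections would be merged with non-maximum suppression; this sketch only shows why detection and classification happen "at once".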
