This study examines the obstacles encountered by the deaf community, with particular emphasis on the growing role of sign language in effective communication. Sign Language (SL), the primary mode of communication for deaf people, conveys meaning visually through facial expressions, hand motions, and body gestures. The objective of this work is to automate sign language recognition in order to improve accessibility and reduce reliance on interpreters; specifically, it constructs an alphabet recognition system for Kurdish Sign Language (KSL). Because of its many intricacies and its resemblance to the Arabic script, KSL requires a robust recognition model. The proposed method applies Convolutional Neural Networks (CNNs) to a real dataset to accurately recognize both KSL digits and letters. The system operates in real time, rapidly recognizing hand gestures and producing textual output. The dataset comprises 132,000 hand images covering 33 alphabet signs and ten numeral (0-9) signs. Notably, the use of MediaPipe, a framework that extracts 3D hand landmarks from images, substantially improves detection efficiency. Several methodologies were investigated, and the combination of CNNs, TensorFlow, and MediaPipe achieved a remarkable accuracy of 99.87% with a negligible dropout rate. This work lays a foundation for improved communication and independence for the deaf community, representing a notable advance in the automation of sign language recognition.
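The abstract describes a pipeline in which MediaPipe extracts 3D hand landmarks that a TensorFlow CNN then classifies in real time. The sketch below illustrates one plausible shape for such a pipeline; the class count follows the abstract (33 letters plus 10 digits), while the landmark-based features, the Conv1D architecture, and the 0.2 dropout rate are illustrative assumptions rather than the authors' reported design.

```python
import cv2
import mediapipe as mp
import numpy as np
import tensorflow as tf

NUM_CLASSES = 43  # 33 alphabet signs + 10 numeral signs, per the abstract

# Hypothetical classifier: a small 1D CNN over the 21 MediaPipe hand
# landmarks (x, y, z each). The paper's actual architecture is not given
# in the abstract; this model is untrained and shown for shape only.
# Real weights would come from training on the 132,000-image dataset.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(21, 3)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dropout(0.2),  # small dropout, echoing the abstract
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

cap = cv2.VideoCapture(0)  # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV captures frames as BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        features = np.array([[p.x, p.y, p.z] for p in lm])[None, ...]
        probs = model.predict(features, verbose=0)
        print("predicted class index:", int(np.argmax(probs)))
cap.release()
```

Feeding normalized landmarks rather than raw pixels keeps the classifier small and fast enough for real-time use, which is consistent with the efficiency gain the abstract attributes to MediaPipe.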