Abstract
Sign language recognition (SLR) has emerged as a major area of research in recent years, aiming to bridge the communication gap between the deaf and hard-of-hearing community and the hearing world. This study addresses the design and implementation of a manual alphabet recognition system using deep learning techniques, specifically convolutional neural networks (CNNs). The work focuses on building an efficient and accurate system for converting Nigerian Sign Language manual alphabets into text. By integrating computer vision and machine learning methods, the proposed system seeks to narrow the communication gap between deaf and hearing individuals. The paper describes the methodology adopted, including data collection, preprocessing, model architecture, and deployment using web-based tools. The system achieves a 95% success rate in recognizing static hand gestures, demonstrating its potential for real-world applications. However, challenges remain in recognizing dynamic gestures and in generalizing across diverse user populations. The paper concludes with recommendations for future research, emphasizing the need to incorporate temporal analysis and to extend the system's capabilities to word- and phrase-level recognition.
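The abstract does not specify the network architecture; purely as an illustration, a minimal sketch of a CNN classifier for static manual-alphabet images might look like the following. The input resolution (64×64 RGB) and the 26 output classes are assumptions, not details taken from the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 26  # assumption: one class per manual alphabet letter
IMG_SIZE = 64     # assumption: image resolution after preprocessing

# A small stack of convolution + pooling blocks followed by a dense
# classifier head, a common baseline for static gesture images.
model = models.Sequential([
    layers.Input(shape=(IMG_SIZE, IMG_SIZE, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # regularization against overfitting
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```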