Abstract
This study explores the field of sign language recognition through machine learning, focusing on the development and comparative evaluation of algorithms designed to interpret sign language. With hearing impairment affecting millions of people globally, efficient sign language recognition systems are increasingly critical for improving communication for the deaf and hard-of-hearing community. We review several prior studies, whose reported algorithms achieve accuracies ranging from 63.5% to 99.6%. Building on these works, we introduce a novel algorithm that has been rigorously tested and achieves an accuracy of 99.7%. The proposed algorithm employs a convolutional neural network (CNN) architecture that outperforms existing models. This work details the methodology of the proposed system, which includes preprocessing, feature extraction, and a multi-layered CNN classifier. The performance of our algorithm sets a new benchmark in the field and suggests significant potential for real-world application in assistive technologies. We conclude by discussing the impact of these findings and propose directions for future research to further improve the accessibility and effectiveness of sign language recognition systems.
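The abstract does not specify the exact layer configuration of the proposed network; as an illustration only, the following is a minimal sketch of a multi-layered CNN classifier for static sign images, assuming 64×64 grayscale inputs and a hypothetical 26 output classes (one per letter). The class name, layer sizes, and input resolution are assumptions, not the authors' architecture.

```python
# Hypothetical sketch of a multi-layered CNN for sign language recognition.
# Layer widths, the 64x64 grayscale input size, and the 26 output classes
# are illustrative assumptions; the paper's exact design is not given here.
import torch
import torch.nn as nn

class SignCNN(nn.Module):
    def __init__(self, num_classes: int = 26):
        super().__init__()
        # Feature extraction: stacked convolution + pooling blocks.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Classification head over the flattened 128 x 8 x 8 feature maps.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: a batch of four preprocessed 64x64 grayscale hand images.
model = SignCNN()
logits = model(torch.randn(4, 1, 64, 64))
print(logits.shape)  # torch.Size([4, 26])
```

In a pipeline like the one summarized above, preprocessing (cropping, resizing, normalization) would feed such a network, with the convolutional blocks performing feature extraction and the fully connected head performing classification.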