This paper presents several approaches to sign language recognition and learning, each addressing a distinct need within the field. The first system uses 2D image sampling and concatenation with convolutional neural networks (CNNs) to achieve accurate and robust sign recognition even with low-cost cameras. SignQuiz, a web-based tool for learning finger-spelled signs in Indian Sign Language (ISL), offers a cost-effective alternative that outperforms traditional printed learning material. A dynamic hand gesture recognition system built on deep learning architectures addresses challenges such as hand segmentation and sequence modeling and surpasses existing methods. An efficient deep CNN approach to hand gesture recognition, incorporating transfer learning, achieves high recognition rates across multiple datasets. Finally, a CNN-based model for static sign recognition in ISL, evaluated on a diverse dataset, attains high accuracy and outperforms previous work. Together, these contributions advance sign language recognition and learning with solutions that are accurate, cost-effective, and efficient, facilitating better communication and interaction for the hearing-impaired community.

Key Words: CNN, Sign language recognition, Hand gesture recognition, Machine learning, Transfer learning, SVM.
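To illustrate the transfer-learning setup mentioned above, the following is a minimal sketch (not the paper's actual implementation) of fine-tuning an ImageNet-pretrained CNN for static sign classification in PyTorch; the backbone choice, class count, and training details are assumptions for illustration only.

```python
# Illustrative sketch of CNN transfer learning for static sign recognition.
# The backbone (ResNet-18), NUM_CLASSES, and hyperparameters are assumptions,
# not details taken from the paper.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 26  # e.g., one class per finger-spelled letter (assumed)

# Load an ImageNet-pretrained backbone and freeze its convolutional layers.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a classifier head sized for
# the sign vocabulary; only this head is updated during training.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of hand-gesture images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Freezing the pretrained layers and training only the new head keeps the approach efficient on small sign datasets, which is the usual motivation for transfer learning in this setting.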