Abstract
Sign language recognition plays a pivotal role in bridging communication gaps for the speech- and hearing-impaired community. In this research, we present an innovative approach to enhance the accuracy and effectiveness of Indian Sign Language (ISL) recognition by integrating convolutional neural networks with spatial pyramid pooling and global average pooling layers. The objective of this study is to address the intricate challenges of ISL recognition by exploiting multi-scale feature representations and global context information. The architecture incorporates a spatial pyramid pooling layer to capture fine-grained spatial information across multiple scales. Additionally, a global average pooling layer is introduced to consolidate context-aware features, further improving the model's discriminative power. Experiments are conducted on a standard dataset encompassing a diverse range of gestures. The results demonstrate a significant improvement in recognition accuracy compared to conventional models. Incorporating spatial pyramid pooling and global average pooling layers enables the model to recognize gestures with high accuracy even in challenging scenarios involving variations in lighting, complex backgrounds, and signer-dependent factors. This research not only contributes to the advancement of ISL recognition technology but also holds promise for practical applications in real-world communication and assistive technologies.
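As a concrete illustration of the design described above, the sketch below shows one way such an architecture could be assembled: a small convolutional backbone whose feature maps feed both a spatial pyramid pooling branch (multi-scale bins) and a global average pooling branch (global context), with the two vectors concatenated before the classifier. The layer sizes, pyramid levels (1, 2, 4), and class count are illustrative assumptions written in PyTorch, not the exact configuration reported in the paper.

```python
# Hypothetical sketch of a CNN combining spatial pyramid pooling (SPP) and
# global average pooling (GAP); all hyperparameters are assumptions.
import torch
import torch.nn as nn


class SPPGapNet(nn.Module):
    def __init__(self, num_classes: int = 36, pyramid_levels=(1, 2, 4)):
        super().__init__()
        # Small convolutional backbone producing 128-channel feature maps.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.pyramid_levels = pyramid_levels
        # GAP branch: one value per channel, independent of input size.
        self.gap = nn.AdaptiveAvgPool2d(1)
        # SPP output length: 128 channels * (1^2 + 2^2 + 4^2) bins.
        spp_dim = 128 * sum(l * l for l in pyramid_levels)
        self.classifier = nn.Linear(spp_dim + 128, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        # SPP: pool the feature map at several grid resolutions and
        # concatenate the flattened bins -> multi-scale representation.
        spp = torch.cat(
            [nn.functional.adaptive_max_pool2d(feats, l).flatten(1)
             for l in self.pyramid_levels], dim=1)
        gap = self.gap(feats).flatten(1)  # global context vector
        return self.classifier(torch.cat([spp, gap], dim=1))


# Because SPP and GAP both produce fixed-length vectors, the network
# accepts gesture images of varying resolution.
logits = SPPGapNet()(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 36])
```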