Abstract
A large portion of the hearing and speech impaired are deprived of any education, and only around 1-2% of them receive schooling in Indian sign language. Most of them are unable to use written languages, making it difficult for them to express themselves or understand printed text. There is therefore a need to build an automated translation system to help hearing and speech impaired people communicate with others. Sign language recognition is an evolving research area in computer vision and machine learning. In this paper, recognition of Indian sign language alphabets will be carried out using deep learning and machine learning. A large database of images of sign language alphabets will be captured through a web camera. The network is trained by providing inputs and expected outputs. The project will explore various deep learning architectures, such as AlexNet and GoogLeNet, for training on the images. The models trained with these architectures will then be used for testing on other Indian sign language alphabet images, and a comparative study of the accuracy of the various architectures will be carried out.