Abstract

The inability to communicate verbally is a disability. Several methods exist for exchanging thoughts and interacting; the most predominant involves the use of hand gestures. The prime motive of the proposed research work is to bridge the research gap in Sign Language Recognition with maximum efficiency. The goal is to replace the human mediator with a machine and thereby minimize human interference. This paper focuses on the recognition of American Sign Language (ASL) in real time. In the design of an automatic sign language translator, the challenging part lies in selecting a good classifier that can label static input gestures with high accuracy. In the proposed system, a convolutional neural network (CNN) architecture is used to design the classifier. The model and pipeline are built with a Keras-based CNN to classify 27 characters: the 26 letters of the English alphabet and one special character, space. The classifier was trained under different parameter configurations and the results were tabulated. The proposed study achieved an accuracy of 99.88% on the test set. The results show that model accuracy improves as more data is gathered from various subjects for training.
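As a rough illustration only, a Keras-based CNN classifier for 27 static-gesture classes (26 letters plus space) of the kind the abstract describes might be sketched as below. The input resolution, layer counts, and filter sizes here are assumptions for the sketch, not details taken from the paper.

```python
# Hypothetical sketch of a Keras CNN for 27-class static-gesture
# classification (26 ASL letters + space). Input shape (64x64 grayscale)
# and all layer sizes are illustrative assumptions, not the paper's.
from tensorflow import keras
from tensorflow.keras import layers

def build_sign_classifier(input_shape=(64, 64, 1), num_classes=27):
    model = keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),  # low-level edge features
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),  # higher-level shape features
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),  # one score per class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_sign_classifier()
```

Such a model would then be trained on labeled gesture images (e.g. via `model.fit`) and evaluated on a held-out test set.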
