Abstract
Sign language is a visual-gestural language used by deaf and hard-of-hearing people, who convey meaning through gesture. The main obstacle to sign language communication is that most hearing people do not understand it, which makes sign language recognition a challenging machine learning problem. In this paper, the researchers focus on vision-based methods and optimize the data preprocessing applied to existing sign language resources. They propose a novel video-processing technique for sign language recognition, called Sequenced Edge Grid Images (SEGI), that captures hand gestures, body movement, and facial expression. The researchers collected sign language data from the internet, including Thai sign language used in everyday life. The proposed technique was implemented with a convolutional neural network (CNN). Experiments showed that SEGI with a CNN increased the test accuracy by approximately 11% compared with static hand-gesture images. Finally, the researchers identified a CNN structure suited to the dataset and evaluation data by transferring a pre-trained CNN; fine-tuning with the SEGI technique achieved 99.8% accuracy, the highest among all the methods. The results show that the data-preprocessing technique for dataset generation, combined with deep transfer learning, is an effective way to improve the accuracy of sign language recognition.
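The abstract gives no implementation details for SEGI, but the name suggests sampling frames from a sign video, converting each to an edge map, and tiling the maps into one grid image that a CNN can consume. The sketch below is only one plausible reading of that idea, assuming evenly spaced frame sampling, Canny edge detection, and a 4x4 grid; the grid shape, cell size, and edge thresholds are illustrative choices, not values from the paper.

```python
import cv2
import numpy as np

def make_segi(video_path, grid_rows=4, grid_cols=4, cell_size=56):
    """Build a Sequenced Edge Grid Image (SEGI) from a video clip.

    Samples grid_rows * grid_cols frames evenly across the clip,
    converts each to a Canny edge map, and tiles the maps row-major
    into a single grayscale grid image. All parameters here are
    illustrative assumptions, not values from the paper.
    """
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    n_cells = grid_rows * grid_cols
    # Evenly spaced frame indices spanning the whole clip.
    indices = np.linspace(0, max(total - 1, 0), n_cells).astype(int)

    cells = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            # Pad with a blank cell if a frame cannot be read.
            cells.append(np.zeros((cell_size, cell_size), np.uint8))
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, 100, 200)  # edge map of the gesture frame
        cells.append(cv2.resize(edges, (cell_size, cell_size)))
    cap.release()

    # Tile edge maps row-major into one grid image for the CNN.
    rows = [np.hstack(cells[r * grid_cols:(r + 1) * grid_cols])
            for r in range(grid_rows)]
    return np.vstack(rows)  # shape: (grid_rows*cell_size, grid_cols*cell_size)

# Usage: segi = make_segi("sign_clip.mp4")
# The resulting grid image is fed to the CNN as a single input,
# letting a 2-D network see the temporal sequence of the gesture.
```

A grid layout like this lets an ordinary image classifier, including a fine-tuned pre-trained CNN as described in the abstract, process temporal gesture information without a recurrent or 3-D architecture.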