Abstract

Sign language is a major means of communication for people with hearing disabilities. However, very few hearing people have learned sign language, which creates a great barrier to communication between hearing-impaired and hearing people. While automatic speech interpretation has already been put to practical use in some fields, many difficulties remain in making sign language interpretation practical. Given the variety of sign languages, the complexity of their motions, and their many subtle differences, any method that artificially extracts feature elements from motions and feeds them to a classifier to recognize sign language motions seems bound to hit a limit. The authors are investigating a method that automatically extracts feature elements using a pre-trained network model built by deep learning and classifies each motion. The problem with deep learning is that it requires a large amount of training data to build a trained model, and acquiring enough sign language motion data to satisfy this requirement is difficult in practice. This paper presents a method of artificially creating data by data augmentation under conditions where the number of data items that can be collected is limited, thereby improving classification accuracy. The proposed method improves classification accuracy by about 10%. In addition, the application of ensemble learning, another technique for improving accuracy, is also described. The authors show that integrating the results of multiple trained models, each built on the feature elements obtained from a different pre-trained network model, improved classification accuracy significantly, by more than 10%.
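The abstract does not specify which augmentation operations were used, so the following is only a minimal sketch of the general idea: generating perturbed variants of a limited set of recorded motions. It assumes sign language motions are stored as NumPy arrays of shape (frames, joints, 3); the particular transforms (coordinate jitter, time scaling, rotation about the vertical axis) are illustrative assumptions, not the paper's method.

import numpy as np

def jitter(seq: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    """Add small Gaussian noise to every joint coordinate."""
    return seq + np.random.normal(0.0, sigma, seq.shape)

def time_scale(seq: np.ndarray, factor: float) -> np.ndarray:
    """Stretch or compress the motion in time by linear interpolation."""
    n_frames = seq.shape[0]
    new_len = max(2, int(round(n_frames * factor)))
    old_t = np.linspace(0.0, 1.0, n_frames)
    new_t = np.linspace(0.0, 1.0, new_len)
    flat = seq.reshape(n_frames, -1)
    cols = [np.interp(new_t, old_t, flat[:, c]) for c in range(flat.shape[1])]
    return np.stack(cols, axis=1).reshape(new_len, *seq.shape[1:])

def rotate_z(seq: np.ndarray, degrees: float) -> np.ndarray:
    """Rotate all joint positions around the vertical (z) axis."""
    theta = np.deg2rad(degrees)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0,            0.0,           1.0]])
    return seq @ rot.T

def augment(seq: np.ndarray, n_copies: int = 5) -> list:
    """Generate n_copies randomly perturbed variants of one recorded motion."""
    variants = []
    for _ in range(n_copies):
        out = jitter(seq)
        out = time_scale(out, np.random.uniform(0.8, 1.2))
        out = rotate_z(out, np.random.uniform(-10.0, 10.0))
        variants.append(out)
    return variants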
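For the ensemble step, the abstract says only that the results of multiple trained models, each built on features from a different pre-trained network, are integrated. One common way to do this is soft voting: averaging the class-probability outputs of the individual models. The sketch below assumes each model produces a (num_samples, num_classes) probability matrix; the model names in the usage comment are hypothetical.

import numpy as np

def ensemble_predict(prob_matrices: list) -> np.ndarray:
    """Average class probabilities across models, then pick the top class."""
    avg = np.mean(np.stack(prob_matrices, axis=0), axis=0)
    return np.argmax(avg, axis=1)

# Usage with the outputs of three hypothetical per-backbone classifiers:
# preds = ensemble_predict([probs_vgg, probs_resnet, probs_inception])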
