Abstract

Developing deep neural models for continuous recognition of sign gestures and for generating sign videos from spoken sentences remains challenging, and earlier studies leave both problems largely open. Although recent approaches offer plausible solutions to these tasks, they still handle continuous sentences poorly and fall short on visual quality. Recent advances in deep learning have nevertheless set new milestones for such complex tasks. This paper proposes a deep neural framework that recognizes multilingual, multimodal sign gestures and, in addition, generates sign gesture videos from spoken sentences. The first stage performs sign gesture recognition with a hybrid CNN-LSTM model; the second stage combines neural machine translation (NMT) with a GAN to produce high-quality sign gesture videos. The proposed model is evaluated with standard quality metrics, and its performance is compared qualitatively on several benchmark sign language datasets. It achieves 98% classification accuracy in recognition and improved video quality in generation.
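The abstract gives only the high-level design of the recognition stage. As an illustration, the following is a minimal PyTorch sketch of a hybrid CNN-LSTM video classifier of the kind described: a CNN encodes each frame, an LSTM models the frame sequence, and a linear head predicts the sign class. The class name `CNNLSTMClassifier`, the backbone layers, and all layer sizes are hypothetical assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Illustrative hybrid CNN-LSTM: a small CNN encodes each frame,
    an LSTM aggregates the temporal sequence, and a linear head
    predicts the sign class. Not the paper's exact architecture."""
    def __init__(self, num_classes, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Per-frame CNN encoder (backbone and sizes are assumptions)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (B*T, 64, 1, 1)
            nn.Flatten(),              # -> (B*T, 64)
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, clips):
        # clips: (B, T, C, H, W) batch of T-frame sign gesture clips
        b, t, c, h, w = clips.shape
        feats = self.cnn(clips.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)      # out: (B, T, hidden_dim)
        return self.head(out[:, -1])   # classify from the last time step

# Usage: 2 clips of 16 RGB frames at 64x64 pixels, 10 sign classes
model = CNNLSTMClassifier(num_classes=10)
logits = model(torch.randn(2, 16, 3, 64, 64))   # -> shape (2, 10)
```

Running per-frame CNN features through an LSTM is the standard way to couple spatial and temporal modeling in such hybrids; the NMT-GAN generation stage is not sketched here because the abstract gives no detail of its architecture.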
