Abstract
Sign Language Recognition (SLR) aims at translating Sign Language (SL) into speech or text, so as to facilitate communication between hearing-impaired people and hearing people. This problem has broad social impact; however, it is challenging due to the variation across different signers and the complexity of sign words. Traditional methods for SLR generally use handcrafted features and Hidden Markov Models (HMMs) to model temporal information. But reliable handcrafted features are difficult to design and unable to adapt to the large variations of sign words. To approach this problem, considering that Long Short-Term Memory (LSTM) can model the contextual information of temporal sequences well, we propose an end-to-end method for SLR based on LSTM. Our system takes the moving trajectories of 4 skeleton joints as inputs without any prior knowledge and is free of explicit feature design. To evaluate our proposed model, we built a large isolated Chinese sign language vocabulary with Kinect 2.0. Experimental results demonstrate the effectiveness of our approach compared with traditional HMM-based methods.
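The abstract does not include code, but the core idea (an LSTM consuming per-frame skeleton-joint coordinates, with the final hidden state feeding a classifier over the sign vocabulary) can be sketched minimally. This is an illustrative toy implementation in plain Python, not the authors' system: the 12-dimensional input layout (4 joints × 3 coordinates per frame), the hidden size, and all weight initializations are assumptions.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class LSTMCell:
    """A minimal LSTM cell over raw vectors (illustrative sketch only)."""
    def __init__(self, input_size, hidden_size, seed=0):
        rng = random.Random(seed)
        # one weight matrix and bias per gate:
        # input (i), forget (f), candidate cell (g), output (o)
        self.W = {g: [[rng.uniform(-0.1, 0.1)
                       for _ in range(input_size + hidden_size)]
                      for _ in range(hidden_size)] for g in "ifgo"}
        self.b = {g: [0.0] * hidden_size for g in "ifgo"}

    def step(self, x, h, c):
        z = x + h  # concatenated [x_t ; h_{t-1}]
        def gate(g, act):
            return [act(sum(w * v for w, v in zip(row, z)) + bj)
                    for row, bj in zip(self.W[g], self.b[g])]
        i = gate("i", sigmoid)
        f = gate("f", sigmoid)
        g = gate("g", math.tanh)
        o = gate("o", sigmoid)
        # c_t = f * c_{t-1} + i * g ;  h_t = o * tanh(c_t)
        c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
        return h_new, c_new

# Assumed input layout: 4 skeleton joints x 3 coordinates = 12 values per frame.
INPUT, HIDDEN = 12, 8
cell = LSTMCell(INPUT, HIDDEN)
# Toy trajectory of 5 frames (stand-in for real Kinect joint data).
frames = [[0.01 * t * (j + 1) for j in range(INPUT)] for t in range(5)]
h, c = [0.0] * HIDDEN, [0.0] * HIDDEN
for x in frames:
    h, c = cell.step(x, h, c)
# In a full system, the final hidden state h would feed a softmax
# over the isolated sign vocabulary for classification.
print(len(h))
```

In a real end-to-end setup the weights would be trained by backpropagation through time, and a linear plus softmax layer on the final hidden state would produce per-sign probabilities; this sketch only shows the forward recurrence over a joint-trajectory sequence.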