Abstract

Sign language is the primary means by which deaf and hard-of-hearing people, a large community, communicate with others in society. Applying new information technology to sign language recognition and translation helps enable smooth communication between hearing-impaired and hearing people. With the development of the Transformer network and the attention mechanism in machine translation, research in this area has entered a new stage. To address long-term dependencies, we propose a continuous sign language translation model based on the Transformer that incorporates relative sequence positions into the attention mechanism, replacing the original absolute position encoding. Drawing on motion characteristics, we use image differencing to dynamically compute a difference threshold and image blur detection to adaptively extract key frames. Experimental results on the RWTH-PHOENIX-Weather 2014T dataset verify the effectiveness of the proposed model.
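The abstract does not spell out how relative positions enter the attention computation. A minimal NumPy sketch of one common formulation (relative-position embeddings added to the query-key scores, with the offset clipped to a maximum distance) is shown below; the function name, the clipping scheme, and the single-head layout are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def relative_attention(X, Wq, Wk, Wv, rel_emb, max_rel):
    """Single-head self-attention with relative position bias (sketch).

    X:       (T, d) input sequence
    rel_emb: (2*max_rel + 1, d) learned relative-position embeddings
    """
    T, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Relative offset j - i for every query/key pair, clipped to [-max_rel, max_rel],
    # then shifted to a non-negative index into rel_emb.
    idx = np.clip(np.arange(T)[None, :] - np.arange(T)[:, None],
                  -max_rel, max_rel) + max_rel
    Rk = rel_emb[idx]  # (T, T, d): one embedding per query/key offset
    # Content-content term plus content-position term, scaled as usual.
    scores = (Q @ K.T + np.einsum('td,tsd->ts', Q, Rk)) / np.sqrt(d)
    # Numerically stable softmax over keys.
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V
```

Because the scores depend only on the clipped offset j - i rather than absolute indices, the same bias pattern generalizes to sequence lengths unseen at training time, which is the usual motivation for replacing absolute position encoding.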
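The key-frame step combines two checks: an inter-frame difference compared against a dynamically computed threshold, and a blur test so that only sharp frames are kept. The sketch below illustrates that pipeline on grayscale arrays; the mean-plus-standard-deviation threshold, the Laplacian-variance blur measure, and all names are assumptions, since the abstract does not fix these details.

```python
import numpy as np

def laplacian_var(img):
    """Blur score: variance of a 4-neighbor Laplacian (higher = sharper)."""
    img = img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def extract_keyframes(frames, blur_thresh=100.0):
    """Return indices of key frames from a list of (H, W) grayscale arrays."""
    diffs = [np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
             for i in range(1, len(frames))]
    # Dynamic threshold derived from the difference statistics (assumption).
    thresh = np.mean(diffs) + np.std(diffs)
    keep = [0]  # always keep the first frame
    for i, d in enumerate(diffs, start=1):
        # Keep a frame only if motion is large AND the frame is not blurred.
        if d > thresh and laplacian_var(frames[i]) > blur_thresh:
            keep.append(i)
    return keep
```

Deriving the threshold from each video's own difference statistics adapts the selection to fast or slow signing, while the blur test discards transition frames blurred by hand motion.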
