Abstract

Despite the recent success of deep learning in video-related tasks, deep models typically focus on the most discriminative features, ignoring other potentially non-trivial and informative content. This characteristic heavily constrains their capability to learn the implicit visual grammars behind the collaboration of different visual cues (i.e., hand shape, facial expression and body posture) in sign videos. To this end, we approach video-based sign language understanding with multi-cue learning and propose a spatial-temporal multi-cue (STMC) network to solve the vision-based sequence learning problem. Our STMC network consists of a spatial multi-cue (SMC) module and a temporal multi-cue (TMC) module. The SMC module learns the spatial representation of different cues with a self-contained pose estimation branch. The TMC module models temporal correlations from intra-cue and inter-cue perspectives to explore the collaboration of multiple cues. A joint optimization strategy and a segmented attention mechanism are designed to make the best of multi-cue sources for sign language recognition and translation. To validate its effectiveness, we perform experiments on three large-scale sign language benchmarks: PHOENIX-2014, CSL and PHOENIX-2014-T. Experimental results demonstrate that the proposed method achieves new state-of-the-art performance on all three benchmarks.
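To make the two-module design concrete, the following is a minimal PyTorch sketch of an SMC/TMC pipeline as described above. All layer sizes, the number of cues, the backbone, and the simple fusion step are illustrative assumptions, not the authors' implementation; the pose-estimation branch, segmented attention and the joint CTC/translation objectives are omitted.

```python
# Minimal sketch of a spatial-temporal multi-cue pipeline.
# Hypothetical layer sizes and fusion; not the authors' implementation.
import torch
import torch.nn as nn


class SMCModule(nn.Module):
    """Spatial multi-cue module: a shared 2D backbone followed by one
    projection head per visual cue (full frame, hands, face, ...).
    In the paper, cue regions come from a self-contained pose branch;
    here separate linear heads stand in for that step."""

    def __init__(self, feat_dim=512, num_cues=3):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.cue_heads = nn.ModuleList(
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_cues)]
        )

    def forward(self, frames):                    # frames: (T, 3, H, W)
        feat = self.backbone(frames).flatten(1)   # (T, feat_dim)
        return [head(feat) for head in self.cue_heads]   # num_cues x (T, feat_dim)


class TMCModule(nn.Module):
    """Temporal multi-cue module: intra-cue temporal convolutions per cue,
    an inter-cue path that fuses all cues over time, and a BiLSTM encoder."""

    def __init__(self, feat_dim=512, num_cues=3):
        super().__init__()
        self.intra = nn.ModuleList(
            [nn.Conv1d(feat_dim, feat_dim, 5, padding=2) for _ in range(num_cues)]
        )
        self.inter = nn.Conv1d(num_cues * feat_dim, feat_dim, 5, padding=2)
        self.encoder = nn.LSTM(feat_dim, feat_dim // 2,
                               bidirectional=True, batch_first=True)

    def forward(self, cue_feats):                 # list of (T, feat_dim)
        seqs = [f.t().unsqueeze(0) for f in cue_feats]      # each (1, feat_dim, T)
        intra = [conv(s) for conv, s in zip(self.intra, seqs)]
        inter = self.inter(torch.cat(seqs, dim=1))          # (1, feat_dim, T)
        fused = inter + sum(intra)                           # naive additive fusion
        out, _ = self.encoder(fused.transpose(1, 2))         # (1, T, feat_dim)
        return out
```

A forward pass would chain the two modules, e.g. `TMCModule()(SMCModule()(frames))`, after which the recognition (CTC) head or the attention-based translation decoder described in the paper would consume the encoded sequence.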
