Continuous sign language recognition currently faces challenges such as the difficulty of acquiring skeletal data, the long training time of three-dimensional convolutional neural networks, and frequent occlusion and blurring of the hands. To address these problems, this paper proposes a continuous sign language recognition method based on object detection and coding sequences. The algorithm uses a Dual-branch Shuffle Attention Mechanism-You Only Look Once version X (DSA-YOLOX) detection network to detect the head and hands, and encodes the sign language video by partition, transforming the three-dimensional video into a one-dimensional sequence. It then uses the proposed Bi-directional Long Short-Term Memory (BiLSTM) hand coding sequence classification model, jointly weighted with Fast Dynamic Time Warping (FastDTW), to extract hand coding similarity and features while reducing the number of parameters, thereby achieving classification and recognition of unequal-length hand coding sequences. Ablation and comparison experiments show that each of the improvements performs well. The word error rate (WER) of the proposed method is 21.26% lower than that of the Dynamic Time Warping-Hidden Markov Model (DTW-HMM) and 11.53% lower than that of Long Short-Term Memory-A (LSTM-A); the giga floating-point operations (GFLOPs) of the algorithm are dramatically reduced, to about 1/13 of the Visual Alignment Constraint (VAC) model and 1/57 of the Spatial-Temporal Multi-Cue (STMC) model; and the algorithm better balances the speed and accuracy of sign language recognition.
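To illustrate how unequal-length coding sequences can be compared, the sketch below implements exact dynamic time warping (DTW) over two 1-D code sequences. This is not the paper's model: the paper uses FastDTW (a linear-time approximation of DTW) jointly weighted with BiLSTM features; this minimal example only shows the alignment idea that FastDTW approximates, with absolute difference assumed as the local distance.

```python
def dtw_distance(a, b):
    """Exact DTW distance between two unequal-length 1-D code sequences.

    Illustrative O(len(a) * len(b)) dynamic program; FastDTW
    approximates this in linear time via recursive coarsening.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j]: best cumulative cost aligning a[:i] with b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local code distance (assumed)
            cost[i][j] = d + min(cost[i - 1][j],      # skip a frame of a
                                 cost[i][j - 1],      # skip a frame of b
                                 cost[i - 1][j - 1])  # match the two frames
    return cost[n][m]

# A sequence that repeats a code (e.g. a slower signer) still aligns
# perfectly with its shorter counterpart:
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
```

In a classifier of this style, such a similarity score over hand coding sequences can be combined (weighted) with learned sequence features before the final recognition decision.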