Abstract

Sports news is a type of discourse characterized by a specific vocabulary, style, and tone, and it typically conveys information about sporting events, athletes, and teams. Thematic context-based deep learning is a powerful approach for analyzing and interpreting many forms of natural language, including the discourse of sports news. An application model of sign language and lip language recognition based on deep learning is proposed to help people with hearing impairments obtain sports news content more easily. First, the lip language recognition system is constructed; next, a lightweight MobileNet network combined with Long Short-Term Memory (LSTM) is used to extract lip-reading features, and a ResNet-50 residual network is adopted to extract sign language features; finally, the convergence, accuracy, precision, and recall of the models are evaluated. The results show that the training and test losses converge gradually as the number of iterations increases; the lip language recognition model and the sign language recognition model stabilize after 14 and 12 iterations, respectively, indicating that sign language recognition converges faster. The accuracy of sign language recognition and lip language recognition is 98.9% and 87.7%, respectively. In sign language recognition, the recognition accuracy for the digits 1, 2, 4, 6, and 8 reaches 100%; in lip language recognition, the recognition accuracy for the digits 2, 3, and 9 is relatively high. This work can help hearing-impaired people quickly access the relevant content of sports news videos and also support their communication.
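
To illustrate the architecture named in the abstract, the following is a minimal sketch, not the authors' implementation, of a MobileNet + LSTM lip-reading classifier in TensorFlow/Keras. The clip geometry (20 frames of 112 x 112 mouth-region crops) and the 10-class digit output are assumptions made for illustration; a ResNet-50 image classifier (tf.keras.applications.ResNet50) would play the analogous role for the sign language branch.

    # Minimal sketch (not the paper's code): MobileNet + LSTM lip-reading classifier.
    # Assumed input: clips of shape (frames, height, width, channels); 10 digit classes.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    FRAMES, H, W, C = 20, 112, 112, 3   # assumed clip geometry
    NUM_CLASSES = 10                    # digits 0-9, as in the abstract

    # Lightweight MobileNet backbone producing one feature vector per frame.
    backbone = tf.keras.applications.MobileNet(
        include_top=False, weights=None, input_shape=(H, W, C), pooling="avg"
    )

    inputs = layers.Input(shape=(FRAMES, H, W, C))
    # Apply the same CNN to every frame, then model temporal dynamics with an LSTM.
    frame_features = layers.TimeDistributed(backbone)(inputs)
    temporal = layers.LSTM(128)(frame_features)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(temporal)

    lip_model = models.Model(inputs, outputs)
    lip_model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

In this sketch, TimeDistributed shares one MobileNet across all frames and the LSTM captures the temporal dependencies between them; the paper's actual preprocessing, feature dimensions, and training settings may differ.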
