Abstract

On Internet-based intelligent teaching platforms, students' demand for English cultural content has become increasingly evident. To help students quickly locate the overall content of resources during online autonomous learning, this study constructs a video annotation model for online teaching. The method classifies text with an optimized Bidirectional Encoder Representations from Transformers (BERT) model and designs a TextRank keyword extraction model that integrates external knowledge and semantic feature weights, enabling the extraction of knowledge points contained in audio and video resources. On the experimental dataset, a relatively complete video content summary could be obtained by combining the first three sentences with the last two sentences, and the F1 value of the classification model reached up to 91.3%. In addition, the BERT-T model proposed in this article achieved the best experimental performance: its macro-F1 was 0.8% higher than that of the original BERT model and 0.5% higher than that of the RoBERTa model. In the keyword extraction experiment, B-TextRank was 2.19% and 2.85% higher than the traditional TextRank on the two datasets. The experiments show that the BERT-TextRank network resource annotation model has excellent application performance in English online autonomous teaching and can guide students' learning.
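
To illustrate the kind of keyword extraction the abstract describes, the following is a minimal sketch of a TextRank variant whose edge weights blend co-occurrence strength with a semantic similarity term. The window size, the blending factor alpha, and the toy similarity function (standing in for a BERT-derived semantic feature weight) are illustrative assumptions, not the paper's exact B-TextRank configuration.

from difflib import SequenceMatcher

import networkx as nx


def toy_similarity(u: str, v: str) -> float:
    # Stand-in for a BERT-derived semantic similarity score in [0, 1].
    return SequenceMatcher(None, u, v).ratio()


def textrank_keywords(tokens, similarity, window=4, alpha=0.5, top_k=10):
    """Rank candidate keywords by PageRank over a co-occurrence graph whose
    edge weights blend co-occurrence strength with semantic similarity."""
    graph = nx.Graph()
    for i, u in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            v = tokens[j]
            if u == v:
                continue
            # Each co-occurrence adds a blended contribution to the edge weight.
            contribution = alpha * 1.0 + (1 - alpha) * similarity(u, v)
            prev = graph.get_edge_data(u, v, default={"weight": 0.0})["weight"]
            graph.add_edge(u, v, weight=prev + contribution)
    scores = nx.pagerank(graph, weight="weight")
    return sorted(scores, key=scores.get, reverse=True)[:top_k]


if __name__ == "__main__":
    text = ("online english teaching video annotation keyword extraction "
            "english culture video resource annotation")
    print(textrank_keywords(text.split(), toy_similarity, top_k=5))

In this sketch, semantically related terms reinforce each other's edges beyond raw co-occurrence counts, which is one plausible way to realize the "semantic feature weights" idea; the paper's actual weighting scheme may differ.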
