With the development of the internet, the number of users of short video platforms has grown rapidly. Social entertainment has gradually shifted from text to short video, generating large volumes of multimodal data, and traditional single-modal sentiment analysis can no longer fully handle such data. To address this issue, this study proposes a short video sentiment analysis model based on multimodal feature fusion. The model analyzes the text, speech, and visual content of a video and fuses the three modalities through a multi-head attention mechanism to analyze and classify emotions. The experimental results showed that, with a training set size of 500, the multimodal sentiment analysis model based on modal contribution recognition and multi-task learning achieved a recognition accuracy of 0.96, an F1-score of 98, and a mean absolute error of 0.21. With a validation set size of 400, its recognition time was 2.1 s, and at 60 iterations its recognition time was 0.9 s. These results show that the proposed model performs well and can accurately identify emotions in short videos.
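The fusion step described above can be illustrated with a minimal sketch: per-modality feature vectors for text, speech, and vision are stacked as a short token sequence and passed through multi-head attention before classification. The abstract does not publish code, so the dimensions, head count, class count, and all names below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    """Hypothetical sketch: fuse text, speech, and visual features with
    multi-head attention, then classify sentiment."""
    def __init__(self, dim=256, heads=4, num_classes=3):
        super().__init__()
        # Cross-modal attention over the three modality "tokens"
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(dim, num_classes)

    def forward(self, text_feat, speech_feat, visual_feat):
        # Each input: (batch, dim) vector from some per-modality encoder (assumed).
        tokens = torch.stack([text_feat, speech_feat, visual_feat], dim=1)  # (batch, 3, dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # attention across modalities
        pooled = fused.mean(dim=1)                    # average the three fused tokens
        return self.classifier(pooled)                # sentiment logits

# Usage with random placeholder features
model = MultimodalFusionClassifier()
t, s, v = (torch.randn(8, 256) for _ in range(3))
logits = model(t, s, v)  # shape: (8, 3)

In this sketch the three modalities are treated symmetrically; the paper's modal contribution recognition and multi-task learning components are not reproduced here.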