Abstract

In this paper, a high-level semantic recognition model is used to parse the video content of human sports under engineering management. The manifold structure of each layer is embedded into the convolutional operation of the next layer, so that every layer of the convolutional neural network effectively preserves the manifold structure of the layer before it; this yields a video image feature representation that reflects both the nearest-neighbor relationships and the association features of the images. The method is applied to image classification, and the experimental results show that it extracts image features more effectively, thereby improving classification accuracy. Because fine-grained actions usually share highly similar appearances and motion patterns, differing only in small local regions, and inspired by the human visual system, this paper proposes integrating a visual attention mechanism into the fine-grained action feature extraction process so that features are extracted around discriminative cues. Taking the problem as a guide, we formulate a tacit knowledge management strategy for athletes, select the distinctive freestyle aerial skills national team as the object of empirical analysis, compose a more scientific and organization-specific tacit knowledge management program, exert influence on the members during implementation, and revise the result into a tacit knowledge management implementation program with promotion value. Group behavior can be identified by analyzing the behavior of individuals and the interactions between them: individual interactions within a group are captured by individual representations, and relationships between individual behaviors are analyzed by modeling the relationships between those representations.
On mismatched datasets, the long short-term memory network based on temporal information and the language recognition method with high-level semantic embedding vectors deliver comparable gains, improving by about 12.6% and 23.0%, respectively, over the method using the original model. Compared with the i-vector baseline system using a support vector machine classifier with radial basis function kernels, the performance improvements are about 10.10% and 10.88%, respectively.
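To make the attention-based feature extraction described above concrete, the following is a minimal sketch of attention-weighted pooling over a convolutional feature map. All names and the softmax-pooling formulation here are illustrative assumptions, not the paper's actual implementation; `w_att` stands in for a learned attention projection.

```python
import numpy as np

def spatial_attention_pool(feature_map, w_att):
    """Weighted-pool a CNN feature map with a spatial attention map.

    feature_map: (H, W, C) activations from a convolutional layer.
    w_att:       (C,) attention projection (stand-in for a learned vector).
    Returns a (C,) feature vector that emphasizes the spatial positions
    most relevant to the attention scores, mimicking how fine-grained
    recognition focuses on small discriminative regions.
    """
    h, w, c = feature_map.shape
    flat = feature_map.reshape(h * w, c)             # one row per spatial position
    scores = flat @ w_att                            # relevance score per position
    scores -= scores.max()                           # numeric stability for exp
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over positions
    return weights @ flat                            # attention-weighted pooling
```

With uniform attention scores this reduces to average pooling; the attention only changes the result when some local regions score higher than others, which is exactly the fine-grained case.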

Highlights

  • With the continuous development of information technology, the ways in which people obtain and store massive amounts of video information continue to diversify, and video has gradually become the mainstream multimedia data carrier

  • Video semantic concept analysis refers to the generalized description of video content after obtaining video sequences; the content of events, scenes, objects, and so forth constitutes the multicategory semantic information contained in semantic concepts

  • The impact of different coding methods combined with normalization methods on the classification performance of probabilistic latent semantic analysis models is examined, and it is found experimentally that local soft-assignment coding combined with exponential normalization substantially improves recognition performance; the impact of principal component analysis preprocessing of the raw features is also examined, and when the features contain more noisy components, the computational effort is significantly reduced while classification recognition performance even improves
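The coding and normalization steps named in the last highlight can be sketched as follows. This is a generic sketch of local soft-assignment coding and power ("exponential") normalization as commonly used in bag-of-features pipelines; the parameter names (`k`, `beta`, `alpha`) and the exact normalization form are assumptions, not taken from the paper.

```python
import numpy as np

def local_soft_assignment(x, codebook, k=5, beta=1.0):
    """Local soft-assignment coding of one descriptor against a codebook.

    Only the k nearest codewords receive non-zero weights (Gaussian kernel
    of the squared distance), which is what makes the assignment 'local'.
    x: (D,) descriptor; codebook: (K, D) visual words. Returns a (K,) code.
    """
    d2 = ((codebook - x) ** 2).sum(axis=1)  # squared distance to each word
    nn = np.argsort(d2)[:k]                 # indices of the k nearest words
    code = np.zeros(len(codebook))
    w = np.exp(-beta * d2[nn])              # kernel weights on the neighbors
    code[nn] = w / w.sum()                  # normalize so the code sums to 1
    return code

def exponential_normalize(hist, alpha=0.5):
    """Power normalization followed by L2: dampens bursty codeword counts
    in the pooled histogram before it is fed to a classifier."""
    h = np.sign(hist) * np.abs(hist) ** alpha
    n = np.linalg.norm(h)
    return h / n if n > 0 else h
```

In a full pipeline, each local descriptor of a video frame would be coded this way, the codes pooled into a histogram, and the histogram normalized before classification.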



Introduction

With the continuous development of information technology, the ways in which people obtain and store massive amounts of video information continue to diversify, and video has gradually become the mainstream multimedia data carrier. Faced with huge video data resources, users confront the challenge of efficiently retrieving the video resources that match their interests [1]. Therefore, the massive video resources must be classified and organized intelligently so that users can retrieve them according to their preferences. Video semantic analysis technology can annotate and classify the important semantic information in videos, and users can then retrieve videos by their preferred categories, which improves the efficiency of access to information. Traditional manual annotation can achieve the understanding and description of video semantic concepts to a certain extent, but its time and labor costs are huge, it is subjective, it is difficult for it to cross the semantic gap between the underlying features and the semantic understanding of video data, and its annotation speed cannot support efficient classification and organization of video data [3]. Therefore, in recent years, researchers have focused on how to automatically acquire the semantic concepts of video data and to annotate, classify, and organize rich video data. This research has significant academic and applied value and helps to make video management techniques more complete and more efficient.

