Abstract

With the continuous iteration of network technology and ongoing technological innovation, the handheld smart media used by college students are becoming increasingly sophisticated. As economic globalization advances, diverse ideologies and cultures spread rapidly, and the “pan-entertainment” tendency of online media may intensify. Only through government supervision and the self-discipline of the internet industry can positive values be strictly screened and upheld. To better establish a correct employment value orientation among university students and to further analyze the importance of recognizing the “pan-entertainment” behavior images of college students, this study analyzes the related technologies and basic theory of behavior recognition. After introducing several mainstream methods, the traditional dual-stream convolutional network is improved, and a weighted fusion of the feature maps carrying the temporal and spatial information extracted by the two channels is discussed. Finally, combining the R(2 + 1)D structure with a dual-stream network design, a deep-learning-based spatiotemporal convolution behavior recognition algorithm is proposed and evaluated on the UCF101 and HMDB51 datasets. The specific work is as follows: (1) the widely used video behavior classification methods proposed so far are summarized and their future development is discussed; the technical bottlenecks of existing deep-learning-based methods are then analyzed, and an efficient, stable, and accurate approach to joint spatiotemporal feature extraction and learning is explored. (2) A spatiotemporal convolutional network framework is designed: the segmentation of long videos is studied, the decision-level fusion method of the dual-stream network is improved, and the R(2 + 1)D network is restructured. The network is trained and tested on the UCF101 and HMDB51 datasets using a pretrained model. Finally, its accuracy is compared with that of existing classic algorithms; the improved accuracy demonstrates the effectiveness of the algorithm for recognizing the “pan-entertainment” behavior images of contemporary college students.
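
The abstract mentions two technical ingredients: factorized R(2 + 1)D spatiotemporal convolutions and a weighted fusion of the outputs of the spatial and temporal streams. The sketch below is a minimal PyTorch illustration of both ideas and is not the authors’ implementation; the mid-channel heuristic, kernel sizes, and the fusion weight `alpha` are assumptions.

```python
import torch
import torch.nn as nn


class R2Plus1DBlock(nn.Module):
    """Factorised spatiotemporal convolution: a (1, k, k) spatial conv followed by a (k, 1, 1) temporal conv."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # Mid-channel count follows a common R(2+1)D heuristic for matching the 3D-conv parameter budget.
        mid = (in_ch * out_ch * k * k * k) // (in_ch * k * k + out_ch * k)
        self.spatial = nn.Conv3d(in_ch, mid, kernel_size=(1, k, k), padding=(0, k // 2, k // 2), bias=False)
        self.temporal = nn.Conv3d(mid, out_ch, kernel_size=(k, 1, 1), padding=(k // 2, 0, 0), bias=False)
        self.bn1, self.bn2 = nn.BatchNorm3d(mid), nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (N, C, T, H, W)
        x = self.relu(self.bn1(self.spatial(x)))
        return self.relu(self.bn2(self.temporal(x)))


class WeightedTwoStreamFusion(nn.Module):
    """Weighted decision-level fusion of spatial (RGB) and temporal (optical-flow) stream scores."""

    def __init__(self, alpha: float = 0.6):
        super().__init__()
        self.alpha = alpha  # hypothetical fixed fusion weight; it could also be made learnable

    def forward(self, spatial_scores: torch.Tensor, temporal_scores: torch.Tensor) -> torch.Tensor:
        return self.alpha * spatial_scores + (1.0 - self.alpha) * temporal_scores


if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)              # a batch of two 16-frame RGB clips
    feats = R2Plus1DBlock(3, 64)(clip)                  # -> (2, 64, 16, 112, 112)
    fused = WeightedTwoStreamFusion()(torch.randn(2, 101), torch.randn(2, 101))
    print(feats.shape, fused.shape)                     # 101 classes, as in UCF101
```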

Highlights

  • The “pan-entertainment” culture is increasingly permeating every corner of people’s lives, and college students are the most active group among consumers of the “pan-entertainment” culture

  • Because their ideological and psychological development is not yet mature and their ability to critically evaluate “pan-entertainment” culture is limited, some college students are negatively affected by it, yet such behavior images are difficult to identify and analyze

  • With the rapid development of computer science and the modern internet’s demand for massive amounts of image and video information, recognizing the online “pan-entertainment” image behavior of contemporary college students means that the machine acquires video captured by a camera, preprocesses it, learns from it, and, combined with scene recognition, detects the actions of college students in the images, making the machine smarter and closer to human-level image understanding (a minimal pipeline sketch follows this list)
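
As a concrete illustration of the pipeline described in the last highlight (camera video, preprocessing, pretrained spatiotemporal model, action label), the following sketch uses torchvision’s `r2plus1d_18` as a stand-in backbone. The frame sampling, 112 × 112 input size, and normalization statistics are assumptions, not the paper’s exact settings.

```python
import torch
import torch.nn.functional as F
import torchvision
from torchvision.models.video import r2plus1d_18


def classify_clip(video_path: str, num_frames: int = 16) -> int:
    """Read a video, sample and normalise frames, and return the predicted class index."""
    frames, _, _ = torchvision.io.read_video(video_path, pts_unit="sec")     # (T, H, W, C), uint8
    idx = torch.linspace(0, frames.shape[0] - 1, num_frames).long()          # uniform temporal sampling
    clip = frames[idx].permute(3, 0, 1, 2).float() / 255.0                   # -> (C, T, H, W) in [0, 1]
    clip = F.interpolate(clip, size=(112, 112), mode="bilinear", align_corners=False)
    mean = torch.tensor([0.432, 0.395, 0.377]).view(3, 1, 1, 1)              # approx. Kinetics statistics
    std = torch.tensor([0.228, 0.221, 0.217]).view(3, 1, 1, 1)
    clip = (clip - mean) / std

    model = r2plus1d_18(weights="DEFAULT").eval()                            # pretrained backbone (torchvision >= 0.13)
    with torch.no_grad():
        scores = model(clip.unsqueeze(0))                                    # (1, num_classes)
    return int(scores.argmax(dim=1))
```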

Introduction

The “pan-entertainment” culture is increasingly permeating every corner of people’s lives, and college students are the most active consumers of the “pan-entertainment” culture. Wang et al. [1] adjusted and encoded depth motion maps (DMMs) into pseudo-RGB images, converting their spatial and temporal behavior information into texture information, and fused three independent ConvNets for training and recognition. Rahmani and Mian [2] proposed a view-invariant human behavior model based on deep sequence learning. The method feeds each frame of a depth image into a dedicated convolutional neural network to learn high-level features and transfers the human behavior observed in unknown views to the model. The framework calculates the position deviation of 3D skeleton joint points and uses the spatially independent nature of the joint points in the bag-of-words model to complete the vector offset and recognize human behavior. Wang et al. [3] constructed three types of dynamic depth images, namely dynamic depth images, dynamic depth normal images, and dynamic depth motion normal images, to extract behavioral features from depth image sequences. Therefore, image behavior recognition technology has become one of the important research topics for scholars at home and abroad.
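
The DMM-to-pseudo-RGB encoding attributed to Wang et al. [1] can be illustrated roughly as follows: each depth frame is projected onto three orthogonal planes, the motion energy of each projection is accumulated over time, and the three resulting depth motion maps are stacked as color channels. The sketch below is illustrative only, not the cited implementation; the bin count, depth range, and output size are hypothetical parameters.

```python
import numpy as np


def _occupancy(frame: np.ndarray, num_bins: int = 32, d_max: float = 4000.0) -> np.ndarray:
    """Binarise one (H, W) depth frame into a coarse (H, W, num_bins) occupancy grid."""
    bins = np.clip((frame / d_max * (num_bins - 1)).astype(int), 0, num_bins - 1)
    vol = np.zeros(frame.shape + (num_bins,), dtype=np.float32)
    hh, ww = np.nonzero(frame > 0)
    vol[hh, ww, bins[hh, ww]] = 1.0
    return vol


def _resize(img: np.ndarray, out_hw: tuple) -> np.ndarray:
    """Nearest-neighbour resize, to bring the three projections to a common size."""
    rows = np.linspace(0, img.shape[0] - 1, out_hw[0]).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, out_hw[1]).astype(int)
    return img[np.ix_(rows, cols)]


def dmm_pseudo_rgb(depth_video: np.ndarray, out_hw: tuple = (112, 112)) -> np.ndarray:
    """depth_video: (T, H, W) with T >= 2. Returns an (out_h, out_w, 3) pseudo-RGB 'texture' image."""
    vols = [_occupancy(f) for f in depth_video]
    # Front (H, W), side (H, D), and top (W, D) projections of each per-frame occupancy grid.
    projections = [[v.max(axis=2), v.max(axis=1), v.max(axis=0)] for v in vols]
    channels = []
    for view in range(3):
        maps = [p[view] for p in projections]
        dmm = sum(np.abs(maps[t + 1] - maps[t]) for t in range(len(maps) - 1))  # accumulated motion energy
        dmm = (dmm - dmm.min()) / (dmm.max() - dmm.min() + 1e-6)                # normalise to [0, 1]
        channels.append(_resize(dmm, out_hw))
    return np.stack(channels, axis=-1)
```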
