Abstract

As one of the most widely used methods in deep learning, the convolutional neural network has powerful feature extraction and nonlinear data fitting capabilities. However, convolutional neural network methods still suffer from complex network models, long training times, excessive consumption of computing resources, slow convergence, overfitting, and classification accuracy that needs improvement. Therefore, this article proposes a dense convolutional neural network classification algorithm based on texture features for images in virtual reality videos. First, the texture feature of the image is introduced as prior information to reflect the spatial relationships between pixels and the distinctive characteristics of different types of ground features. Second, the grey level co-occurrence matrix (GLCM) is used to extract the spatial grey level correlation features of the image. Then, a Gauss Markov random field (GMRF) is used to model the statistical correlation between neighbouring pixels, and the extracted GLCM-GMRF texture feature is combined with the image intensity vector. Finally, based on DenseNet, an improved shallow dense convolutional neural network (L-DenseNet) is proposed, which compresses the network parameters and improves the feature extraction ability of the network. The experimental results show that, compared with current classification methods, this method effectively suppresses the influence of coherent speckle noise and obtains better classification results.
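The GLCM step described above can be sketched as follows. This is a minimal pure-NumPy illustration of how a co-occurrence matrix and a few classic texture statistics (contrast, energy, homogeneity) are computed for one pixel displacement; it is not the paper's implementation, and the function names are illustrative.

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Grey level co-occurrence matrix for one pixel displacement (dx, dy),
    normalised to a joint probability distribution over grey-level pairs."""
    h, w = image.shape
    P = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            P[image[y, x], image[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Three classic Haralick-style texture statistics from a GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum((i - j) ** 2 * P)          # local grey-level variation
    energy = np.sum(P ** 2)                      # uniformity of the texture
    homogeneity = np.sum(P / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# Tiny 4-level example image; each pixel holds a quantised grey level.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
contrast, energy, homogeneity = glcm_features(glcm(img))
```

In practice such statistics are computed per local window and per displacement angle, then stacked with the GMRF parameters and the intensity vector to form the feature map fed to the network.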

Highlights

  • As one of the most widely used methods in deep learning technology, convolutional neural networks have powerful feature extraction capabilities and nonlinear data fitting capabilities

  • Based on DenseNet, an improved shallow dense convolutional neural network (L-DenseNet) is proposed, which compresses the network parameters and improves the feature extraction ability of the network. The experimental results show that, compared with current classification methods, this method effectively suppresses the influence of coherent speckle noise and obtains better classification results

  • In order to effectively address network degradation caused by too many network parameters and the excessive occupation of computing and storage resources, and to balance the trade-off between network parameters and model accuracy, this paper proposes a dense convolutional neural network classification algorithm based on texture features for images in virtual reality video
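The dense connectivity that L-DenseNet inherits from DenseNet can be illustrated with a toy sketch: every layer in a dense block receives the concatenation of the block input and all earlier layer outputs, so features are reused rather than re-learned and each layer only adds a small "growth" number of new features. This is a hedged 1-D analogue, not the paper's architecture; the names `dense_block` and `growth` are illustrative, and a linear map plus ReLU stands in for a convolutional layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dense_block(x, weights):
    """DenseNet-style connectivity on 1-D feature vectors: layer l gets the
    concatenation of the block input and the outputs of layers 0..l-1."""
    features = [x]
    for W in weights:
        inp = np.concatenate(features)        # dense (skip) connections
        features.append(np.maximum(0.0, W @ inp))  # linear + ReLU layer
    return np.concatenate(features)

in_dim, growth, n_layers = 8, 4, 3
# Layer l sees in_dim + l * growth input features and emits `growth` new ones.
weights = [rng.standard_normal((growth, in_dim + l * growth))
           for l in range(n_layers)]
y = dense_block(rng.standard_normal(in_dim), weights)
# Output width grows only linearly: in_dim + n_layers * growth features.
```

Because each layer contributes only `growth` channels, a shallow dense block keeps the parameter count small, which is the property the proposed L-DenseNet exploits to compress the network.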


Summary

Related Works

Virtual reality technology involves multiple application fields [10, 11]. Image classification refers to image processing technology that extracts features from the input image with related algorithms to obtain the key features of the target and assign it to a known category. Image recognition technology that relies on the global features of the image is susceptible to illumination, occlusion, missing data, size changes, and other factors, and on more complex image data it does not achieve the desired effect. Rajagopal et al. [30] used a convolutional neural network training method to improve the accuracy of image recognition. Artificially designed low-level feature methods are cumbersome in feature extraction tasks, and the extracted features do not describe the target image well, which limits classification accuracy. In contrast, deep learning methods extract features with a strong ability to describe the target data and can achieve better classification performance. Therefore, this paper takes the texture features of the image as input and sends them to the network for training, to realize the classification of video images.

Abnormal Behaviour Detection Algorithm
Results and Discussion
Conclusion
