Abstract

5G intelligent sensor network technology realizes the perception, processing, and transmission of information. Together with computer technology and communication technology, it forms the three pillars of information technology and is an important part of the Internet of Things. A 5G smart sensor network is built by adding wireless communication modules to sensor nodes, so that a large number of stationary or mobile nodes form a wireless network through self-organization and multihop transmission.

This paper proposes a keypoint feature extraction method based on deep learning, which extracts local features around keypoints for matching. The method uses a convolutional network that is first pretrained with a Siamese network structure and then converted to a triplet network structure for further training to improve accuracy. The paper also proposes a classification method for artistic visual communication images based on multifeature extraction and classification-decision fusion. In the data preprocessing stage, a correlation alignment algorithm is applied to the datasets of the different domains (source domain and target domain) to reduce the difference in their feature distributions; a multifeature extractor is then designed to extract both the image content and the spatial information of artistic visual communication images. A multitask learning scheme jointly trains the networks on multiple datasets, which reduces overfitting of the model and alleviates the shortage of labeled samples in the target-domain dataset that would otherwise limit classification accuracy. Finally, the classification result is obtained by fusing the individual decisions through voting.
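The correlation alignment step described above can be sketched as follows. This is a minimal NumPy implementation of the standard CORAL recipe (whiten the source features, then re-color them with the target covariance), not the authors' exact code; the regularization constant `eps` and the function names are assumptions for illustration:

```python
import numpy as np

def _sym_matrix_power(m, p):
    # Fractional power of a symmetric PSD matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(m)
    vals = np.clip(vals, 1e-12, None)  # guard against tiny negative eigenvalues
    return vecs @ np.diag(vals ** p) @ vecs.T

def coral(source, target, eps=1e-5):
    """Align source features to the target feature distribution (second order).

    source, target: (n_samples, n_features) arrays from the two domains.
    Returns source features whose covariance matches the target covariance.
    """
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)  # regularized source covariance
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)  # regularized target covariance
    # Whiten with Cs^(-1/2), then re-color with Ct^(1/2).
    return source @ _sym_matrix_power(cs, -0.5) @ _sym_matrix_power(ct, 0.5)
```

After this transform, a classifier trained on the aligned source features sees second-order statistics that match the target domain, which is what reduces the distribution gap before the multifeature extractor is applied.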
The experimental results show that the advantage of this framework is that it exploits image content and spatial-structure information from both the source and target scenes, which significantly reduces the dependence on labeled samples in the target domain and improves classification performance. In this paper, a dual-channel deep residual convolutional neural network is also designed: the convolutional layers of its residual modules use hard parameter sharing, so that deep feature representations over the joint spatial-spectral dimensions are extracted automatically. The features learned by the network are transferred so as to maximize the auxiliary value of the labeled source-domain samples while avoiding the negative transfer caused by forcing transfer between unrelated samples.
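The hard-parameter-sharing idea in the dual-channel residual network can be illustrated with the sketch below: two feature channels (a hypothetical spectral vector and a hypothetical spatial vector) pass through one residual block whose weights are shared between them. Fully connected layers stand in for the paper's convolutional layers, and all names and sizes are illustrative, not the authors' architecture:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Identity-shortcut residual block: y = relu(x + W2 @ relu(W1 @ x)).
    return relu(x + w2 @ relu(w1 @ x))

rng = np.random.default_rng(1)
d = 8                                      # illustrative feature dimension
w1 = rng.normal(scale=0.1, size=(d, d))    # shared layer-1 weights
w2 = rng.normal(scale=0.1, size=(d, d))    # shared layer-2 weights

spectral = rng.normal(size=d)  # hypothetical spectral-dimension features
spatial = rng.normal(size=d)   # hypothetical spatial-dimension features

# Hard parameter sharing: both channels pass through the SAME weights,
# so gradients from both channels would update one set of parameters.
out_spec = residual_block(spectral, w1, w2)
out_spat = residual_block(spatial, w1, w2)
```

Because both channels update the same `w1` and `w2`, each channel acts as a regularizer for the other, which is the usual motivation for hard sharing in multitask setups.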
