Abstract

The rapid classification of ancient murals is a pressing issue confronting scholars because of the rich content and information contained in mural images. Convolutional neural networks (CNNs) have been widely applied in computer vision owing to their excellent classification performance; however, their network architectures tend to be complex, which can lead to overfitting. To address this overfitting problem, a classification model for ancient murals was developed in this study on the basis of a pretrained VGGNet that integrates deep transfer learning with simple low-level visual features. First, a data augmentation algorithm was used to enlarge the original mural dataset. Then, transfer learning was applied to adapt the pretrained VGGNet to this dataset, and the readjusted model was used to extract high-level visual features. These features were fused with low-level features of the murals, such as color and texture, to form feature descriptors. Finally, the descriptors were fed into classifiers to obtain the classification results. On the constructed mural dataset, the precision, recall and F1-score of the proposed model were 80.64%, 78.06% and 78.63%, respectively. Comparisons with AlexNet and a traditional backpropagation (BP) network illustrate the effectiveness of the proposed method for mural image classification, and its generalization ability was demonstrated by applying it to different datasets. The proposed algorithm comprehensively considers both the high- and low-level visual characteristics of murals, consistent with human visual perception.
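The pipeline described above lends itself to a short illustrative sketch: high-level features extracted by the adapted VGGNet are concatenated with low-level color and texture descriptors to form a single feature descriptor, which is then passed to a classifier. The helper names and the choice of an SVM classifier below are assumptions made for illustration; the abstract only states that the descriptors are input into classifiers.

```python
# Hypothetical sketch of the fused-descriptor classification stage.
# `high_level`, `colour_feat` and `texture_feat` are assumed to be 1-D NumPy
# vectors produced elsewhere (VGGNet activations, color histogram, texture
# statistics); the SVM is an illustrative stand-in for "classifiers".
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def build_descriptor(high_level, colour_feat, texture_feat):
    """Fuse high- and low-level features into one descriptor vector."""
    return np.concatenate([high_level, colour_feat, texture_feat])

def train_classifier(descriptors, labels):
    """Train a classifier on the fused descriptors (n_samples x d array)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(descriptors, labels)
    return clf
```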

Highlights

  • Ancient Chinese murals have a long history and reflect the social and cultural characteristics of life at the time of their creation

  • In view of the current state of research, this study mainly investigated a means of comprehensively representing both the high- and low-level features of ancient mural images

  • Equation (2) presents the normalization process, in which the pixel count of each color grade is divided by the total number of image pixels N to obtain the final characteristic vector Hist (a hedged sketch of this computation follows this list)

  • The Visual Geometry Group network (VGGNet) is an improvement over AlexNet; VGGNet placed first in the localization task and second in the classification task of the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC)
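The exact form of Equation (2) is not reproduced on this page, but from the description it amounts to dividing the pixel count of each color grade by the total number of image pixels N, i.e. Hist(k) = n_k / N. A minimal sketch under that assumption (the bin count and per-channel RGB layout are illustrative choices, not taken from the paper):

```python
import numpy as np

def normalized_color_histogram(image, bins=16):
    """Per-channel color histogram normalized by the total pixel count N,
    following the description of Equation (2): Hist(k) = n_k / N."""
    n_pixels = image.shape[0] * image.shape[1]               # N
    channels = []
    for c in range(image.shape[2]):                          # e.g. R, G, B
        counts, _ = np.histogram(image[..., c], bins=bins, range=(0, 256))
        channels.append(counts / n_pixels)                   # divide by N
    return np.concatenate(channels)                          # feature vector Hist
```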

Summary

Introduction

Ancient Chinese murals have a long history and reflect the social and cultural characteristics of life at the time of their creation. In this study, we modified the VGGNet model through transfer learning to obtain high-level features of murals.
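A minimal sketch of how such a transfer-learning adjustment is commonly implemented, assuming PyTorch/torchvision (the framework and the number of mural categories below are assumptions, not taken from the paper): the ImageNet-pretrained convolutional layers are reused as a high-level feature extractor, and the final fully connected layer is replaced to match the mural classes before fine-tuning on the mural dataset.

```python
# Hedged sketch: adapt an ImageNet-pretrained VGG-16 for mural classification.
import torch.nn as nn
from torchvision import models

NUM_MURAL_CLASSES = 6  # hypothetical number of mural categories

def build_transfer_vgg(num_classes=NUM_MURAL_CLASSES):
    model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    for param in model.features.parameters():
        param.requires_grad = False                   # freeze convolutional layers
    model.classifier[6] = nn.Linear(4096, num_classes)  # replace the output layer
    return model
```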
