The rapid expansion of the global anime market is poised to redefine the entertainment industry, with a valuation of USD 31.23 billion in 2023 and a projected CAGR of 9.8% through 2030. At the heart of this growth is a vibrant community of artists and enthusiasts who face challenges such as artist scarcity and the lack of advanced tools for qualitative feedback and effective content promotion. Our study targets the core of anime creation, sketching, by evaluating the essential drawing elements and assessing their implementation in professional-quality anime portraits. We introduce an automated approach to predicting anime sketch quality using advances in deep learning. Utilizing transfer learning, we enhance three pre-trained models (MobileNetV2, ResNet50, and VGG16) with a customized dense layer, refining their capabilities for the binary classification of anime character sketches. The dataset comprises 155 images labeled 'Good' or 'Bad' to reflect sketch quality. A balanced split into training, validation, and testing subsets ensures a quantitative evaluation on data the models have not previously seen. The models, pre-trained on ImageNet and fine-tuned on our dataset, show varied sensitivity to hyperparameters, with the MobileNetV2 and ResNet50 models attaining a peak validation accuracy of 94% and a highest test accuracy of 79%, indicating their potential as robust tools for quality assessment in the anime industry. Notably, the lightweight MobileNetV2 model, with far fewer parameters than the other models, achieved the highest test accuracy.
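The abstract does not include code, but the described pipeline can be illustrated with a minimal transfer-learning sketch, assuming a TensorFlow/Keras setup: an ImageNet-pre-trained MobileNetV2 backbone is frozen and topped with a customized dense head for binary 'Good'/'Bad' classification. The image size, hidden width, dropout rate, learning rate, batch size, epoch count, and directory paths below are illustrative assumptions, not the authors' reported settings.

```python
# Minimal transfer-learning sketch (assumed setup; hyperparameters are illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input resolution, typical for ImageNet backbones

# ImageNet-pre-trained backbone without its classification head, frozen for transfer learning.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
)
base.trainable = False

# Customized dense head for binary ("Good" vs. "Bad") sketch classification.
model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # map [0, 255] pixels to [-1, 1] for MobileNetV2
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),      # assumed hidden width
    layers.Dropout(0.3),                       # assumed regularization
    layers.Dense(1, activation="sigmoid"),     # probability that a sketch is "Good"
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # assumed learning rate
    loss="binary_crossentropy",
    metrics=["accuracy"],
)

# Load the labelled sketch folders (directory names are placeholders).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "sketches/train", image_size=IMG_SIZE, batch_size=16, label_mode="binary"
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "sketches/val", image_size=IMG_SIZE, batch_size=16, label_mode="binary"
)

model.fit(train_ds, validation_data=val_ds, epochs=20)  # assumed epoch count
```

Swapping `MobileNetV2` for `ResNet50` or `VGG16` (and the matching input preprocessing) reproduces the other two backbones compared in the study under the same head and training loop.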