Abstract

Mobile games have played an increasingly significant role in people's leisure lives in recent years, thanks to the rapid expansion of the gaming industry and the widespread use of mobile devices. The aesthetic quality of game images is a key factor in attracting users' interest. However, evaluating the aesthetic quality of mobile game images is difficult, since painting styles vary greatly across games and the evaluation criteria are diverse. In this paper, we propose a multi-task deep learning-based method that predicts the aesthetic quality of mobile game images across multiple dimensions (MGQA). The proposed model consists of two modules: a feature extraction module and a quality regression module. In the feature extraction module, we extract quality-aware features from intermediate layers of a deep convolutional neural network (CNN) and incorporate them into the final feature representation, allowing the model to fully exploit visual information from low to high levels. The quality regression module uses fully connected (FC) layers to map the quality-aware features into quality scores across multiple dimensions. The multi-dimensional aesthetic quality scores are trained with a multi-task learning approach, in which the quality-aware features are shared across the quality prediction tasks for the individual dimensions. Finally, we analyze several key factors that help the proposed model perform better. The experimental results indicate that our proposed method not only achieves the best performance on mobile game images, but is also applicable to natural scene images.
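The multi-task setup described in the abstract — a shared quality-aware feature vector feeding separate FC regression heads, one per aesthetic dimension — can be sketched as follows. This is an illustrative NumPy sketch, not the authors' implementation; the feature dimension, the number of aesthetic dimensions, and the single-layer heads are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 256   # assumed size of the shared quality-aware feature vector
NUM_TASKS = 4    # assumed number of aesthetic quality dimensions

# One linear regression head (FC layer) per quality dimension;
# all heads consume the same shared feature vector (multi-task sharing).
heads = [
    {"W": rng.standard_normal((FEAT_DIM, 1)) * 0.01, "b": np.zeros(1)}
    for _ in range(NUM_TASKS)
]

def predict_scores(features: np.ndarray) -> np.ndarray:
    """Map one shared feature vector to one quality score per dimension."""
    return np.array([float(features @ h["W"] + h["b"]) for h in heads])

# Stand-in for features aggregated from intermediate CNN layers.
features = rng.standard_normal(FEAT_DIM)
scores = predict_scores(features)
print(scores.shape)  # one score per aesthetic dimension
```

In a full implementation, `features` would come from concatenating pooled activations of several intermediate CNN layers, and the heads would be trained jointly so that gradients from every dimension's regression loss update the shared backbone.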
