Abstract

Big data brings new challenges to image quality assessment (IQA). First, reference images are not always available, so full-reference IQA (FR-IQA) and reduced-reference IQA (RR-IQA) can be problematic at scale. Because it requires no pristine reference, no-reference IQA (NR-IQA) has recently received a great deal of attention. In recent years, deep convolutional neural networks (CNNs) have achieved many successes in visual recognition tasks, and studies in the literature have shown that NR-IQA metrics can be learned from features extracted by deep CNNs. However, existing CNN-based IQA models focus on local spatial information and ignore global spatial structure. In this paper, we propose an NR-IQA model learned with a multi-scale CNN (MSCNN). Multi-scale mechanisms have boosted the performance of many single-scale methods in a variety of applications. MSCNN places fully connected layers on top of multiple single-scale CNN branches, each responsible for feature extraction at a different scale, so that features extracted at multiple scales work together to assess image quality. Extensive experiments on the LIVE database validate the effectiveness of our MSCNN index compared with typical FR-IQA and state-of-the-art NR-IQA metrics. Furthermore, cross-database experiments show that MSCNN has good generalization ability.
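The fusion idea described above — per-scale feature extractors whose outputs are concatenated and passed to a fully connected regression head — can be illustrated with a minimal sketch. This is not the paper's model: simple mean/std statistics stand in for the CNN branches, and the function names, scale factors, and weights are all hypothetical.

```python
import numpy as np

def downsample(img, factor):
    """Average-pool a 2-D grayscale image by `factor` (crops to a multiple)."""
    h = (img.shape[0] // factor) * factor
    w = (img.shape[1] // factor) * factor
    img = img[:h, :w]
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def scale_features(img):
    """Stand-in for one single-scale CNN branch (hypothetical: mean and std)."""
    return np.array([img.mean(), img.std()])

def mscnn_score(img, weights, bias, scales=(1, 2, 4)):
    """Concatenate per-scale features and fuse them with a linear head,
    mimicking fully connected layers on top of multiple branches."""
    feats = np.concatenate([scale_features(downsample(img, s)) for s in scales])
    return float(feats @ weights + bias)

rng = np.random.default_rng(0)
image = rng.random((64, 64))
weights = rng.standard_normal(6) * 0.1  # 2 features x 3 scales (hypothetical)
score = mscnn_score(image, weights, bias=0.0)
```

In a real implementation each branch would be a trained convolutional sub-network and the linear head would be replaced by learned fully connected layers, but the data flow — multiple scales feeding one joint quality predictor — is the same.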
