Abstract

No-reference image quality assessment (NR-IQA), which aims to predict image quality without relying on a pristine reference counterpart, has developed rapidly in recent years. However, little investigation has been dedicated to the quality assessment of realistic night-time images. Existing NR-IQA algorithms struggle with the night-time scenario because complicated authentic distortions, such as low contrast, blurred details, and reduced visibility, usually appear in such images. In this paper, we propose an end-to-end NR-IQA model that meets this challenge with a multi-stream deep convolutional neural network (DCNN). Two streams, a brightness-aware CNN and a naturalness-aware CNN, are pre-trained to obtain quality-aware initializations: the former on a brightness-altered image identification task with a self-established dataset, and the latter on a quality-prediction regression task with an existing authentically-distorted IQA dataset. Since the lower layers converge quickly and undergo little transformation, a shallow-layer-shared architecture is explored to reduce computational cost. Finally, the features of the two pipelines are aggregated by an effective pooling method and concatenated as the image representation for fine-tuning. The effectiveness and efficiency of the proposed method are verified by several experiments on the NNID, CCRIQ and LIVE Challenge databases. Furthermore, its applicability to wider settings, such as contrast-distorted and driving scenarios, is demonstrated on the CID2013, CCID2014 and BDD-100k databases.
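To make the described architecture concrete, the following is a minimal PyTorch sketch of a shallow-layer-shared two-stream network with pooled and concatenated features feeding a regression head. The channel widths, layer counts, pooling choice, and class/parameter names (e.g., MultiStreamNightIQA, feat_dim) are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn


class MultiStreamNightIQA(nn.Module):
    """Sketch of a shallow-layer-shared two-stream DCNN for night-time NR-IQA (assumed layout)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Shared shallow layers: low-level features converge quickly and change
        # little across tasks, so both streams reuse them to cut computation.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

        def branch() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(128, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )

        # Brightness-aware stream: initialized via brightness-altered image identification.
        self.brightness_stream = branch()
        # Naturalness-aware stream: initialized via quality regression on authentic distortions.
        self.naturalness_stream = branch()
        # Average pooling stands in for the paper's pooling method.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regressor = nn.Sequential(
            nn.Linear(2 * feat_dim, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),  # predicted quality score
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shared = self.shared(x)
        fb = self.pool(self.brightness_stream(shared)).flatten(1)
        fn = self.pool(self.naturalness_stream(shared)).flatten(1)
        # Concatenate the two quality-aware representations before regression.
        return self.regressor(torch.cat([fb, fn], dim=1))


# Usage example: score a batch of night-time image crops.
model = MultiStreamNightIQA()
scores = model(torch.randn(4, 3, 224, 224))
print(scores.shape)  # torch.Size([4, 1])
```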
