Abstract

Image quality assessment (IQA) for screen content images (SCIs) has become a research hotspot, as SCIs are ubiquitous in multimedia applications. Although the quality assessment of natural images (NIs) has developed continuously over the past few decades, few NI-oriented IQA methods can be applied directly to SCIs because of the different visual characteristics of the two image types. In this paper, we present a no-reference quality prediction approach that considers the content information of SCIs and is based on a dual-channel multi-task convolutional neural network (CNN). First, we segment an SCI into small patches and classify them as textual or pictorial patches. Then, we devise a novel dual-channel CNN to predict the quality of textual and pictorial patches. Finally, we propose an effective adaptive weighting strategy for quality score aggregation. The proposed CNN is built on an end-to-end multi-task learning framework in which an auxiliary histogram of oriented gradients (HOG) feature prediction task assists the SCI quality prediction task in learning a better mapping between an input patch and its quality score. The adaptive weighting strategy further improves the representation ability of each SCI patch. Experimental results on the two largest SCI-oriented databases demonstrate that the proposed method outperforms most state-of-the-art no-reference and full-reference IQA methods.
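
To make the dual-channel multi-task idea concrete, the sketch below (not the authors' code) shows one possible arrangement: one CNN branch per patch type (textual or pictorial), each with a quality-score head and an auxiliary HOG-regression head, trained with a weighted sum of the two losses. The patch size, layer widths, HOG dimensionality, and loss weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class QualityBranch(nn.Module):
    """One channel of the dual-channel network: a shared trunk with two heads."""

    def __init__(self, hog_dim: int = 144):  # hog_dim is an assumed descriptor length
        super().__init__()
        self.trunk = nn.Sequential(           # feature extractor for small grayscale patches (assumed 32x32)
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.quality_head = nn.Linear(64, 1)       # main task: patch quality score
        self.hog_head = nn.Linear(64, hog_dim)     # auxiliary task: predict the patch's HOG feature

    def forward(self, x):
        f = self.trunk(x)
        return self.quality_head(f).squeeze(-1), self.hog_head(f)


class DualChannelIQA(nn.Module):
    """Routes textual and pictorial patches to their own branches."""

    def __init__(self, hog_dim: int = 144):
        super().__init__()
        self.textual = QualityBranch(hog_dim)
        self.pictorial = QualityBranch(hog_dim)

    def forward(self, patch, is_textual: bool):
        branch = self.textual if is_textual else self.pictorial
        return branch(patch)


def multitask_loss(q_pred, q_true, hog_pred, hog_true, lam: float = 0.1):
    # Quality regression plus HOG regression; lam balances the auxiliary task (assumed value).
    return F.mse_loss(q_pred, q_true) + lam * F.mse_loss(hog_pred, hog_true)
```

At inference time, under this sketch, each patch's predicted score would be combined into an image-level score by a weighted average, with the weights supplied by the adaptive weighting strategy described above.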
