Abstract
Blind image quality assessment (BIQA) has been intensively studied with machine learning techniques such as the support vector machine (SVM) and k-means clustering. Existing BIQA metrics, however, do not perform robustly across different distortion types. We believe this is because the frequently used traditional machine learning techniques rely on shallow architectures, which contain only a single layer of nonlinear feature transformation and therefore cannot faithfully mimic the mechanism by which human vision perceives image quality. Recent advances in deep neural networks (DNNs) can help solve this problem, since DNNs have been found to better capture the essential attributes of images. In this paper we therefore introduce a new Deep learning based Image Quality Index (DIQI) for blind quality assessment. Extensive experiments on the new TID2013 database confirm the effectiveness of DIQI relative to classical full-reference IQA approaches as well as state-of-the-art reduced- and no-reference ones.
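The shallow-versus-deep contrast drawn above can be sketched numerically. The snippet below is purely illustrative and is not the paper's DIQI model: the layer sizes, random weights, ReLU nonlinearity, and linear quality readout are all assumptions, chosen only to show how a single nonlinear feature transformation differs from several stacked ones.

```python
import numpy as np

def relu(x):
    # Elementwise nonlinearity used between layers.
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal(16)  # toy "image feature" vector (hypothetical)

# Shallow architecture: one single layer of nonlinear feature
# transformation, as in traditional kernel/SVM-style pipelines.
W1 = rng.standard_normal((8, 16))
shallow_features = relu(W1 @ x)

# Deep architecture: several stacked nonlinear transformations,
# each layer re-representing the previous layer's output.
W2 = rng.standard_normal((8, 8))
W3 = rng.standard_normal((4, 8))
deep_features = relu(W3 @ relu(W2 @ relu(W1 @ x)))

# A linear readout maps the learned deep features to a scalar
# quality score (hypothetical readout, not the paper's regressor).
w_out = rng.standard_normal(4)
quality_score = float(w_out @ deep_features)
print(shallow_features.shape, deep_features.shape)
```

In a trained BIQA system the weights would of course be learned from subjective quality scores rather than drawn at random; the point here is only the depth of the feature hierarchy.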