Abstract

Image Quality Assessment (IQA) remains a complex and challenging problem that has garnered great interest from both the research community and industry. A majority of no-reference IQA metrics leverage handcrafted features that discriminate distortions from true image structures. Recently, deep neural networks (DNNs) have been successfully applied to local image patches to estimate image quality. However, existing patch-based IQA approaches require manual patch selection and incur high computational complexity. To mitigate these issues, this paper introduces a very-deep no-reference image condition evaluator (VeNICE), which leverages very deep neural networks to model the complex relationship between visual content and perceived quality in a more global manner than existing patch-based methods. VeNICE employs a very deep convolutional neural network architecture with intrinsic image decomposition capabilities, and is trained to assess image quality from training samples spanning different distortions and degradations such as blur, Gaussian noise, and compression artifacts. Experimental results on the TID-2008 and LIVE r2 benchmark image quality datasets demonstrate that VeNICE achieves strong quality prediction performance, comparable to that of full-reference IQA methods.
