Abstract
Professor Mohamed Deriche of King Fahd University, Saudi Arabia, talks to us about the work behind the paper 'Quantifying blur in colour images using higher order singular values', page 1755.

Digital images and videos have become an essential part of our quality of experience, and they provide a source of information for many social and economic aspects of society. Every minute, more than 3 million videos are viewed on YouTube, over 500,000 photos are posted online, and more than 20 million messages are exchanged on WhatsApp, many of which contain images. Unfortunately, with this substantial amount of internet traffic and little control over content, the quality of the images and videos posted suffers the most. During the different processing stages (acquisition, transmission, compression, etc.), various kinds of artefacts are introduced. Among these, blur is the most commonly observed distortion; it arises from the limitations of acquisition equipment (such as an out-of-focus camera lens, low-light conditions, or relative movement) and from the different processing stages before final viewing by the user. More importantly, blur affects edge information, which is key to the human perception of quality.

In this Letter, we introduce a no-reference blur assessment technique for colour images. The spatial and inter-channel correlations in colour images are exploited to quantify blur efficiently, rather than relying on the traditional luminance component alone or on individual colour channels, as existing techniques do. A colour image is considered as a 3rd-order tensor and is decomposed into different 2D matrices, or unfoldings. The higher-order singular values of these unfoldings are calculated using conventional SVD. We show that the singular values of these unfoldings follow an exponentially decreasing curve, and that the decay exponent varies with the amount of blur; it is therefore used as an objective quality score (a minimal code sketch of this computation is given at the end of this section). The proposed blur metric was tested on distorted images from several publicly available databases, and the results validate its effectiveness and superiority over state-of-the-art blur assessment methods.

The proposed technique could be embedded in camera sensors to support post-capture blur removal, and could be used in multimedia applications to deliver the best quality of experience for videos to end users. As mentioned above, the proposed work could also be offered as a feature in the various applications and systems that provide multimedia content online. Quality of experience in the multimedia industry is driving technology to its best. The move towards more virtual reality applications and systems will benefit substantially from validated research on objectively quantifying the effects of different manipulations on original content. Here we deal with blur, which is an important component of the spectrum of distortions, but the work can be extended to quantifying enhancement (such as retouching in the entertainment industry), or expanded to related applications such as image forgery detection, watermarking, content authentication, etc.

The DSP research group within the EE Department at KFUPM is actively involved in developing new algorithms and systems for different multimedia applications. These include image and video quality assessment, compression, content retrieval, transmission, biometrics, crowd control, sign language recognition, etc. The work presented here was developed in collaboration with Prof. Azeddine Beghdadi, the director of the L2TI Research Lab at Univ. Paris 13. Special thanks and acknowledgements go to my student, Mr. Muhammad Qureshi, who implemented the algorithms developed here, tested them, and compared their performance with state-of-the-art techniques.
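To make the computation concrete, the sketch below illustrates the pipeline described above: the colour image is treated as a 3rd-order tensor, each mode-n unfolding is flattened into a matrix, its singular values are obtained with a conventional SVD, and the decay exponent of the exponentially decreasing singular-value curve is taken as the blur score. This is only a minimal illustration in Python with numpy, not the authors' reference implementation; the log-linear fit of the exponent and the simple averaging of the three per-unfolding exponents are assumptions made here for illustration.

    import numpy as np

    def mode_unfolding(tensor, mode):
        # Unfold a 3rd-order tensor (an H x W x 3 colour image) along `mode`
        # into a 2D matrix whose rows index that mode.
        return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

    def blur_score(image):
        # No-reference blur score from the decay of the higher-order singular values.
        image = image.astype(np.float64)
        exponents = []
        for mode in range(3):                 # the three unfoldings of the tensor
            s = np.linalg.svd(mode_unfolding(image, mode), compute_uv=False)
            s = s / s[0]                      # normalise the largest singular value to 1
            s = s[s > 1e-12]                  # drop numerically negligible values
            k = np.arange(s.size)
            # Fit s_k ~ exp(-alpha * k): alpha is the decay exponent of the
            # exponentially decreasing singular-value curve.
            alpha = -np.polyfit(k, np.log(s), 1)[0]
            exponents.append(alpha)
        # Blur smooths the image, so the singular values decay faster and the
        # average exponent rises; it can therefore serve as an objective score.
        return float(np.mean(exponents))

    # Example usage (hypothetical file name), assuming an RGB image loaded as an
    # H x W x 3 numpy array, e.g. via PIL:
    #   img = np.asarray(Image.open("photo.jpg"))
    #   print(blur_score(img))

In this sketch, a larger score indicates faster singular-value decay and hence, under the assumptions above, a blurrier image; how the exponent is mapped to the final objective quality score in the Letter should be taken from the paper itself.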