Abstract
It is known that the human visual system (HVS) employs independent processes (distortion detection and artifact perception—also often referred to as near-threshold and suprathreshold distortion perception) to assess video quality for various distortion levels. Visual masking effects also play an important role in video distortion perception, especially within spatial and temporal textures. In this paper, a novel perception-based hybrid model for video quality assessment is presented. This simulates the HVS perception process by adaptively combining noticeable distortion and blurring artifacts using an enhanced nonlinear model. Noticeable distortion is defined by thresholding absolute differences using spatial and temporal tolerance maps that characterize texture masking effects, and this makes a significant contribution to quality assessment when the quality of the distorted video is similar to that of the original video. Characterization of blurring artifacts, estimated by computing high frequency energy variations and weighted with motion speed, is found to further improve metric performance. This is especially true for low quality cases. All stages of our model exploit the orientation selectivity and shift invariance properties of the dual-tree complex wavelet transform. This not only helps to improve the performance but also offers the potential for new low complexity in-loop application. Our approach is evaluated on both the Video Quality Experts Group (VQEG) full reference television Phase I and the Laboratory for Image and Video Engineering (LIVE) video databases. The resulting overall performance is superior to the existing metrics, exhibiting statistically better or equivalent performance with significantly lower complexity.
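The abstract describes two interacting components: a near-threshold term obtained by thresholding absolute differences against spatial and temporal texture-masking tolerance maps, and a suprathreshold blurring term driven by high-frequency energy loss weighted by motion speed, combined through a nonlinear model. The following Python sketch is only an illustrative reading of that description, not the published PVM algorithm: the flat tolerance map, the Laplacian high-pass proxy (standing in for the DT-CWT subbands), the motion weighting, and the logistic-style pooling with parameter `alpha` are all assumptions introduced here.

```python
import numpy as np

def noticeable_distortion(ref, dist, tolerance):
    """Threshold absolute differences with a per-pixel tolerance map.

    `tolerance` stands in for the spatial/temporal texture-masking maps
    mentioned in the abstract; how PVM actually builds them is not shown here.
    """
    diff = np.abs(ref.astype(np.float64) - dist.astype(np.float64))
    visible = np.maximum(diff - tolerance, 0.0)  # only supra-tolerance error counts
    return visible.mean()

def blur_artifact(ref, dist, motion_speed=1.0):
    """Proxy for blurring: loss of high-frequency energy, weighted by motion speed.

    A discrete Laplacian is used as a simple high-pass stand-in for the
    DT-CWT high-pass subbands used in the paper.
    """
    def hf_energy(frame):
        f = frame.astype(np.float64)
        lap = (-4 * f
               + np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        return np.mean(lap ** 2)

    e_ref, e_dist = hf_energy(ref), hf_energy(dist)
    loss = max(e_ref - e_dist, 0.0) / (e_ref + 1e-12)
    return motion_speed * loss

def hybrid_quality(ref, dist, tolerance, motion_speed=1.0, alpha=0.5):
    """Blend the two terms with a hypothetical logistic-style pooling."""
    d = noticeable_distortion(ref, dist, tolerance)
    b = blur_artifact(ref, dist, motion_speed)
    # Higher score = better quality; the actual PVM combination differs.
    return 1.0 / (1.0 + alpha * d + (1.0 - alpha) * b)

# Illustrative usage with random 8-bit frames and a flat tolerance map.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64))
dist = np.clip(ref + rng.normal(0, 5, (64, 64)), 0, 255)
print(hybrid_quality(ref, dist, tolerance=np.full((64, 64), 3.0)))
```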
Highlights
Assessing perceptual quality is one of the most critical yet challenging areas in image and video processing, providing a fundamental tool that will underpin the development of new perceptual compression methods.
In order to overcome the overscan effect of video displays, all videos are cropped by 20 pixels on all four sides, as suggested by the Video Quality Experts Group (VQEG); a cropping sketch is shown after these highlights.
Due to the sampling nature of the DT-CWT used in PVM, we separate each frame in the Laboratory for Image and Video Engineering (LIVE) database into two fields, as we did for the VQEG database; a field-separation sketch is also shown below.
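The first highlight above states that frames are cropped by 20 pixels on each side before assessment to discount overscan regions. A minimal NumPy sketch of that preprocessing step; the function name and the example frame size are illustrative, not taken from the paper:

```python
import numpy as np

def crop_overscan(frame: np.ndarray, border: int = 20) -> np.ndarray:
    """Discard `border` pixels on each side to avoid overscan regions."""
    return frame[border:-border, border:-border]

# Example: a 576x720 SD luma frame becomes 536x680 after cropping.
frame = np.zeros((576, 720), dtype=np.uint8)
print(crop_overscan(frame).shape)  # (536, 680)
```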
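The second highlight states that each frame is separated into two fields before applying the DT-CWT. A minimal sketch of one plausible field separation, assuming even rows form one field and odd rows the other; the source does not specify the field order:

```python
import numpy as np

def split_fields(frame: np.ndarray):
    """Separate a frame into its two interlaced fields (even and odd rows)."""
    top_field = frame[0::2, ...]
    bottom_field = frame[1::2, ...]
    return top_field, bottom_field

frame = np.arange(8 * 4).reshape(8, 4)
top, bottom = split_fields(frame)
print(top.shape, bottom.shape)  # (4, 4) (4, 4)
```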
Summary
It is known that the human visual system (HVS) employs independent processes (distortion detection and artifact perception, often referred to as near-threshold and suprathreshold distortion perception) to assess video quality for various distortion levels. A novel perception-based hybrid model for video quality assessment is presented. This simulates the HVS perception process by adaptively combining noticeable distortion and blurring artifacts using an enhanced nonlinear model. Characterization of blurring artifacts, estimated by computing high frequency energy variations and weighted with motion speed, is found to further improve metric performance. This is especially true for low quality cases. All stages of our model exploit the orientation selectivity and shift invariance properties of the dual-tree complex wavelet transform. This helps to improve the performance and offers the potential for new low complexity in-loop application.