Abstract

It is known that the human visual system (HVS) employs independent processes (distortion detection and artifact perception, often referred to as near-threshold and suprathreshold distortion perception) to assess video quality at various distortion levels. Visual masking effects also play an important role in video distortion perception, especially within spatial and temporal textures. In this paper, a novel perception-based hybrid model for video quality assessment is presented. The model simulates the HVS perception process by adaptively combining noticeable distortion and blurring artifacts using an enhanced nonlinear model. Noticeable distortion is defined by thresholding absolute differences against spatial and temporal tolerance maps that characterize texture masking effects; it contributes most to quality assessment when the quality of the distorted video is close to that of the original. Characterization of blurring artifacts, estimated by computing high-frequency energy variations and weighted by motion speed, is found to further improve metric performance, especially for low-quality cases. All stages of the model exploit the orientation selectivity and shift invariance of the dual-tree complex wavelet transform (DT-CWT). This not only improves performance but also offers the potential for new low-complexity in-loop applications. The approach is evaluated on both the Video Quality Experts Group (VQEG) Full Reference Television (FRTV) Phase I database and the Laboratory for Image and Video Engineering (LIVE) video database. The resulting overall performance is superior to that of existing metrics, exhibiting statistically better or equivalent accuracy at significantly lower complexity.

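As a rough illustration of the noticeable-distortion stage described in the abstract, the sketch below thresholds absolute differences between reference and distorted subbands against a texture-masking tolerance map and pools only the visible residual. This is a minimal sketch in Python/NumPy: the variance-based tolerance estimate, window size, and constants are illustrative assumptions rather than the PVM parameters reported in the paper, and in the full model the subbands would come from the DT-CWT rather than being passed in directly.

```python
# Minimal sketch of the noticeable-distortion idea (not the paper's exact
# PVM formulation): absolute reference/distorted differences are compared
# against a tolerance map so that error hidden by texture masking is
# ignored, and only the visible residual is pooled.
import numpy as np
from scipy.ndimage import uniform_filter


def tolerance_map(ref_subband, base=0.05, gain=0.5, size=7):
    """Crude texture-masking tolerance: busier regions tolerate more error.
    The variance-based form and all constants are illustrative assumptions."""
    local_mean = uniform_filter(ref_subband, size=size)
    local_var = uniform_filter(ref_subband ** 2, size=size) - local_mean ** 2
    return base + gain * np.sqrt(np.maximum(local_var, 0.0))


def noticeable_distortion(ref_subband, dist_subband):
    """Pool only the error that exceeds the local masking tolerance."""
    diff = np.abs(ref_subband - dist_subband)
    visible = np.maximum(diff - tolerance_map(ref_subband), 0.0)
    return float(visible.mean())
```

In the paper, this term is combined adaptively with a motion-weighted blur measure in a nonlinear pooling stage; the single score returned here only illustrates the thresholding step.
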
Highlights

  • Assessing perceptual quality is one of the most critical yet challenging areas in image and video processing, providing a fundamental tool that will underpin the development of new perceptual compression methods.

  • In order to overcome the overscan effect of video displays, all videos are cropped by 20 pixels in all four spatial directions, as suggested by the Video Quality Experts Group (VQEG); see the preprocessing sketch after this list.

  • Due to the sampling nature of the DT-CWT used in PVM, we separate each frame in the Laboratory for Image and Video Engineering (LIVE) database into two fields, as we did for the VQEG database; this step is also shown in the sketch after this list.

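The last two highlights describe two simple preprocessing steps: cropping 20 pixels from each border to avoid display overscan, and splitting each interlaced frame into its two fields before applying the DT-CWT. The following is a minimal sketch, assuming frames are stored as NumPy arrays with the row dimension first; the function names are illustrative.

```python
# Minimal preprocessing sketch for the steps mentioned above; function
# names and the NumPy frame layout (rows first) are assumptions.
import numpy as np


def crop_overscan(frame, margin=20):
    """Discard 'margin' pixels on each side to avoid the display overscan region."""
    return frame[margin:-margin, margin:-margin]


def split_fields(frame):
    """Split an interlaced frame into its top (even-row) and bottom (odd-row) fields."""
    return frame[0::2, :], frame[1::2, :]
```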

Summary

A Perception-Based Hybrid Model for Video Quality Assessment


INTRODUCTION
BACKGROUND
Subjective Databases for Objective Quality Assessment
Human Visual Perception
Perception-Based Image and Video Quality Metrics
PROPOSED ALGORITHM
Noticeable Distortion
Preliminary Results
Blurring Artifacts
Pooling Stage
Parameter Determination
RESULTS AND DISCUSSION
Average Performance Over Twofold Validation on the VQEG FRTV Phase I Database
Performance on Different Distortion Types
F-Statistics on Both VQEG and LIVE
Complexity and Latency
CONCLUSION