Abstract

Reliably predicting video quality as perceived by humans remains challenging and is of high practical relevance. A significant research trend is to investigate visual saliency and its implications for video quality assessment. Fundamental problems regarding how to acquire reliable eye-tracking data for the purpose of video quality research, and how saliency should be incorporated into objective video quality metrics (VQMs), remain largely unsolved. In this paper, we propose a refined methodology for reliably collecting eye-tracking data, which essentially eliminates the bias induced by each subject having to view multiple variations of the same scene in a conventional experiment. We performed a large-scale eye-tracking experiment involving 160 human observers and 160 video stimuli distorted with different distortion types at various degradation levels. The measured saliency was integrated into several of the best-known VQMs in the literature. With the reliability of the saliency data assured, we thoroughly assessed the capability of saliency to improve the performance of VQMs, and devised a novel approach for the optimal use of saliency in VQMs. We also evaluated to what extent state-of-the-art computational saliency models can improve VQMs, compared to the improvement achieved by using "ground truth" eye-tracking data. The eye-tracking database is made publicly available to the research community.
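The paper does not reproduce its integration formulas in this excerpt, but a common way to incorporate measured saliency into a VQM is saliency-weighted spatial pooling: a per-pixel quality (or distortion) map is averaged with fixation-density values as weights, so errors in salient regions count more. The sketch below is a minimal illustration of that general idea, not the authors' specific method; the function name and the plain-list representation of the maps are assumptions for the example.

```python
def saliency_weighted_pooling(quality_map, saliency_map, eps=1e-8):
    """Pool a per-pixel quality map using a saliency map as weights.

    Both arguments are 2-D lists of floats with identical shapes.
    saliency_map holds non-negative fixation densities (e.g. from an
    eye-tracking heat map); eps guards against an all-zero map.
    """
    weighted_sum = sum(
        q * s
        for q_row, s_row in zip(quality_map, saliency_map)
        for q, s in zip(q_row, s_row)
    )
    total_weight = sum(s for row in saliency_map for s in row)
    return weighted_sum / (total_weight + eps)


# Toy check: with a uniform quality map, the pooled score equals that
# uniform value regardless of the saliency distribution.
quality = [[0.8, 0.8], [0.8, 0.8]]
saliency = [[0.1, 0.9], [0.5, 0.2]]
score = saliency_weighted_pooling(quality, saliency)
```

In practice the quality map would come from a full-reference metric (e.g. a per-pixel SSIM map) and the saliency map from the fixation data collected in the experiment.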

Highlights

  • We aim to provide accurate quantitative evidence, by means of an exhaustive statistical evaluation, of the extent to which saliency can benefit video quality metrics (VQMs), depending on the distortion types assessed and the VQMs used

  • We thoroughly evaluate to what extent state-of-the-art saliency models can improve the performance of VQMs, compared to the improvement achieved by using eye-tracking data
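Evaluations of this kind conventionally compare metric scores against subjective mean opinion scores (MOS) using rank correlation: if saliency weighting raises the Spearman correlation with MOS, the metric's monotonic agreement with human judgments has improved. The snippet below is a minimal, assumption-laden sketch of that comparison; the MOS and metric values are invented toy numbers, and the tie-free rank-correlation helper is a simplification of the full Spearman statistic.

```python
def spearman_rho(x, y):
    """Spearman rank correlation for equal-length samples without ties."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))


# Hypothetical data: subjective MOS for six stimuli, and scores from a
# VQM with and without saliency weighting (all values are made up).
mos = [4.1, 3.2, 2.5, 1.8, 4.5, 3.0]
vqm_plain = [0.90, 0.75, 0.60, 0.40, 0.95, 0.70]
vqm_saliency = [0.92, 0.78, 0.55, 0.35, 0.97, 0.72]

rho_plain = spearman_rho(vqm_plain, mos)
rho_saliency = spearman_rho(vqm_saliency, mos)
```

The paper's actual evaluation is far more extensive (160 observers, 160 stimuli, multiple VQMs and distortion types, with statistical significance testing), but the comparison of correlation coefficients follows this basic pattern.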

Summary

Introduction

The last few decades have witnessed a phenomenal growth in the use of digital videos in our everyday lives. Video signals are vulnerable to distortion due to causes such as acquisition errors, data compression, noisy transmission channels and the limitations of rendering devices. The video content ultimately received or consumed by the end user varies widely in perceived quality depending on the application. Reduced video quality may impair viewers' visual experiences or lead to interpretation mistakes in video-based inspection tasks. Finding ways to effectively control and improve video quality has become a focal concern in both academia and industry [1]. Video quality metrics (VQMs), which represent computational models for automatic assessment of perceived video


