Abstract

Depth-image-based rendering (DIBR) is widely used in 3DTV, free-viewpoint video, and interactive 3D graphics applications. Synthetic images generated by DIBR-based systems typically contain various distortions, particularly geometric distortions induced by object dis-occlusion. Ensuring the quality of synthetic images is therefore critical to maintaining an acceptable quality of service. However, traditional 2D image quality metrics are ineffective for evaluating synthetic images because they are not sensitive to geometric distortion. In this paper, we propose a novel no-reference image quality assessment method for synthetic images based on convolutional neural networks, introducing local image saliency as prediction weights. Because of the lack of existing training data, we construct a new DIBR synthetic image dataset as part of our contribution. Experiments were conducted on both the public benchmark IRCCyN/IVC DIBR image dataset and our own dataset. The results demonstrate that our proposed metric outperforms traditional 2D image quality metrics and state-of-the-art DIBR-related metrics.

Highlights


  • In contrast to existing depth-image-based rendering (DIBR)-related metrics, which heavily rely on handcrafted features, we propose a no-reference (NR) DIBR synthetic image quality assessment method using convolutional neural networks (CNNs) and local image saliency based weighting

  • We provide the details of our experimental settings and give a performance comparison for our proposed DIBR synthetic image quality metric on the benchmark IRCCyN/IVC DIBR image dataset and our own dataset


Summary

Introduction

Most existing DIBR-related quality metrics are extensions of 2D IQA methods, assuming that DIBR synthetic images follow the same natural scene statistics (NSS) as traditional 2D images [6,7,8,9], and their improvements rely mainly on carefully designed handcrafted features. In contrast, we propose a no-reference (NR) DIBR synthetic image quality assessment method using convolutional neural networks (CNNs) and local image saliency based weighting. We exploit the power of CNNs for synthetic image feature extraction, while using the sensitivity of local image saliency to geometric distortions to refine the predicted scores.
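To make the core idea concrete, the following is a minimal sketch (not the authors' released code) of patch-wise CNN quality prediction with saliency-weighted pooling: a small CNN scores fixed-size patches, and normalized per-patch saliency estimates weight those scores when they are aggregated into a single image-level score. The network layout (`PatchQualityCNN`), the 32x32 patch size, and the use of a Sobel gradient magnitude map as a stand-in for the local saliency measure are illustrative assumptions, not the configuration described in the paper.

```python
# Sketch: patch-wise CNN quality scores pooled with local-saliency weights.
# All architectural choices here are placeholders for illustration only.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchQualityCNN(nn.Module):
    """Predicts one quality score per 32x32 grayscale patch (hypothetical layout)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(nn.Flatten(), nn.Linear(128, 1))

    def forward(self, patches):                  # patches: (N, 1, 32, 32)
        return self.regressor(self.features(patches)).squeeze(1)  # (N,)


def extract_patches(gray, patch=32):
    """Split a (1, H, W) grayscale image into non-overlapping patches."""
    c, h, w = gray.shape
    gray = gray[:, : h - h % patch, : w - w % patch]
    tiles = gray.unfold(1, patch, patch).unfold(2, patch, patch)  # (1, nh, nw, p, p)
    return tiles.reshape(-1, 1, patch, patch)


def saliency_weights(gray, patch=32, eps=1e-6):
    """Crude local-saliency proxy: mean Sobel gradient magnitude per patch."""
    gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    gy = gx.transpose(2, 3)
    g = gray.unsqueeze(0)                                       # (1, 1, H, W)
    mag = torch.sqrt(F.conv2d(g, gx, padding=1) ** 2 +
                     F.conv2d(g, gy, padding=1) ** 2)
    tiles = extract_patches(mag.squeeze(0), patch)              # (N, 1, p, p)
    w = tiles.mean(dim=(1, 2, 3))
    return w / (w.sum() + eps)                                  # normalized weights


def predict_image_score(model, gray):
    """Saliency-weighted pooling of per-patch CNN scores into one image score."""
    patches = extract_patches(gray)
    with torch.no_grad():
        patch_scores = model(patches)
    return (saliency_weights(gray) * patch_scores).sum()


if __name__ == "__main__":
    model = PatchQualityCNN().eval()
    image = torch.rand(1, 256, 320)              # placeholder grayscale image
    print(float(predict_image_score(model, image)))
```

Normalizing the weights keeps the pooled score on the same scale as the per-patch predictions regardless of image size; in the paper the saliency map would come from a dedicated local saliency measure rather than the simple gradient proxy used in this sketch.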

  • Image quality assessment
  • DIBR-related image quality assessment
  • Our approach
  • Overview
  • Network architecture
  • Optimization
  • Construction of training database
  • Subjective testing
  • Processing of raw subjective scores
  • Experimental results
  • Training implementation
  • Method
  • Cross validation
  • Ablation study
  • Preprocessing
  • Local image saliency based weighting
  • Network depth
  • Application
  • Baseline model of reference viewpoint prediction
  • Performance
  • Conclusions

