Abstract
In this paper, we present a multi-channel convolutional neural network (CNN) for blind 360-degree image quality assessment (MC360IQA). To be consistent with the visual content of 360-degree images seen in a VR device, our model takes viewport images as input. Specifically, we project each 360-degree image into six viewport images that cover the omnidirectional visual content. By rotating the longitude of the front view, one omnidirectional image can be projected onto many different groups of viewport images, which is an efficient way to avoid overfitting. MC360IQA consists of two parts: a multi-channel CNN and an image quality regressor. The multi-channel CNN includes six parallel ResNet-34 networks, which extract features from the corresponding six viewport images. The image quality regressor fuses these features and regresses them to a final score. The results show that our model achieves the best performance among the state-of-the-art full-reference (FR) and no-reference (NR) image quality assessment (IQA) models on the available 360-degree IQA database.
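A minimal sketch of the architecture described above, assuming PyTorch and torchvision's ResNet-34 as the per-viewport backbone; the fusion and regressor layer sizes here are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34


class MultiChannelIQA(nn.Module):
    """Sketch of a six-branch viewport network with a shared quality regressor."""

    def __init__(self, feature_dim=512, num_viewports=6):
        super().__init__()
        # One ResNet-34 backbone per viewport; the final FC layer is removed so
        # each branch outputs a 512-d pooled feature vector.
        self.branches = nn.ModuleList(
            nn.Sequential(*list(resnet34(weights=None).children())[:-1])
            for _ in range(num_viewports)
        )
        # Hypothetical regressor: fuse the six feature vectors and map to one score.
        self.regressor = nn.Sequential(
            nn.Linear(feature_dim * num_viewports, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1),
        )

    def forward(self, viewports):
        # viewports: list of six tensors, each of shape (B, 3, H, W)
        feats = [branch(v).flatten(1) for branch, v in zip(self.branches, viewports)]
        fused = torch.cat(feats, dim=1)           # (B, 6 * feature_dim)
        return self.regressor(fused).squeeze(1)   # one predicted quality score per image


# Usage: six 224x224 viewport crops for a batch of two 360-degree images.
model = MultiChannelIQA()
views = [torch.randn(2, 3, 224, 224) for _ in range(6)]
scores = model(views)  # shape (2,)
```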