Abstract
Effective visual attention modeling is a key factor in enhancing the overall Quality of Experience (QoE) of VR/AR data. Although a large number of algorithms have been developed in recent years to detect salient regions in flat 2D images, research on 360-degree image saliency remains limited. In this study, we propose a superpixel-level saliency detection model for 360-degree images based on the figure-ground law of Gestalt theory. First, the input image is segmented into superpixels. The CIE Lab color space is then used to extract perceptual features: luminance and texture features are extracted from the L channel, while color features are extracted from the a and b channels. Following the figure-ground law of Gestalt theory, we compute two components for saliency prediction: feature contrast and boundary connectivity. The feature contrast is computed at the superpixel level from the luminance and color features. The boundary connectivity serves as a background measure and describes the spatial layout of an image region with respect to two image boundaries (the upper and lower boundaries). The final saliency map of the 360-degree image is obtained by fusing the feature contrast map and the boundary connectivity map. Experimental results on a public eye-tracking database of 360-degree images show promising saliency prediction performance for the proposed method.
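The two-component fusion described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' exact formulation: the Gaussian spatial weighting, the boundary-connectivity ratio (boundary length over the square root of region area), and the exponential suppression of boundary-connected regions are common choices assumed here, and all function names, field names, and parameter values are hypothetical.

```python
import math

def feature_contrast(superpixels, sigma=0.25):
    """Contrast of each superpixel against all others in Lab feature space,
    weighted by spatial proximity (closer regions contribute more)."""
    scores = []
    for i, sp in enumerate(superpixels):
        total = 0.0
        for j, other in enumerate(superpixels):
            if i == j:
                continue
            # Euclidean distance between mean Lab features (luminance + color)
            d_feat = math.dist(sp["lab"], other["lab"])
            # Gaussian falloff with distance between superpixel centroids
            d_pos = math.dist(sp["centroid"], other["centroid"])
            total += d_feat * math.exp(-d_pos ** 2 / (2 * sigma ** 2))
        scores.append(total)
    return scores

def boundary_connectivity(sp):
    """How strongly a region touches the upper/lower image boundary;
    large values indicate background under the figure-ground law."""
    return sp["boundary_len"] / math.sqrt(sp["area"])

def saliency(superpixels, sigma_b=1.0):
    """Fuse the two maps: contrast attenuated for boundary-connected regions."""
    contrast = feature_contrast(superpixels)
    sal = []
    for sp, c in zip(superpixels, contrast):
        bc = boundary_connectivity(sp)
        sal.append(c * math.exp(-bc ** 2 / (2 * sigma_b ** 2)))
    peak = max(sal) or 1.0  # normalize to [0, 1]
    return [s / peak for s in sal]
```

For example, a superpixel with zero boundary length keeps its full contrast score, while one lying along the lower image boundary has its score suppressed, reflecting the background measure described above.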