Modeling 3D visual saliency has attracted considerable attention with the development of emerging 3D display technologies. Traditional methods relying on low-level features may not be effective in interpreting 3D visual content from a high-level semantic perspective. Despite numerous efforts dedicated to this area, existing 3D visual saliency detection methods do not fully exploit the stereoscopic image saliency driven by the intra-view and inter-view dependencies between the left and right views. In this paper, we propose a visual saliency detection method for stereoscopic images based on adaptive viewpoint feature enhancement via binocular vision. Specifically, the correlation between the left and right views is exploited through a carefully designed binocular stereoscopic saliency feature aggregation module, yielding saliency features that are more representative of binocular vision. Subsequently, to further aggregate the saliency features at multiple scales, we design a progressive attention-based saliency feature pyramid extraction module that effectively integrates features from the top level to the bottom level following the network hierarchy. Saliency maps for stereoscopic images are ultimately produced from the resulting saliency features. In addition, we build a stereoscopic image saliency dataset (SIS-3D) comprising 1086 stereoscopic image pairs with diverse content and their corresponding human eye-fixation annotations, aiming to further facilitate research on visual saliency detection for stereoscopic images. Extensive experiments demonstrate that our proposed method improves CC (the linear correlation coefficient, a standard saliency evaluation metric) by an average of 4.02% over representative counterparts on the newly built saliency dataset and another publicly available dataset.
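The abstract does not specify how the binocular aggregation module is realized; the following is a minimal, hypothetical PyTorch sketch of one common way to fuse left- and right-view features with cross-view attention, illustrating the general idea of inter-view dependency modeling rather than the authors' actual architecture. The class name, layer choices, and scaled dot-product formulation are all assumptions for illustration.

```python
import torch
import torch.nn as nn


class CrossViewAggregation(nn.Module):
    """Hypothetical sketch: fuse left/right feature maps via cross-view
    attention, so left-view features attend to correlated right-view
    regions. Not the paper's exact module."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels, kernel_size=1)
        self.key = nn.Conv2d(channels, channels, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # 1x1 fusion of the original left features with the attended
        # right-view context (inter-view dependency).
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        b, c, h, w = left.shape
        q = self.query(left).flatten(2).transpose(1, 2)         # (B, HW, C)
        k = self.key(right).flatten(2)                          # (B, C, HW)
        attn = torch.softmax(q @ k / c ** 0.5, dim=-1)          # (B, HW, HW)
        v = self.value(right).flatten(2).transpose(1, 2)        # (B, HW, C)
        cross = (attn @ v).transpose(1, 2).reshape(b, c, h, w)  # right->left
        return self.fuse(torch.cat([left, cross], dim=1))


# Toy usage on small feature maps (full-resolution HWxHW attention is
# quadratic in memory, so such modules are typically applied to
# downsampled backbone features).
out = CrossViewAggregation(64)(torch.randn(1, 64, 32, 32),
                               torch.randn(1, 64, 32, 32))
```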