Many RGB-D saliency detection models have been proposed, most of which rely on pixel-level depth data acquired in advance by active depth sensors or dense stereo matching algorithms. However, pixel-level depth maps can degrade saliency ranking, especially when they are inconsistent or invalid within the object region. Humans tend to perceive an object as a whole and ignore the details of its surface. Inspired by this characteristic, we propose a salient object detection method based on the joint perception of region-level spatial distribution and color contrast. First, a two-stage segmentation strategy based on multifeature fusion is used to compute region-level information. Then, region-level spatial distribution maps are constructed in place of pixel-level depth maps, which helps avoid interference from dense depth information. To improve the completeness of the detected objects, color saliency maps are also computed from the regional segmentation information. A fusion strategy is then adopted so that the two kinds of information complement each other effectively, and two optimization strategies are applied to further refine the saliency ranking. Experimental results on two benchmark datasets demonstrate that the proposed method outperforms most state-of-the-art methods and is competitive with deep learning-based methods.
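The abstract does not specify the formulas behind the region-level maps or the fusion strategy, so the following is only a minimal sketch of the general idea: per-region spatial-distribution saliency and global color-contrast saliency, combined by a simple weighted fusion. The function names, the compactness and contrast heuristics, and the convex-combination weighting are all assumptions standing in for the paper's actual method, which operates on a precomputed region segmentation.

```python
# Hypothetical sketch (not the authors' implementation) of fusing a
# region-level spatial-distribution map with a color-contrast map.
import numpy as np

def spatial_distribution_saliency(labels):
    """Assumed heuristic: regions whose pixels are spatially compact
    (low coordinate variance) score higher. `labels` is an HxW array
    of region ids from a precomputed segmentation."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(labels.shape, dtype=float)
    for r in np.unique(labels):
        mask = labels == r
        # Spatial variance of the region's pixel coordinates,
        # normalized by image area.
        var = np.var(ys[mask]) + np.var(xs[mask])
        sal[mask] = 1.0 / (1.0 + var / (h * w))
    return sal

def color_contrast_saliency(image, labels):
    """Assumed heuristic: a region's saliency is the mean distance of
    its average color to the average colors of all other regions
    (a common global-contrast formulation)."""
    regions = np.unique(labels)
    means = np.array([image[labels == r].mean(axis=0) for r in regions])
    sal = np.zeros(labels.shape, dtype=float)
    for i, r in enumerate(regions):
        contrast = np.linalg.norm(means[i] - means, axis=1).mean()
        sal[labels == r] = contrast
    return sal / (sal.max() + 1e-8)

def fuse(sd_map, cc_map, alpha=0.5):
    """The paper's fusion strategy is unspecified in the abstract;
    a convex combination stands in for it here."""
    fused = alpha * sd_map + (1.0 - alpha) * cc_map
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)
```

Given an RGB image `image` (HxWx3 float array) and a segmentation `labels`, the final map would be obtained as `fuse(spatial_distribution_saliency(labels), color_contrast_saliency(image, labels))`; the paper's two optimization strategies would then refine this result further.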