Abstract

Visual saliency detection, which aims to simulate the human visual system (HVS), has drawn wide attention in recent decades. Reconstruction-based models are an established approach to saliency detection; they predict unexpected regions via linear combinations or auto-encoder networks. However, these models handle images poorly because converting images into vectors discards spatial information. In this paper, a novel approach is proposed to solve this problem. Its core is a deep reconstruction model: a convolutional neural network for reconstruction stacked with an auto-encoder (CNNR for short). On the one hand, the CNN takes two-dimensional data directly as input, rather than converting each image matrix into a series of vectors as conventional reconstruction-based saliency detection methods do. On the other hand, the CNN's training is augmented with an initialization obtained from the unsupervised learning of a convolutional auto-encoder (CAE). In this way, our CNNR model can be trained on limited labeled data, with the CNN weights meaningfully initialized by the CAE rather than at random. Performance is evaluated through comprehensive experiments on four benchmark datasets, and comparisons with eight state-of-the-art saliency detection models show that our proposed deep reconstruction model outperforms most of them.
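A minimal sketch of the two-stage scheme the abstract describes, assuming PyTorch: a convolutional auto-encoder is first trained unsupervised to reconstruct unlabeled images, and its encoder weights then initialize the supervised saliency CNN. The layer sizes, channel counts, and the 1x1 prediction head below are hypothetical choices for illustration, not the paper's actual CNNR configuration.

```python
import torch
import torch.nn as nn

class CAE(nn.Module):
    """Convolutional auto-encoder: learns to reconstruct its input image."""
    def __init__(self):
        super().__init__()
        # Hypothetical channel sizes; the abstract does not specify the architecture.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class CNNR(nn.Module):
    """Saliency predictor whose convolutional trunk is initialized from the CAE."""
    def __init__(self, pretrained_encoder):
        super().__init__()
        self.encoder = pretrained_encoder   # weights carried over from the CAE
        self.head = nn.Conv2d(32, 1, 1)     # per-pixel saliency map (illustrative)

    def forward(self, x):
        return torch.sigmoid(self.head(self.encoder(x)))

# Stage 1: unsupervised pretraining on unlabeled images (reconstruction loss).
cae = CAE()
opt = torch.optim.Adam(cae.parameters(), lr=1e-3)
unlabeled = torch.rand(8, 3, 64, 64)        # stand-in for real unlabeled images
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(cae(unlabeled), unlabeled)
    loss.backward()
    opt.step()

# Stage 2: supervised fine-tuning on the (limited) labeled saliency data,
# starting from the CAE-initialized encoder instead of random weights.
model = CNNR(cae.encoder)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
labeled = torch.rand(4, 3, 64, 64)          # stand-in for labeled images
masks = torch.rand(4, 1, 64, 64)            # stand-in for ground-truth saliency maps
opt.zero_grad()
loss = nn.functional.binary_cross_entropy(model(labeled), masks)
loss.backward()
opt.step()
```

The key design point is that only the encoder is transferred: the CAE's decoder exists solely to supply a reconstruction objective during pretraining, so the supervised stage can fine-tune a trunk that already encodes useful image statistics from unlabeled data.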
