Abstract

A new multi-scale recurrent convolutional neural network (RCNN) approach is proposed for co-salient object detection. The approach carefully separates foreground and background superpixel regions in a single image drawn from a related group of images, and uses them to train an RCNN that extracts the common salient object regions. A one-dimensional convolutional neural network (CNN) is trained on superpixels extracted from several multi-scale versions of one image in each group. The CNN output is fed into a recurrent neural network that classifies the common-object superpixel properties in the remaining images. This superpixel-feature-based RCNN training addresses two challenges: it requires only a small training set of about 38 representative images, and training on one-dimensional superpixel features is fast. The proposed approach accurately identifies and segments the common salient object in an image group even under extreme background conditions and object pose variations. Extensive evaluation on public-domain datasets, such as imagepair, iCoseg-sub and iCoseg, shows that the proposed approach delivers higher accuracy and F-measure and lower mean absolute error than several state-of-the-art approaches.
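The pipeline the abstract describes (per-superpixel one-dimensional features, a 1-D CNN, then a recurrent classifier over the superpixel sequence) can be sketched in a minimal NumPy form. All sizes, the plain tanh RNN cell, and the sigmoid output head below are illustrative assumptions, not the paper's actual architecture or trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution of a superpixel feature vector x with kernel w, plus ReLU."""
    k = len(w)
    out = np.array([x[i:i + k] @ w for i in range(len(x) - k + 1)]) + b
    return np.maximum(out, 0.0)

def rnn_step(h, x, Wh, Wx, bh):
    """Plain tanh RNN cell (illustrative stand-in for the paper's recurrent network)."""
    return np.tanh(Wh @ h + Wx @ x + bh)

# Assumed sizes: 16-dim superpixel features, kernel width 5, 8-dim hidden state.
feat_dim, kernel, hidden = 16, 5, 8
conv_w = rng.standard_normal(kernel)
conv_b = 0.0
conv_out = feat_dim - kernel + 1
Wh = rng.standard_normal((hidden, hidden)) * 0.1
Wx = rng.standard_normal((hidden, conv_out)) * 0.1
bh = np.zeros(hidden)
Wo = rng.standard_normal(hidden) * 0.1

# One image is represented as a sequence of superpixel feature vectors
# (here 20 superpixels with random features, standing in for real descriptors).
superpixels = rng.standard_normal((20, feat_dim))

h = np.zeros(hidden)
scores = []
for sp in superpixels:
    f = conv1d(sp, conv_w, conv_b)                   # 1-D CNN feature
    h = rnn_step(h, f, Wh, Wx, bh)                   # recurrent update across superpixels
    scores.append(1.0 / (1.0 + np.exp(-(Wo @ h))))   # salient / non-salient score

saliency = np.array(scores)  # one co-saliency score per superpixel
print(saliency.shape)
```

In a real system the random features would be replaced by descriptors computed from multi-scale versions of the image, and the weights would be learned from the foreground/background superpixel labels described above.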
