Abstract

Salient object detection is the process of locating prominent objects in an image, and deep learning methods currently provide outstanding results in this field. One way of finding salient objects is to first obtain a bounding box for the prominent object in the image and then use that box to recover the actual shape of the salient object. In this work, we find an object bounding box using the YOLOv2 network. Next, we apply a first-level boundary correction to the bounding box predicted by the deep network. In the third step, we segment the image using a set of Gabor filters and select the segment that matches the first-level corrected box. A second-level boundary correction is then applied to the matching segment. Usually, in salient object detection, the end user plays no role in selecting the salient object. In this work, the user is given a choice to refine the salient object detected at the first level: if the first-level boundary correction is not satisfactory, the user can opt for the second-level correction. This offers a benefit over existing methods, since most saliency-map results are static and pure deep learning methods produce blurred object edges, whereas the proposed procedure yields clean object boundaries. The algorithm is tested on three datasets against four state-of-the-art methods and evaluated using the F-measure, achieving 0.86, 0.7904, and 0.745 on the ASD, ECSSD, and PASCAL-S datasets, respectively.
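To make the bounding-box-then-segment pipeline concrete, the following is a minimal sketch of how such a flow could be wired together with OpenCV. It is not the paper's implementation: the YOLOv2 detection is replaced by a hypothetical fixed box, the Gabor parameters (kernel size, sigma, wavelength) and the image filename are assumptions, and k-means clustering of Gabor responses stands in for the paper's segmentation and boundary-correction rules, which are not detailed in the abstract.

```python
import cv2
import numpy as np


def gabor_filter_bank(orientations=4, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
    """Build a small bank of Gabor kernels at evenly spaced orientations."""
    kernels = []
    for i in range(orientations):
        theta = i * np.pi / orientations
        kernels.append(cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma, psi=0))
    return kernels


def gabor_segments(gray, n_segments=4):
    """Segment the image by clustering per-pixel Gabor responses with k-means."""
    responses = [cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, k)
                 for k in gabor_filter_bank()]
    features = np.stack(responses, axis=-1).reshape(-1, len(responses)).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(features, n_segments, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return labels.reshape(gray.shape)


def select_matching_segment(segment_map, box):
    """Pick the segment with the largest overlap with the (corrected) bounding box."""
    x, y, w, h = box
    window = segment_map[y:y + h, x:x + w]
    best_label = np.bincount(window.ravel()).argmax()
    return (segment_map == best_label).astype(np.uint8)


if __name__ == "__main__":
    img = cv2.imread("example.jpg")           # assumed input image
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Hypothetical detector output: in the paper this box comes from YOLOv2
    # and is then boundary-corrected before segment matching.
    box = (50, 40, 200, 160)                  # x, y, width, height
    segments = gabor_segments(gray)
    mask = select_matching_segment(segments, box)
    cv2.imwrite("salient_mask.png", mask * 255)
```

The design point illustrated here is that the detector only has to localize the object coarsely; the texture-based segmentation is what supplies the sharp object boundary that a pure deep-network saliency map tends to blur.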
