Abstract

Bottom-up visual attention is the automatic process that guides visual perception toward conspicuous objects in a scene. This paper develops a new object-based bottom-up attention (OBA) model for robots. The model comprises four modules: extraction of preattentive features, preattentive segmentation, estimation of space-based saliency, and estimation of proto-object-based saliency. Computationally, preattentive segmentation serves as the bridge connecting space-based saliency and object-based saliency. This paper therefore proposes a preattentive segmentation algorithm that self-determines the number of proto-objects, has low computational cost, and is robust under a variety of conditions such as noise and spatial transformations. Experimental results show that the proposed OBA model outperforms the space-based attention model and other object-based attention methods in accuracy of attentional selection, consistency across a range of noise settings, and object completion.
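The four-module pipeline described above can be sketched in simplified form. The sketch below is an illustrative assumption, not the paper's actual algorithms: the "preattentive feature" is raw intensity, segmentation is a connected-component flood fill whose component count plays the role of the self-determined number of proto-objects, space-based saliency is crude global-contrast, and proto-object saliency averages the space-based map within each proto-object.

```python
# Hypothetical sketch of the abstract's four-module OBA pipeline.
# All function names, thresholds, and saliency measures below are
# illustrative assumptions, not the paper's published algorithms.

def preattentive_features(image):
    # Module 1: in this toy sketch the only feature is pixel intensity.
    return image

def preattentive_segmentation(feat, threshold=0.5):
    # Module 2: group above-threshold pixels into proto-objects via
    # 4-connected flood fill; the number of proto-objects is
    # self-determined by the connectivity structure of the feature map.
    h, w = len(feat), len(feat[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for i in range(h):
        for j in range(w):
            if feat[i][j] > threshold and labels[i][j] == 0:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < h and 0 <= x < w
                            and labels[y][x] == 0 and feat[y][x] > threshold):
                        labels[y][x] = n
                        stack += [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    return labels, n

def space_saliency(feat):
    # Module 3: crude global-contrast saliency (pixel vs. image mean),
    # standing in for a proper centre-surround operator.
    h, w = len(feat), len(feat[0])
    mean = sum(sum(row) for row in feat) / (h * w)
    return [[abs(v - mean) for v in row] for row in feat]

def proto_object_saliency(sal, labels, n):
    # Module 4: average the space-based saliency inside each
    # proto-object, linking the two saliency representations.
    totals, counts = [0.0] * (n + 1), [0] * (n + 1)
    for srow, lrow in zip(sal, labels):
        for s, lab in zip(srow, lrow):
            if lab:
                totals[lab] += s
                counts[lab] += 1
    return {k: totals[k] / counts[k] for k in range(1, n + 1)}

# Toy 6x6 image with two bright blobs of different intensity.
img = [[0.0] * 6 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2)]:
    img[y][x] = 1.0   # brighter blob
for y, x in [(4, 4), (4, 5)]:
    img[y][x] = 0.6   # dimmer blob

feat = preattentive_features(img)
labels, n = preattentive_segmentation(feat)
sal = space_saliency(feat)
obj_sal = proto_object_saliency(sal, labels, n)
winner = max(obj_sal, key=obj_sal.get)  # most salient proto-object
```

On this toy input the segmentation finds two proto-objects without being told how many to expect, and attention is allocated to the brighter blob; the key structural point the sketch preserves is that the segmentation step mediates between the pixel-level and object-level saliency maps.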
