Abstract

Object-based attention theory holds that perception selects only the relevant objects in a scene, which are then represented for action. This paper therefore proposes a novel computational model of robotic visual perception based on the object-based attention mechanism. The model comprises three modules: pre-attentive processing, attentional selection, and perception learning. The visual scene is first segmented pre-attentively into discrete proto-objects, and the gist of the scene is identified as well. The attentional selection module simulates two types of modulation: bottom-up competition and top-down biasing. Bottom-up competition is evaluated by center-surround contrast. Given the task or scene category, the task-relevant object and one of its task-relevant features are determined from perception control rules and then used to evaluate top-down biasing. Following attentional selection, the attended object is passed to the perception learning module, which updates the existing object representations and perception control rules in long-term memory. An object representation consisting of between-object and within-object codings is built using probabilistic neural networks, and an associative memory based on a Bayesian network models the perception control rules. Two types of robotic tasks are used to test the proposed model: task-specific object detection and landmark detection.
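The bottom-up cue named above, center-surround contrast, can be illustrated with a minimal sketch: a location is salient when the mean intensity of a small center window differs strongly from that of a larger surround window. The function names and window radii here are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of center-surround contrast as a bottom-up salience cue.
# Window radii (center_r, surround_r) are illustrative choices.

def mean_intensity(img, r0, c0, radius):
    """Average intensity in a square window around (r0, c0), clipped to the image."""
    rows, cols = len(img), len(img[0])
    total, count = 0.0, 0
    for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
            total += img[r][c]
            count += 1
    return total / count

def center_surround_contrast(img, r0, c0, center_r=1, surround_r=3):
    """Salience at a location: |mean of small center window - mean of larger surround|."""
    return abs(mean_intensity(img, r0, c0, center_r)
               - mean_intensity(img, r0, c0, surround_r))

# A bright 3x3 patch on a dark background scores higher than a uniform corner.
img = [[0.1] * 9 for _ in range(9)]
for r in range(3, 6):
    for c in range(3, 6):
        img[r][c] = 0.9

print(center_surround_contrast(img, 4, 4) > center_surround_contrast(img, 0, 0))  # True
```

In a full system, this score would be computed per proto-object (e.g., averaged over the object's pixels) rather than per pixel, so that bottom-up competition operates over the segmented proto-objects rather than raw locations.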
