Abstract

Detailed processing of sensory information is a computationally demanding task. This is especially true for vision, where the amount of information provided by the sensors typically exceeds the processing capacity of the system. Rather than attempting to process all the sensory data simultaneously, an effective strategy is to focus on subregions of the input space, shifting from one subregion to the other in a serial fashion. This strategy is commonly referred to as selective attention. We present a neuromorphic active-vision system that implements a saliency-based model of selective attention. Visual data is sensed and preprocessed in parallel by a transient imager chip and transmitted to a selective-attention chip. This chip sequentially selects the spatial locations of salient regions in the vision sensor's field of view. A host computer uses the output of the selective-attention chip to drive the motors on which the imager is mounted, and to orient it toward the selected regions. The system's design framework is modular and allows the integration of multiple sensors and multiple selective-attention chips. We present experimental results showing the performance of a two-chip system in response to well-controlled test stimuli and to natural stimuli.
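The serial selection scheme the abstract describes can be illustrated with a short sketch. The following is not the paper's implementation (which is realized in analog VLSI); it is a hypothetical software analogue, assuming a precomputed 2-D saliency map, winner-take-all selection of the most salient location, and inhibition of return to force the focus to shift to the next-most-salient region on each step. The function name and parameters are illustrative, not from the source.

```python
def select_salient_regions(saliency, n_shifts, ior_radius=1):
    """Sequentially pick the n_shifts most salient locations.

    saliency: 2-D list of non-negative floats (the saliency map).
    After each selection, saliency within ior_radius of the winner
    is suppressed (inhibition of return), so the next winner-take-all
    step shifts attention to a different region.
    """
    s = [row[:] for row in saliency]  # work on a copy
    rows, cols = len(s), len(s[0])
    fixations = []
    for _ in range(n_shifts):
        # Winner-take-all: location of the global maximum.
        r, c = max(((i, j) for i in range(rows) for j in range(cols)),
                   key=lambda rc: s[rc[0]][rc[1]])
        fixations.append((r, c))
        # Inhibition of return: zero out a neighbourhood of the winner.
        for i in range(max(0, r - ior_radius), min(rows, r + ior_radius + 1)):
            for j in range(max(0, c - ior_radius), min(cols, c + ior_radius + 1)):
                s[i][j] = 0.0
    return fixations

# Example: two attentional shifts over a small hand-made saliency map.
smap = [[0.1, 0.2, 0.1, 0.0],
        [0.2, 0.9, 0.2, 0.1],
        [0.1, 0.2, 0.3, 0.1],
        [0.0, 0.1, 0.2, 0.7]]
print(select_salient_regions(smap, 2))  # -> [(1, 1), (3, 3)]
```

In the actual system, the analogous suppression and selection happen in hardware on the selective-attention chip, and the winning coordinates drive the pan-tilt motors rather than being returned to software.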
