Abstract

Powerful data reduction and selection processes, such as selective attention mechanisms and space-variant sensing in humans, can provide great advantages for developing effective real-time robot vision systems. The use of such processes should be closely coupled with motor capabilities in order to interact actively with the environment. In this paper, an anthropomorphic vision system architecture integrating retina-like sensing, hierarchical structures, and selective attention mechanisms is proposed. The direction of gaze is shifted based on both the sensory and semantic characteristics of the visual input, so that a task-dependent attentive behavior is produced. The sensory features currently included in the system are related to optical flow invariants, thus providing the system with motion detection capabilities. A neural network architecture for visual recognition is also included, which produces semantic-driven gaze shifts.
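As a rough illustration of the task-dependent gaze-shift idea described in the abstract, the following minimal sketch (not taken from the paper; the map names, weighting scheme, and argmax selection rule are assumptions) combines a sensory saliency map derived from optical-flow magnitude with a semantic map produced by a recognizer, and selects the next fixation point as the maximum of their weighted sum.

```python
import numpy as np

def next_fixation(flow_magnitude, semantic_map, task_weight=0.5):
    """Pick the next gaze target from a weighted sum of saliency maps.

    flow_magnitude : 2-D array of optical-flow magnitudes (sensory saliency).
    semantic_map   : 2-D array of recognition scores (semantic saliency).
    task_weight    : 0 favours purely sensory (motion-driven) shifts,
                     1 favours purely semantic (recognition-driven) shifts.
    """
    def norm(m):
        # Normalise each map to [0, 1] so the weighting is meaningful.
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m

    saliency = (1.0 - task_weight) * norm(flow_magnitude) \
               + task_weight * norm(semantic_map)
    # Return the (row, col) of the most salient location as the new gaze direction.
    return np.unravel_index(np.argmax(saliency), saliency.shape)

# Example: a region of strong motion (sensory) and a recognised object (semantic).
rng = np.random.default_rng(0)
flow = rng.random((64, 64)) * 0.1
flow[10:15, 20:25] = 1.0          # strong motion here
sem = np.zeros((64, 64))
sem[40:45, 50:55] = 1.0           # recognised target here

print(next_fixation(flow, sem, task_weight=0.2))  # motion-driven task
print(next_fixation(flow, sem, task_weight=0.8))  # recognition-driven task
```

Varying the hypothetical task_weight parameter is one simple way to express the task dependence mentioned in the abstract: a surveillance-like task would bias gaze toward motion, while a search-like task would bias it toward recognised objects.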
