Abstract

We introduce an approach to accelerate low-level vision in robotics applications, including its formalisms and algorithms. We describe in detail the image processing and computer vision techniques that provide data reduction and feature abstraction from input data, including algorithms and implementations on a real robotic platform. Our model proves helpful in the development of behaviorally active mechanisms for the integration of multi-modal sensory features. In the current version, the algorithm allows our system to achieve real-time processing on a conventional 2.0 GHz Intel processor. This processing rate allows our robotic platform to perform tasks involving attention control, such as object tracking and recognition. The proposed solution supports complex, behaviorally cooperative, active sensory systems as well as different types of tasks, including bottom-up and top-down aspects of attention control. Although the approach is more general, we use features from visual data here to validate the proposed scheme. Our final goal is to develop an active, real-time vision system able to select regions of interest in its surroundings and to foveate (verge) robotic cameras on the selected regions as necessary. This can be performed physically or in software only (by moving the fovea region within a view of a scene). Our system is also able to keep attention on the same region as long as necessary, for example, to recognize or manipulate an object, and to eventually shift its focus of attention to another region once a task has been finished. A further contribution of our approach to feature reduction and abstraction is a moving fovea implemented in software, which can be used in situations where it is preferable to avoid moving the robot's resources (cameras).
On top of our model, based on the reduced data and on the current functional state of the robot, attention strategies can be further developed to decide, online, which is the most relevant place to attend to. Recognition tasks can also be successfully performed based on the features in this perceptual buffer. These tasks, in conjunction with tracking experiments that include motion computation, validate the proposed model and its use for data reduction and feature abstraction. As a result, the robot can use this low-level module to make control decisions, based on the information contained in its perceptual state and on the current task being executed, selecting the right actions in response to environmental stimuli.
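As an illustration of the data reduction achieved by a software moving fovea, the sketch below (our own construction, not the paper's implementation; the function name, number of levels, and window sizes are assumptions) builds a fixed number of same-resolution windows centered on the fovea point, each covering twice the area of the previous one, so the center stays sharp while the periphery is coarse:

```python
import numpy as np

def moving_fovea(image, cx, cy, levels=3, out_size=32):
    """Extract a multiresolution structure centered on pixel (cx, cy).

    Level k covers a window of side out_size * 2**k but is subsampled
    down to out_size x out_size, so all levels have the same resolution:
    level 0 is the sharp fovea, higher levels are the coarse periphery.
    """
    h, w = image.shape[:2]
    pyramid = []
    for k in range(levels):
        half = (out_size << k) // 2      # window half-width at level k
        step = 1 << k                    # subsampling stride at level k
        # clamp the window so it stays fully inside the image
        x0 = min(max(cx - half, 0), w - 2 * half)
        y0 = min(max(cy - half, 0), h - 2 * half)
        window = image[y0:y0 + 2 * half, x0:x0 + 2 * half]
        pyramid.append(window[::step, ::step].copy())
    return pyramid
```

Moving the fovea then costs only re-slicing the image at a new (cx, cy), with no camera motion; for a 256x256 input and the parameters above, the three 32x32 levels reduce the data to be processed from 65,536 pixels to 3,072.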
