Abstract

Interest in using mobile autonomous agents has been growing (Weiss, G., 2000), (Kitano, K.; Asada, M.; Kuniyoshi, Y.; Noda, I. & Osawa, E., 1997) due to their capacity to gather information about their operating environment in diverse situations, from rescue to demining and security. In many of these applications the environments are inherently unstructured and dynamic, and the agents depend mostly on visual information to perceive and interact with them. In this scope, computer vision in a broad sense can be considered the key technology for deploying systems with a higher degree of autonomy, since it is the basis for activities like object recognition, navigation and object tracking. Gathering information from such environments through visual perception is an extremely processor-demanding activity with hard-to-predict execution times (Davison, J., 2005). To further complicate the situation, many of the activities carried out by mobile agents are subject to real-time requirements with different levels of criticality, importance and dynamics. For instance, the capability to detect obstacles near the agent in a timely fashion is a hard real-time activity, since failures can result in injured people or damaged equipment, whereas activities like self-localization, although important for the agent's performance, are inherently soft, since extra delays in these activities simply cause performance degradation. Therefore, the capability to process images at rates high enough to allow visually guided control or decision-making, called real-time computer vision (RTCV) (Blake, A.; Curwen, R. & Zisserman, A., 1993), plays a crucial role in the performance of mobile autonomous agents operating in open and dynamic environments. This chapter describes a new architectural solution for the vision subsystem of mobile autonomous agents that substantially improves its reactivity by dynamically assigning computational resources to the most important tasks. The vision-processing activities are broken into separate elementary real-time tasks, which are then associated with adequate real-time properties (e.g. priority, activation rate, precedence constraints). This separation prevents higher-priority tasks from being blocked by lower-priority ones and allows independent activation rates, related to the dynamics of the features or objects being processed, together with offsets that de-phase the activation instants of the tasks to further reduce their mutual interference.
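
As a concrete illustration of the decomposition described above, the sketch below models a small set of elementary vision tasks, each carrying its own priority, activation period and offset, and enumerates their de-phased activation instants. It is a minimal sketch only: the task names, periods, offsets and priorities are illustrative assumptions, not values taken from the chapter.

```python
# Illustrative sketch (not the chapter's implementation): vision processing
# split into elementary periodic real-time tasks, each with its own priority,
# activation period (matched to the dynamics of the feature it processes)
# and offset (which de-phases activations to spread the processor load).
from dataclasses import dataclass

@dataclass
class VisionTask:
    name: str        # elementary vision activity (assumed example names)
    period_ms: int   # activation period, i.e. inverse of the activation rate
    offset_ms: int   # release offset that de-phases this task's activations
    priority: int    # lower value = more important (hard before soft)

# Assumed task set: a hard obstacle-detection task at a high rate, and
# softer tracking and self-localization tasks at lower rates.
TASKS = [
    VisionTask("obstacle_detection", period_ms=40,  offset_ms=0,  priority=0),
    VisionTask("object_tracking",    period_ms=80,  offset_ms=10, priority=1),
    VisionTask("self_localization",  period_ms=200, offset_ms=20, priority=2),
]

def activations(horizon_ms: int):
    """Yield (time, task) activation events up to horizon_ms, ordered by
    release time and, on simultaneous releases, by priority."""
    events = []
    for t in TASKS:
        release = t.offset_ms
        while release < horizon_ms:
            events.append((release, t.priority, t))
            release += t.period_ms
    for time, _, task in sorted(events, key=lambda e: (e[0], e[1])):
        yield time, task

if __name__ == "__main__":
    for time, task in activations(200):
        print(f"t={time:3d} ms  activate {task.name} (priority {task.priority})")
```

Running the sketch shows that the offsets prevent the three tasks from all being released at the same instant, so the high-priority obstacle-detection task rarely contends with the soft tasks for the processor at release time.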
