Abstract

Mobile robots have to deal with an enormous amount of visual data containing static and dynamic stimuli. Depending on the task, only small portions of a scene are relevant. Artificial attention systems filter information at early stages. Among the various methods proposed to implement such systems, the region-based approach has proven to be robust and especially suited for integrating top-down influences. This concept was recently transferred to the spatiotemporal domain to obtain motion saliency. A full-featured integration of the spatial and spatiotemporal systems is presented here. We propose a biologically inspired two-stream system, which allows the use of different spatial and temporal resolutions and the extraction of spatiotemporal saliency at early stages. We compare the output to classic models and demonstrate the flexibility of the integrated approach in different experiments. These include online processing of continuous input, a task similar to thumbnail extraction, and a top-down task of selecting specific moving and non-moving objects.
