Abstract

Humans and many animals can selectively sample the parts of the visual scene needed to carry out daily activities such as foraging and finding prey or mates. Selective attention allows them to use the brain's limited resources efficiently by deploying the sensory apparatus to collect data believed to be pertinent to the organism's current situation. Robots operating in dynamic environments are similarly exposed to a wide variety of stimuli, which they must process with limited sensory and computational resources. Computational saliency models inspired by biological studies have previously been used in robotic applications, but these models had a limited capacity to deal with dynamic environments and no capacity to reason about uncertainty when planning their sensor placement strategy. This paper generalises the traditional saliency model by using a Kalman filter estimator to describe an agent's understanding of the world. The resulting model of uncertainty allows agents to adopt a richer set of sensor deployment strategies than is possible with the winner-take-all mechanism of the traditional saliency model. The paper demonstrates three utility functions that encapsulate the perceptual states valued by the agent, each producing a distinct sensory deployment behaviour.
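The approach described above can be sketched in miniature. The snippet below is an illustrative reconstruction, not the paper's implementation: it maintains an independent scalar Kalman filter (mean and variance) per location in a saliency grid, grows uncertainty over time everywhere the sensor is not looking, and shows three hypothetical utility functions for choosing the next fixation. All names and noise parameters (`PROCESS_NOISE`, `MEAS_NOISE`) are assumptions for the sketch.

```python
import numpy as np

# Assumed noise parameters for the sketch, not taken from the paper.
PROCESS_NOISE = 0.05   # how quickly the (dynamic) world is assumed to change
MEAS_NOISE = 0.1       # observation noise of the deployed sensor

def predict(mean, var):
    """Time update: uncertainty grows at every location between observations."""
    return mean, var + PROCESS_NOISE

def update(mean, var, cell, z):
    """Measurement update at the single attended cell only."""
    k = var[cell] / (var[cell] + MEAS_NOISE)   # scalar Kalman gain
    mean, var = mean.copy(), var.copy()
    mean[cell] += k * (z - mean[cell])         # pull estimate toward observation
    var[cell] *= (1.0 - k)                     # observing a cell reduces its variance
    return mean, var

# Three illustrative utility functions over the belief state (mean, var):
utilities = {
    "max_salience": lambda m, v: int(np.argmax(m)),            # classic winner-take-all
    "max_uncertainty": lambda m, v: int(np.argmax(v)),         # look where least is known
    "optimistic": lambda m, v: int(np.argmax(m + 2 * np.sqrt(v))),  # salience/uncertainty trade-off
}
```

A winner-take-all policy always revisits the most salient cell, whereas the uncertainty-driven utilities redirect the sensor to neglected regions whose variance has grown, which is the richer behaviour the abstract refers to.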
