Abstract

We describe a model of synthetic vision-based perceptual attention for autonomous agents in augmented reality (AR) environments. Since virtual and physical objects coexist in such environments, agents must adaptively perceive and attend to the objects relevant to their goals. To perceive their surroundings, agents in our approach determine the currently visible objects from a scene description of the virtual and physical objects configured within the camera's viewing area. Our model then assigns a degree of attention to each perceived object based on its similarity to target objects related to the agent's goals, allowing the agent to focus on a reduced set of perceived objects according to the estimated degree of attention. Moreover, by continuously and selectively updating its perceptual memory, the model eliminates the processing load associated with previously observed objects. To demonstrate the effectiveness of our approach, we implemented an animated character that was overlaid on a miniature version of a campus in real time and that attended to the building blocks relevant to given tasks. Experiments showed that our model can reduce a character's perceptual load at any time, even as the surroundings change. Copyright © 2010 John Wiley & Sons, Ltd.
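The pipeline the abstract describes (a visibility query, similarity-scored attention, thresholded filtering, and memory-based load reduction) can be illustrated with a short sketch. The Python below is a minimal illustration under stated assumptions, not the paper's implementation: it assumes a feature-vector representation of objects and uses cosine similarity as the similarity measure, which the abstract does not specify; the names PerceivedObject, attend, PerceptualMemory, and the threshold value are all hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class PerceivedObject:
    """A virtual or physical object reported visible by the synthetic-vision step."""
    name: str
    features: dict[str, float]  # hypothetical feature vector (e.g., size, color)

def similarity(obj: PerceivedObject, target: PerceivedObject) -> float:
    """Cosine similarity between feature vectors (one plausible metric;
    the abstract does not name the similarity measure used)."""
    keys = set(obj.features) | set(target.features)
    a = [obj.features.get(k, 0.0) for k in keys]
    b = [target.features.get(k, 0.0) for k in keys]
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def attend(visible: list[PerceivedObject],
           targets: list[PerceivedObject],
           threshold: float = 0.5) -> list[tuple[PerceivedObject, float]]:
    """Assign each visible object a degree of attention (its best similarity
    to any goal-relevant target) and keep only objects above the threshold."""
    scored = [(obj, max((similarity(obj, t) for t in targets), default=0.0))
              for obj in visible]
    return [(obj, s) for obj, s in scored if s >= threshold]

class PerceptualMemory:
    """Skips objects whose attention score is unchanged since the last frame,
    approximating the load reduction for previously observed objects."""
    def __init__(self) -> None:
        self.seen: dict[str, float] = {}

    def filter_new(self, attended: list[tuple[PerceivedObject, float]]):
        fresh = [(o, s) for o, s in attended if self.seen.get(o.name) != s]
        for o, s in fresh:
            self.seen[o.name] = s
        return fresh

if __name__ == "__main__":
    # Illustrative objects: two visible buildings, one goal-relevant target.
    library = PerceivedObject("library", {"size": 0.9, "red": 0.1})
    cafe = PerceivedObject("cafe", {"size": 0.3, "red": 0.8})
    target = PerceivedObject("goal-building", {"size": 1.0, "red": 0.0})
    memory = PerceptualMemory()
    attended = attend([library, cafe], [target], threshold=0.5)
    print(memory.filter_new(attended))  # only goal-relevant, not-yet-seen objects
```

Run on each frame, attend() reduces the perceived set to goal-relevant objects, and filter_new() drops objects already processed at the same attention level, which is one plausible reading of how the model keeps per-frame perceptual load bounded as the surroundings change.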
