Abstract

We apply a neurobiological model of visual attention and gaze control to the automatic animation of a photorealistic virtual human head. The attention model simulates biological visual processing along the occipito-parietal pathway of the primate brain. The gaze control model is derived from motion capture of human subjects, using a high-speed video-based eye and head tracking apparatus. Given an arbitrary video clip, the model predicts the visual locations most likely to attract an observer's attention, and simulates the dynamics of eye and head movements towards these locations. Tested on 85 video clips including synthetic stimuli, video games, TV news, sports, and outdoor scenes, the model demonstrates a strong ability to saccade towards and track salient targets. The resulting autonomous virtual human animation is of photorealistic quality.
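The core loop described above (predict a saliency map from a frame, then direct gaze to the most salient location) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real attention model uses multi-scale color, intensity, and orientation feature maps, whereas here a single crude center-surround intensity difference and a winner-take-all `argmax` stand in for the whole pipeline. All function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def block_mean(img, b):
    """Coarse version of img: average over non-overlapping b x b blocks,
    then upsample back to the original resolution."""
    h, w = img.shape
    coarse = img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))
    return np.kron(coarse, np.ones((b, b)))[:h, :w]

def saliency(frame):
    """Center-surround difference (fine scale minus coarse scale): a crude
    stand-in for the model's multi-scale feature maps."""
    center = block_mean(frame, 2)
    surround = block_mean(frame, 8)
    return np.abs(center - surround)

def next_gaze_target(frame):
    """Winner-take-all: the most salient pixel becomes the next saccade target."""
    s = saliency(frame)
    return np.unravel_index(np.argmax(s), s.shape)

# Synthetic frame: dim noise plus one bright blob the model should fixate.
frame = 0.1 * rng.random((64, 64))
frame[40:44, 20:24] += 1.0
y, x = next_gaze_target(frame)
```

In the full system this target would then drive the eye/head dynamics model, which converts a sequence of such target locations into realistic saccade and head-movement trajectories for the animated head.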
