Abstract

Eye-tracking devices have been extensively used to study human selection mechanisms and have promoted the development of computational models of visual attention, whose best-known outputs are saliency maps. Among eye trackers, wearable ones have the advantage of allowing estimation of the Point of Regard (POR) while the viewer performs natural tasks, rather than in static, experimental lab settings. Because the viewer moves, localization is necessary to collect data in a coherent reference frame. In this work we present a framework for estimating and mapping the sequence of 3D PORs collected by a wearable device in unstructured settings. The result is a three-dimensional map of gazed objects, which we call a 3D Saliency Map and which constitutes the novel contribution of this work.
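The abstract does not specify how the 3D Saliency Map is represented. A minimal sketch, assuming a simple voxel-grid representation in which each mapped 3D POR (already expressed in the common reference frame) increments the voxel it falls into, might look like this; the function name, grid bounds, and resolution are illustrative assumptions, not the paper's method:

```python
import numpy as np

def accumulate_3d_saliency(pors, bounds_min, bounds_max, resolution=0.05):
    """Hypothetical sketch: bin 3D Points of Regard (PORs) into a voxel grid.

    pors       : (N, 3) array of gaze points in the shared map frame
    bounds_min : (3,) lower corner of the mapped volume
    bounds_max : (3,) upper corner of the mapped volume
    resolution : voxel edge length in the same units as the PORs
    """
    pors = np.asarray(pors, dtype=float)
    bounds_min = np.asarray(bounds_min, dtype=float)
    bounds_max = np.asarray(bounds_max, dtype=float)

    shape = np.ceil((bounds_max - bounds_min) / resolution).astype(int)
    grid = np.zeros(shape)

    # Map each POR to voxel indices, discarding points outside the volume.
    idx = np.floor((pors - bounds_min) / resolution).astype(int)
    valid = np.all((idx >= 0) & (idx < shape), axis=1)
    for i, j, k in idx[valid]:
        grid[i, j, k] += 1

    # Normalize so voxel saliency lies in [0, 1].
    if grid.max() > 0:
        grid /= grid.max()
    return grid
```

Frequently gazed regions accumulate higher values, so thresholding the grid would recover the set of gazed objects.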
