Abstract

Eye trackers are expected to be used in portable, daily-use devices. However, for human–computer interaction and quantitative analysis, object information must be registered and a unified coordinate system defined in advance. We therefore propose semantic 3D gaze mapping, which collects gaze information from multiple people on a unified map and detects focused objects automatically. The semantic 3D map can be reconstructed using keyframe-based semantic segmentation and structure-from-motion, and the 3D point-of-gaze can then be computed on this map. Through an experiment, we confirmed that the fixation time on a focused object can be calculated without prior information.
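
As a rough illustration of the mapping step described above, the sketch below shows how a 2D gaze sample could be projected onto a reconstructed semantic point cloud and how per-object fixation time could be aggregated. This is a minimal sketch, not the authors' implementation: the camera intrinsics, poses, point labels, and the angular-threshold heuristic for picking the 3D point-of-gaze are all assumptions made for illustration.

```python
# Sketch (not the paper's method): map 2D gaze samples onto a semantic 3D map
# and accumulate fixation time per semantic label.
# Assumed inputs: intrinsics K, per-keyframe pose (R, t) from structure-from-motion,
# a point cloud `points` with per-point semantic `labels`, and timestamped gaze samples.
import numpy as np
from collections import defaultdict


def gaze_ray(K, R, t, u, v):
    """Back-project a 2D gaze point (u, v) into a world-space ray (origin, direction)."""
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction in camera frame
    d_world = R.T @ d_cam                              # rotate into world frame
    origin = -R.T @ t                                  # camera center in world frame
    return origin, d_world / np.linalg.norm(d_world)


def point_of_gaze(origin, direction, points, max_angle_deg=1.0):
    """Pick the map point closest to the gaze ray within an angular threshold (assumed heuristic)."""
    vecs = points - origin
    dists = np.linalg.norm(vecs, axis=1)
    cos_angles = (vecs @ direction) / np.maximum(dists, 1e-9)
    candidates = np.where(cos_angles > np.cos(np.radians(max_angle_deg)))[0]
    if candidates.size == 0:
        return None
    return candidates[np.argmin(dists[candidates])]    # nearest candidate along the ray


def fixation_times(samples, K, poses, points, labels):
    """Accumulate fixation time per semantic label from (u, v, t_start, t_end) gaze samples."""
    totals = defaultdict(float)
    for (u, v, t_start, t_end), (R, t) in zip(samples, poses):
        origin, direction = gaze_ray(K, R, t, u, v)
        idx = point_of_gaze(origin, direction, points)
        if idx is not None:
            totals[labels[idx]] += t_end - t_start     # dwell time contributed by this sample
    return dict(totals)
```

Under these assumptions, the per-label totals returned by `fixation_times` would correspond to the fixation time of each focused object on the shared map, without requiring object positions to be registered beforehand.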
