Abstract

Point cloud sensors capture point cloud data, i.e., the 3D positions of points on the surface of a target object. Such data can serve as an additional input to improve 3D point-of-gaze (3D POG) estimation. However, a single sensor cannot generate point cloud data for regions that are occluded by an obstacle in front of the sensor, leaving a shadow projection with no data. This paper proposes a multi-point-cloud method that creates point cloud data over the surface of the target object, including the obscured points within the shadow projection. An eye tracker sensor provides the 3D eye positions and the 2D POG on the screen, whose origins are the center of the eye tracker and the center of the screen, respectively. These data are integrated by model fitting to cast a straight line that originates at the midpoint between the left and right pupils, passes through the 2D POG on a virtual screen, and ends where the line meets the closest point on the target object. In the performance evaluation of the proposed method, the obscured point cloud data were first successfully defined. Second, an experiment in which 4 participants freely viewed 9 test objects for 2 seconds each yielded 3D POG estimates with an average distance error of 1.09 cm.
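The core geometric step described above (a gaze ray from the pupil midpoint through the 2D POG, terminated at the nearest point of the cloud) can be sketched as follows. This is a minimal illustration only: the function name, the tolerance parameter, and the assumption that the pupils, the POG on the virtual screen, and the point cloud are already expressed in a common world frame are not taken from the paper.

```python
# Minimal sketch of the gaze-ray / point-cloud intersection step, assuming all
# inputs are already in one common world coordinate frame (an assumption, not
# the authors' exact pipeline).
import numpy as np

def estimate_3d_pog(left_pupil, right_pupil, pog_on_screen, point_cloud,
                    max_ray_distance=0.05):
    """Cast a ray from the pupil midpoint through the 2D POG on the virtual
    screen and return the point-cloud point closest to that ray."""
    cloud = np.asarray(point_cloud, dtype=float)            # (N, 3)
    origin = (np.asarray(left_pupil, dtype=float) +
              np.asarray(right_pupil, dtype=float)) / 2.0   # midpoint of pupils
    direction = np.asarray(pog_on_screen, dtype=float) - origin
    direction /= np.linalg.norm(direction)                  # unit gaze direction

    # Project every cloud point onto the ray and measure its perpendicular distance.
    vectors = cloud - origin                                 # (N, 3)
    t = np.clip(vectors @ direction, 0.0, None)              # keep points in front of the eyes
    closest_on_ray = origin + np.outer(t, direction)         # (N, 3)
    distances = np.linalg.norm(cloud - closest_on_ray, axis=1)

    idx = int(np.argmin(distances))
    if distances[idx] > max_ray_distance:
        return None   # the ray does not pass close enough to any surface point
    return cloud[idx]
```

In use, the eye positions from the tracker and the 2D POG mapped onto the virtual screen plane would be transformed into the point cloud's coordinate frame before calling such a function; the returned point is the estimated 3D POG on the object surface.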
