Abstract

This paper presents a novel method by which an assistive aerial robot can learn the relevant camera views within a task domain by tracking the head motions of a human collaborator. The human's visual field is modeled as an anisotropic spherical sensor whose acuity decays toward the periphery, and this model is integrated over time throughout the domain. The accumulated data is resampled and fed into an expectation-maximization solver to estimate the environment's visual interest as a mixture of Gaussians. A dynamic coverage control law directs the robot to capture camera views of the peaks of these Gaussians, which are broadcast to an augmented reality display worn by the human operator. An experimental study is presented that assesses the influence of the assistive robot on reflex time, head motion, and task completion time.
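To make the pipeline concrete, the sketch below illustrates the estimation step only: time-integrated, acuity-weighted attention samples are resampled and fitted with a Gaussian mixture via EM, and the component means stand in for the peaks of visual interest that a coverage controller would target. This is a minimal illustration, not the authors' implementation; it assumes scikit-learn's `GaussianMixture`, and the synthetic points and acuity weights are placeholders for the paper's anisotropic visual-field model.

```python
# Minimal sketch: estimate visual-interest peaks from acuity-weighted attention samples.
# Synthetic data and weights stand in for the paper's time-integrated sensor model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Placeholder attention samples in the task domain (x, y, z), clustered around
# two regions the human looked at frequently.
points = np.vstack([
    rng.normal(loc=[0.0, 0.0, 1.0], scale=0.2, size=(500, 3)),
    rng.normal(loc=[2.0, 1.0, 1.5], scale=0.3, size=(500, 3)),
])

# Stand-in for foveal acuity that decays toward the periphery of the visual field.
acuity_weights = np.exp(-0.5 * rng.random(len(points)))

# Resample points in proportion to accumulated acuity, then fit a Gaussian mixture by EM.
idx = rng.choice(len(points), size=1000, p=acuity_weights / acuity_weights.sum())
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
gmm.fit(points[idx])

# Component means approximate the peaks of visual interest; a dynamic coverage
# controller would steer the robot's camera toward views of these locations.
print("Estimated viewpoints of interest:\n", gmm.means_)
```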
