Abstract

A machine learning-based monocular gaze tracking method for mobile devices is proposed. A non-invasive, convenient, and low-cost gaze tracking framework is developed using our constructed convolutional neural network. This framework is applied to the 3D motion control of quadrotors: it converts the operator’s gaze attention into control intention, allowing the operator to complete flight tasks through visual interaction alone. Extensive and challenging indoor and outdoor real-world experiments and benchmark comparisons validate that the proposed system is robust and effective, even for unskilled operators. The proposed method improves the smoothness and plausibility of the quadrotor’s motion trajectory, makes the trajectory more consistent with the operator’s control intention, and makes quadrotor control more versatile, convenient, and intuitive. We release the source code of our system (https://github.com/hujavahui/Gaze_MAV) to benefit related research.
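To make the gaze-to-control mapping concrete, the following is a minimal sketch of how a 2D gaze point predicted by a gaze-tracking network could be turned into quadrotor velocity setpoints. The function name, gains, and dead-zone heuristic are illustrative assumptions, not the released implementation; see the linked repository for the authors' actual system.

```python
# Hypothetical illustration (not the authors' code): map a gaze point in
# normalized image coordinates to a simple quadrotor velocity command.
from dataclasses import dataclass


@dataclass
class VelocityCommand:
    vx: float        # forward velocity (m/s)
    vz: float        # vertical velocity (m/s)
    yaw_rate: float  # yaw rate (rad/s)


def gaze_to_command(gaze_x: float, gaze_y: float,
                    forward_speed: float = 0.5,
                    yaw_gain: float = 0.8,
                    climb_gain: float = 0.4,
                    dead_zone: float = 0.1) -> VelocityCommand:
    """Convert a gaze point in [-1, 1] x [-1, 1] (origin at the image center)
    into a velocity command: horizontal gaze offset steers yaw, vertical
    offset commands climb/descent, and forward speed is held constant."""
    # Ignore small offsets near the image center to suppress gaze jitter.
    yaw = -yaw_gain * gaze_x if abs(gaze_x) > dead_zone else 0.0
    climb = -climb_gain * gaze_y if abs(gaze_y) > dead_zone else 0.0
    return VelocityCommand(vx=forward_speed, vz=climb, yaw_rate=yaw)


if __name__ == "__main__":
    # Example: the operator looks slightly to the right and above center.
    print(gaze_to_command(0.4, -0.3))
```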
