Abstract

Visual homing enables an autonomous robot to return to a target (home) position using only visual information. While 2D visual homing has been widely studied, homing in 3D space has received far less attention. This paper presents a novel 3D visual homing method that can be applied to commodity Unmanned Aerial Vehicles (UAVs). First, relative camera poses are estimated from feature correspondences between current views and the reference home image. Homing vectors are then computed from these poses and used to guide the UAV toward the 3D home location. All computations can be performed in real time on mobile devices through a mobile app. To validate our approach, we conducted quantitative evaluations on widely used image-sequence datasets and performed real-world experiments on a quadcopter (a DJI Mavic Pro). Experimental results demonstrate the effectiveness of the proposed method.
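The core step described above, turning an estimated relative camera pose into a homing vector, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the relative pose (R, t) maps current-frame coordinates to home-frame coordinates (x_home = R·x_cur + t), a convention chosen here for concreteness; the function name is hypothetical.

```python
import numpy as np

def homing_vector(R, t):
    """Unit vector pointing from the current camera toward the home
    camera, expressed in the current camera frame.

    Assumes (R, t) maps current-frame to home-frame coordinates:
    x_home = R @ x_cur + t. (Illustrative convention, not the paper's.)
    """
    # The home camera center in the current frame is where x_home = 0,
    # i.e. x_cur = -R^T t.
    c_home = -R.T @ t
    return c_home / np.linalg.norm(c_home)

# Example: home camera 2 m straight ahead of the current view, no rotation.
R = np.eye(3)
t = np.array([0.0, 0.0, -2.0])   # => home center at (0, 0, 2) in current frame
print(homing_vector(R, t))       # unit vector along the +z (forward) axis
```

In practice (R, t) would come from decomposing an essential matrix estimated from the feature correspondences; note that the translation recovered this way is only known up to scale, so the homing vector gives direction, not distance.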
