Abstract

Monocular vision-based 3D scene understanding is an integral part of many machine vision applications. The objective is to measure depth using a single RGB camera with accuracy comparable to that of dedicated depth cameras. In this regard, monocular vision-guided autonomous navigation of robots is rapidly gaining popularity among the research community. We propose an effective monocular vision-assisted method to measure the depth of an Unmanned Aerial Vehicle (UAV) from an impending frontal obstacle, followed by collision-free navigation in unknown GPS-denied environments. Our approach builds upon the fundamental principle of perspective vision that the size of an object relative to the field of view (FoV) increases as the center of projection moves closer to the object. Our contribution involves modeling the depth and realizing it through scale-invariant SURF features. Noisy depth measurements arising from external wind or turbulence of the UAV are rectified by a constant-velocity Kalman filter model. Necessary control commands are then designed from the rectified depth value to avoid the obstacle before collision. Rigorous experiments with SURF scale-invariant features reveal an overall accuracy of 88.6% across varying obstacles, in both indoor and outdoor environments.
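The two cues named in the abstract can be sketched together: depth inferred from the growth of matched feature scales under perspective projection, then smoothed by a constant-velocity Kalman filter. This is a minimal illustrative sketch, not the paper's exact model; the state layout, noise values `q` and `r`, and the helper names are assumptions.

```python
import numpy as np

def depth_from_scale(z_ref, s_ref, s_now):
    """Perspective cue: apparent feature scale grows inversely with depth,
    so depth follows from the ratio of matched feature scales.
    (Illustrative relation, not the paper's exact depth model.)"""
    return z_ref * (s_ref / s_now)

class ConstantVelocityKF:
    """1-D constant-velocity Kalman filter smoothing noisy depth readings.
    State x = [depth, depth_rate]; dt, q, r values are illustrative."""
    def __init__(self, dt=0.1, q=1e-3, r=0.25):
        self.x = np.zeros(2)                      # initial state estimate
        self.P = np.eye(2)                        # state covariance
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
        self.Q = q * np.eye(2)                    # process noise
        self.H = np.array([[1.0, 0.0]])           # we observe depth only
        self.R = np.array([[r]])                  # measurement noise

    def step(self, z):
        # Predict with the constant-velocity motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with the raw (noisy) depth measurement z
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x[0]                          # filtered depth estimate
```

For example, if a feature's scale doubles, the inferred depth halves; feeding each raw depth estimate through `ConstantVelocityKF.step` yields a smoothed value suitable for designing avoidance commands.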
