Abstract
In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) increasingly autonomous. In this context, estimating the vehicle's position is a fundamental requirement for any application involving autonomy. However, position estimation cannot always be solved satisfactorily, even when a GPS signal is available, for instance in applications that require precision manoeuvres in complex environments. Additional sensory information should therefore be integrated into the system in order to improve accuracy and robustness. In this work, a novel vision-based simultaneous localization and mapping (SLAM) method with application to unmanned aerial vehicles is proposed. One contribution of this work is the design and development of a novel technique for estimating feature depth based on stochastic triangulation. In the proposed method, the camera is mounted on a servo-controlled gimbal that counteracts changes in the attitude of the quadcopter. This assumption simplifies the overall problem and allows it to focus on the position estimation of the aerial vehicle; the stabilized video also makes the visual feature tracking process easier. Another contribution of this work is to demonstrate that integrating very noisy GPS measurements into the system for a short initial period is enough to initialize the metric scale. The performance of the proposed method is validated through experiments with real data carried out in unstructured outdoor environments. A comparative study shows that the proposed approach outperforms related methods in terms of accuracy and computational time.
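The abstract only names the key ideas, so two minimal sketches may help make them concrete. First, a hedged illustration of feature depth estimation by triangulation from two viewpoints. The function below, its first-order uncertainty heuristic (depth error growing as the parallax angle shrinks) and its noise parameter are illustrative assumptions, not the paper's exact stochastic formulation.

    import numpy as np

    def triangulate_depth(c1, r1, c2, r2, sigma_bearing=0.005):
        # c1, c2: 3-vector camera centres; r1, r2: unit bearing rays (world frame).
        # sigma_bearing: assumed 1-sigma bearing noise in radians (illustrative).
        parallax = np.arccos(np.clip(r1 @ r2, -1.0, 1.0))
        if parallax < 1e-3:                  # rays nearly parallel: no depth information
            return np.nan, np.inf
        d = c2 - c1
        # Solve for the scalars a, b minimising ||(c1 + a*r1) - (c2 + b*r2)||
        A = np.array([[r1 @ r1, -(r1 @ r2)],
                      [r1 @ r2, -(r2 @ r2)]])
        a, _ = np.linalg.solve(A, np.array([r1 @ d, r2 @ d]))
        # First-order heuristic: depth uncertainty grows as parallax shrinks.
        sigma_depth = abs(a) * sigma_bearing / np.sin(parallax)
        return a, sigma_depth

    # Example: two views 1 m apart observing a point ~10 m ahead.
    c1, c2 = np.zeros(3), np.array([1.0, 0.0, 0.0])
    p = np.array([0.0, 0.0, 10.0])
    r1 = (p - c1) / np.linalg.norm(p - c1)
    r2 = (p - c2) / np.linalg.norm(p - c2)
    print(triangulate_depth(c1, r1, c2, r2))  # depth of 10 m plus a 1-sigma bound

Second, a sketch of how a short window of noisy GPS measurements could fix the metric scale of a monocular trajectory: a one-parameter least-squares fit between matched displacement vectors. The function name and the assumption that the visual and GPS displacements are already time-aligned are hypothetical, not taken from the paper.

    import numpy as np

    def initialize_scale(visual_disp, gps_disp):
        # visual_disp, gps_disp: (N, 3) arrays of matched displacement vectors
        # over the initial window. Minimises sum ||s * visual_i - gps_i||^2,
        # whose closed-form solution is s = sum(v_i . g_i) / sum(v_i . v_i).
        num = np.sum(visual_disp * gps_disp)
        den = np.sum(visual_disp * visual_disp)
        return num / den

Averaging over the whole initial window is what makes even very noisy GPS usable here: the per-fix noise is independent, so the scale estimate tightens as more displacement pairs accumulate.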
Highlights
There are still important problems to be solved in autonomous robotics, and simultaneous localization and mapping (SLAM) is one of them
Some additional sensory information should be integrated into the system in order to improve accuracy and robustness
A monocular camera is integrated into an Unmanned Aerial Vehicle (UAV) in order to provide visual information of the ground
Summary
There are still important problems to be solved in autonomous robotics, and simultaneous localization and mapping (SLAM) is one of them. The term SLAM refers to the process of building a map of an unknown space while simultaneously using that map to navigate through the space and track the vehicle's position. Many different kinds of sensors can be used to implement SLAM systems, for instance laser ([3,4,5]), sonar ([6,7,8]), sound sensors ([9,10]), RFID ([11,12]) or computer vision ([13,14,15]). The choice of sensor technology has a great impact on the algorithm used in SLAM and, depending on the application and other factors, each technology has its own strengths and weaknesses
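Since the summary describes SLAM only at the conceptual level, the following is a minimal, self-contained sketch of the "simultaneous" part: a single Kalman filter jointly estimating a robot position and two static landmark positions in a deliberately linear, one-dimensional world. All models and noise values here are toy assumptions chosen so that a plain Kalman filter applies exactly; they do not represent the paper's filter.

    import numpy as np

    rng = np.random.default_rng(0)

    # State x = [robot position, landmark 1, landmark 2]; one filter
    # estimates pose and map jointly, which is the essence of SLAM.
    x = np.array([0.0, 0.0, 0.0])          # initial estimate
    P = np.diag([0.01, 100.0, 100.0])      # landmarks start highly uncertain

    F = np.eye(3)                          # landmarks are static
    B = np.array([1.0, 0.0, 0.0])          # control moves only the robot
    Q = np.diag([0.05, 0.0, 0.0])          # motion noise on the robot only

    true_robot, true_lm = 0.0, np.array([5.0, 12.0])

    for step in range(20):
        u = 1.0                            # commanded forward motion
        true_robot += u + rng.normal(0, 0.2)
        # Predict
        x = F @ x + B * u
        P = F @ P @ F.T + Q
        # Update with a relative measurement to each landmark:
        # z_i = landmark_i - robot + noise
        for i, lm in enumerate(true_lm):
            H = np.zeros((1, 3))
            H[0, 0], H[0, 1 + i] = -1.0, 1.0
            R = np.array([[0.1]])
            z = np.array([lm - true_robot + rng.normal(0, 0.3)])
            y = z - H @ x                  # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(3) - K @ H) @ P

    print("estimated robot:", x[0], "true:", true_robot)
    print("estimated landmarks:", x[1:], "true:", true_lm)

Because the pose and the landmarks share one covariance matrix, each landmark observation also reduces the uncertainty in the robot's position; that cross-correlation is what makes localization and mapping genuinely simultaneous rather than two separate problems.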