Abstract

The present paper describes a vision-based simultaneous localization and mapping (SLAM) system for Unmanned Aerial Vehicles (UAVs). The main contribution of this work is a novel estimator based on an Extended Kalman Filter (EKF). The estimator fuses the measurements obtained from: (i) an orientation sensor (AHRS); (ii) a position sensor (GPS); and (iii) a monocular camera. The estimated state consists of the full state of the vehicle (position and orientation and their first derivatives) together with the locations of the landmarks observed by the camera. The position sensor is used only during an initialization period in order to recover the metric scale of the world. Afterwards, the estimated map of landmarks is used to perform fully vision-based navigation whenever the position sensor is unavailable. Experimental results obtained with both simulations and real data show the benefits of including camera measurements in the system: the estimated trajectory of the vehicle is considerably improved compared with estimates obtained using only the position sensor, whose measurements are typically low-rate and highly noisy.
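For readers implementing a similar pipeline, the following minimal sketch (Python with NumPy) illustrates the kind of EKF loop the abstract describes. It keeps only a constant-velocity position block of the state; the paper's full state additionally carries orientation, its rate, and the landmark map. All names, noise values, and matrices here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    dt = 0.02                                  # prediction step (assumed sensor rate)

    # State kept here: position (3) and velocity (3). The paper's full state
    # also includes orientation, angular rate, and landmark positions.
    x = np.zeros(6)
    P = np.eye(6)

    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)                 # constant-velocity motion model
    Q = 1e-3 * np.eye(6)                       # process noise (hypothetical tuning)

    def predict(x, P):
        return F @ x, F @ P @ F.T + Q

    def update(x, P, z, H, R):
        # Standard Kalman correction; H maps the state to the measurement.
        S = H @ P @ H.T + R
        K = np.linalg.solve(S, H @ P).T        # gain K = P H^T S^-1 (P, S symmetric)
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        return x, P

    # GPS observes position directly; per the abstract it is fused only while
    # the metric scale is being initialized, after which visual updates take over.
    H_gps = np.hstack([np.eye(3), np.zeros((3, 3))])
    R_gps = 4.0 * np.eye(3)                    # low-rate, noisy receiver (assumption)

    x, P = predict(x, P)
    x, P = update(x, P, np.array([1.0, 2.0, -0.5]), H_gps, R_gps)

AHRS and camera measurements would enter through the same update step, each with its own measurement function and Jacobian in place of H_gps.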

Highlights

  • In recent years, many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more autonomous

  • Estimating the six degrees of freedom (6-DoF) of a vehicle is a fundamental necessity for any application involving autonomy. This problem is seemingly solved by on-board Global Positioning System (GPS) receivers and Inertial Measurement Units (IMUs), or their integrated version, the Inertial Navigation System (INS)

  • The proposed visual-based SLAM system fuses visual information, attitude, and position measurements in order to accurately estimate the full state of the vehicle as well as the positions of the landmarks observed by the camera


Summary

Introduction

Many researchers have addressed the issue of making Unmanned Aerial Vehicles (UAVs) more autonomous, and cameras are a good option for SLAM systems applied to UAVs. In this work, the authors propose the use of a monocular camera looking downwards, integrated into the aerial vehicle, to provide visual information of the ground. With this information, the proposed visual-based SLAM system fuses visual, attitude, and position measurements in order to accurately estimate the full state of the vehicle as well as the positions of the landmarks observed by the camera.
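To make the camera's role concrete, the sketch below (Python/NumPy, with the same hedges as before) shows a standard pinhole measurement model: the predicted pixel at which a mapped ground landmark should appear, given the vehicle pose. The intrinsics, frame conventions, and the assumption that the camera and body frames coincide are illustrative choices, not taken from the paper.

    import numpy as np

    # Hypothetical pinhole intrinsics for the downward-looking camera.
    fx = fy = 400.0
    cx, cy = 320.0, 240.0

    def project_landmark(p_w, p_cam, R_cw):
        # R_cw rotates world vectors into the camera frame; in the paper's
        # setup it would come from the AHRS attitude (assuming the camera
        # frame coincides with the body frame).
        p_c = R_cw @ (p_w - p_cam)             # landmark in camera coordinates
        return np.array([fx * p_c[0] / p_c[2] + cx,
                         fy * p_c[1] / p_c[2] + cy])

    # Example: vehicle hovering 10 m above the ground, optical axis pointing
    # straight down (a 180-degree rotation about the x axis).
    R_cw = np.array([[1.0,  0.0,  0.0],
                     [0.0, -1.0,  0.0],
                     [0.0,  0.0, -1.0]])
    p_cam = np.array([0.0, 0.0, 10.0])
    landmark = np.array([0.5, 0.2, 0.0])       # point on the ground plane
    print(project_landmark(landmark, p_cam, R_cw))   # -> pixel near image center

In the EKF, the innovation for a visual update is the difference between a tracked feature's measured pixel and this prediction, and the Jacobian of the projection with respect to the state supplies the measurement matrix.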

Assumptions
Monocular Camera
Attitude and Heading Reference System
Sensor Fusion Approach
Problem Description
Prediction
Visual Aid
Detection of Candidate Points
Tracking of Candidate Points
Estimating Candidate Points Depth
Visual Updates and Map Management
Attitude and Position Updates
Experimental Results
Experiments with Simulations
Experiments with Real Data
Method
Conclusions