Abstract

While lightweight stereo vision sensors provide detailed, high-resolution information that allows robust and accurate localization, the computational demands of such processing are doubled compared to monocular sensors. In this paper, an alternative model for pose estimation with stereo sensors is introduced, providing an efficient and precise framework for investigating system configurations and maximizing pose accuracy. Using the proposed formulation, we examine the parameters that affect accurate pose estimation and their magnitudes, and show that for standard operational altitudes of ∼50 m, a five-fold improvement in localization is reached: from ∼0.4–0.5 m with a single sensor to less than 0.1 m by taking advantage of the extended field of view of both cameras. Furthermore, this improvement is reached using cameras with reduced sensor size, which are more affordable. Hence, a dual-camera setup not only improves pose estimation but also enables the use of smaller sensors, reducing the overall system cost. Our analysis shows that even a slight modification of the camera directions improves the positional accuracy further and yields attitude angles as accurate as ±6′ (compared to ±20′). The proposed pose estimation method relieves the computational demands of traditional bundle adjustment and is easily integrated with other inertial sensors.

Highlights

  • The capabilities and availability of small unmanned aircraft and platforms have seen a dramatic rise in recent years, with quadcopters becoming an everyday mapping utility for professionals and amateurs alike (Barry et al., 2015)

  • The determination of image orientation and localization with respect to a pre-determined 3-D coordinate system is a standard photogrammetric task (Wang et al., 2019), which is often related to structure from motion (SfM) and simultaneous localization and mapping (SLAM), as well as visual odometry (VO)

  • To evaluate the implications of sensor size on the derived pose estimates, we consider an operational altitude of 50 m above ground and compare a stereo setting to a monocular case


Summary

INTRODUCTION

The capabilities and availability of small unmanned aircraft and platforms have seen a dramatic rise in recent years, with quadcopters becoming an everyday mapping utility for professionals and amateurs alike (Barry et al., 2015). Platform navigation often relies on GNSS and inertial sensors (accelerometers and gyros); but the former is sometimes unavailable (e.g., during indoor mapping or outages) and the latter is prone to drift, so vision-based navigation offers the natural complement. This is because it produces a full six-degrees-of-freedom (6DOF) motion estimate and has lower drift rates than all but the most expensive IMUs (Howard, 2008). The proposed formulation offers two main advantages over existing ones: it allows features to be used for pose estimation regardless of the number of cameras by which they are viewed or their distance from the platform, and it provides computationally efficient parameter estimation by treating the relative orientation between the sensors as a single entity. This reduces the computational demands of bundle adjustment processes and enables efficient integration with Kalman filtering for real-time applications.
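To make the last point concrete, the sketch below shows the general pattern of fusing a vision-derived pose measurement with a Kalman filter. It is a minimal, illustrative example only: the 1-D constant-velocity state, the function name `kalman_fuse`, and all noise levels are assumptions for illustration, not the paper's actual filter or state definition.

```python
import numpy as np

def kalman_fuse(measurements, dt=0.1, meas_std=0.5, accel_std=0.2):
    """Fuse a sequence of noisy 1-D position measurements (e.g., one axis of a
    vision-derived pose) with a linear Kalman filter; return filtered positions.
    """
    # State: [position, velocity]; constant-velocity transition model.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])  # we observe position only
    # Process noise driven by random accelerations of std `accel_std`.
    Q = accel_std**2 * np.array([[dt**4 / 4, dt**3 / 2],
                                 [dt**3 / 2, dt**2]])
    R = np.array([[meas_std**2]])  # measurement noise covariance
    x = np.zeros(2)                # initial state estimate
    P = np.eye(2)                  # initial state covariance
    out = []
    for z in measurements:
        # Predict step: propagate state and covariance through the motion model.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the vision-derived position measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.atleast_1d(z) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)
```

In a real stereo SLAM pipeline the state would hold the full 6DOF pose (and typically IMU biases), and the measurement model would link it to the estimated camera pose, but the predict/update structure is the same.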

RELATED WORK
METHODOLOGY
Incorporation into a SLAM scheme
ANALYSIS
Findings
CONCLUSIONS
