Abstract

This paper presents a concept and first experiments on a keyframe-based incremental bundle adjustment for real-time structure-and-motion estimation in an unknown scene. To avoid periodic batch steps, we use the iSAM2 software for sparse nonlinear incremental optimization, which is highly efficient thanks to incremental variable reordering and fluid relinearization. We adapted the software to allow for (1) multi-view cameras, by taking the rigid transformation between the cameras into account, (2) omnidirectional cameras, as it can handle arbitrary bundles of rays, and (3) scene points at infinity, which improve the estimation of the camera orientation, as points at the horizon can be observed over long periods of time. The real-time bundle adjustment refers to sets of keyframes (one frame per camera, taken in a synchronized way) that are initiated whenever a minimal geometric distance to the last keyframe set is exceeded. It uses interest points in the keyframes as observations; these are tracked in the synchronized video streams of the individual cameras and, where possible, matched across the cameras. First experiments show the potential of the incremental bundle adjustment with respect to its time requirements. Our experiments are based on a multi-camera system with four fisheye cameras, mounted on a UAV as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view.
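The keyframe-initiation rule described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation; the threshold value and all function names are assumptions.

```python
import math

# Assumed minimal distance (in metres) between keyframe sets; the paper
# only states that a "minimal geometric distance" threshold is used.
KEYFRAME_DISTANCE = 0.5

def euclidean_distance(p, q):
    """Euclidean distance between two 3D positions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def should_create_keyframe(current_position, last_keyframe_position,
                           threshold=KEYFRAME_DISTANCE):
    """True once the motion since the last keyframe set exceeds the threshold."""
    return euclidean_distance(current_position, last_keyframe_position) >= threshold

# Usage: positions would come from the current ego-motion estimate.
last_kf = (0.0, 0.0, 0.0)
print(should_create_keyframe((0.6, 0.0, 0.0), last_kf))  # moved 0.6 m -> True
print(should_create_keyframe((0.1, 0.1, 0.0), last_kf))  # moved ~0.14 m -> False
```

Triggering keyframes by travelled distance rather than by time keeps the number of optimized variables proportional to the covered trajectory, independent of how long the UAV hovers in place.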

Highlights

  • The presented work is part of the visual odometry package within the DFG-project Mapping on Demand (MoD) at the University of Bonn and the Technical University of Munich, in which we use a lightweight autonomously navigating unmanned aerial vehicle (UAV)

  • In this project we use a quadrocopter equipped with a GPS unit, an IMU, ultrasonic sensors, a 3D laser scanner, a high-resolution camera and four fisheye cameras, which are mounted as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view; see Fig. 1

  • In this paper we focus on our concept for the visual odometry with a multi-view camera system consisting of omnidirectional cameras using the iSAM2 algorithm for a keyframe-based incremental real-time bundle adjustment


Introduction

The presented work is part of the visual odometry package within the DFG project Mapping on Demand (MoD) at the University of Bonn and the Technical University of Munich, in which we use a lightweight autonomously navigating unmanned aerial vehicle (UAV). The on-board sensing of lightweight UAVs has to be designed with regard to their limitations in size and weight, and the limited on-board processing power requires highly efficient algorithms. In this project we use a quadrocopter equipped with a GPS unit, an IMU, ultrasonic sensors, a 3D laser scanner, a high-resolution camera and four fisheye cameras, which are mounted as two stereo pairs, one looking ahead and one looking backwards, providing a large field of view, see Fig. 1. The two stereo camera pairs are used (a) together with the ultrasonic sensors and the laser scanner for obstacle perception in the environment for autonomous navigation, and (b) together with the GPS unit and the IMU for ego-motion estimation. The goal is to use the on-board processed ego-motion as an initial estimate for the orientation of the images of the high-resolution camera, which takes images at about 1 Hz, for near real-time semantic surface reconstruction on a ground station.
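With omnidirectional (fisheye) cameras, image observations are naturally modelled as unit direction rays rather than planar image coordinates, and the fixed rigid transformation from each camera to the rig maps those rays into a common frame. The following is a minimal sketch under these assumptions; the rotation matrices and function names are illustrative, not from the paper.

```python
import math

def normalize(v):
    """Scale a 3D vector to unit length, giving a ray direction."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def ray_to_rig_frame(ray_cam, R_cam_to_rig):
    """Rotate a unit ray from the camera frame into the rig frame.

    Ray directions are unaffected by the translational part of a rigid
    transformation, so only the 3x3 rotation is applied.
    """
    return tuple(sum(R_cam_to_rig[i][j] * ray_cam[j] for j in range(3))
                 for i in range(3))

# Usage with an identity rotation (camera aligned with the rig frame):
ray = normalize((1.0, 2.0, 2.0))      # approximately (1/3, 2/3, 2/3)
R_identity = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
print(ray_to_rig_frame(ray, R_identity))
```

A ray representation covers fields of view beyond 180 degrees, which a planar projection cannot, and it also accommodates directions to scene points at infinity.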
