Abstract

Robust and accurate pose estimation is crucial for many applications in mobile robotics. Extending visual Simultaneous Localization and Mapping (SLAM) with other modalities such as an inertial measurement unit (IMU) can boost robustness and accuracy. However, tight sensor fusion often requires accurate time synchronization of the sensors. Changing exposure times, internal sensor filtering, multiple clock sources, and unpredictable delays from operating system scheduling and data transfer can make sensor synchronization challenging. In this paper, we present VersaVIS, an Open Versatile Multi-Camera Visual-Inertial Sensor Suite intended as an efficient research platform for easy deployment, integration, and extension in many mobile robotic applications. VersaVIS provides a complete, open-source hardware, firmware, and software bundle that time-synchronizes multiple cameras with an IMU and features exposure compensation, host clock translation, and independent as well as stereo camera triggering. The sensor suite supports a wide range of cameras and IMUs to match the requirements of the application. The synchronization accuracy of the framework is evaluated in multiple experiments, achieving a timing accuracy of less than . Furthermore, the applicability and versatility of the sensor suite are demonstrated in multiple applications, including visual-inertial SLAM, multi-camera applications, multi-modal mapping, reconstruction, and object-based mapping.

Highlights

  • Autonomous mobile robots are well established in controlled environments such as factories where they rely on external infrastructure such as magnetic tape on the floor or beacons

  • While frameworks such as VINS-Mono [4] can estimate a time offset online during estimation, convergence of the estimation is improved by accurate timestamping, that is, by assigning the sensor data timestamps that are consistent with the actual measurement times

  • The proposed sensor suite consists of three parts: (i) the firmware running on the microcontroller unit (MCU), (ii) the host driver running on a ROS-enabled machine, and (iii) the hardware trigger printed circuit board (PCB)
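As a minimal illustration of the firmware/host-driver split (a sketch, not the actual VersaVIS implementation), the host driver must pair each image, which arrives over the camera's own data interface, with the corresponding hardware timestamp published by the MCU. The sketch below assumes both streams carry a shared sequence number; `TimestampPairer` and its methods are hypothetical names:

```python
class TimestampPairer:
    """Pairs images with MCU hardware timestamps via a shared sequence number.

    Hypothetical sketch of a host-driver building block: images and timestamp
    messages can arrive in either order, so each is buffered until its
    counterpart with the same sequence number shows up.
    """

    def __init__(self):
        self._images = {}  # seq -> image payload
        self._stamps = {}  # seq -> hardware timestamp (seconds)

    def add_image(self, seq, image):
        """Buffer an image; return (seq, image, stamp) if its stamp is known."""
        self._images[seq] = image
        return self._try_match(seq)

    def add_stamp(self, seq, stamp):
        """Buffer an MCU timestamp; return the pair if its image is known."""
        self._stamps[seq] = stamp
        return self._try_match(seq)

    def _try_match(self, seq):
        # Emit a matched pair once both halves for this sequence number exist.
        if seq in self._images and seq in self._stamps:
            return seq, self._images.pop(seq), self._stamps.pop(seq)
        return None
```

For example, `add_image(7, img)` returns `None` until `add_stamp(7, t)` arrives, at which point the matched triple `(7, img, t)` is emitted; the same holds with the arrival order reversed.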

Summary

Introduction

Autonomous mobile robots are well established in controlled environments such as factories, where they rely on external infrastructure such as magnetic tape on the floor or beacons. Using additional sensor modalities such as inertial measurement units (IMUs) [2,3,4] can improve robustness and accuracy for a wide range of applications. Prior work proposes a synchronization framework for triggered sensors that does not require additional hardware; however, this only works when all sensors are triggered simultaneously, rendering exposure compensation impossible (see Section 2.1.1). It is often impossible to extend those frameworks to enable fusion with other modalities such as Light Detection and Ranging (LiDAR) sensors or external illumination. Public datasets include the KITTI dataset [15] and the North Campus Long-Term (NCLT) dataset. Section 4 demonstrates the Visual-Inertial Sensor Suite (VersaVIS) in multiple applications, while Section 5 provides a conclusion with an outlook on future work.
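To make the exposure-compensation idea concrete: when cameras use different (and changing) exposure times, triggering them all at the same instant does not align the midpoints of their exposures, yet the mid-exposure instant is what the image timestamp should reflect. A minimal sketch of mid-exposure alignment follows; the helper names are hypothetical, not taken from the paper:

```python
def trigger_offset(exposure_s, target_mid_exposure_s):
    """Return the trigger instant so the exposure midpoint lands on the target.

    Firing the trigger half an exposure early centers the exposure interval
    on the desired common timestamp.
    """
    return target_mid_exposure_s - exposure_s / 2.0


def mid_exposure_stamp(trigger_s, exposure_s):
    """Timestamp an image at the middle of its exposure interval."""
    return trigger_s + exposure_s / 2.0
```

For example, with a target mid-exposure time of 0.1 s, a camera exposing for 10 ms would be triggered at 0.095 s and one exposing for 2 ms at 0.099 s; both resulting images receive the same 0.1 s timestamp even though their triggers differ.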

The Visual-Inertial Sensor Suite
Standard Cameras
Other Triggerable Sensors
Other Non-Triggerable Sensors
Synchronizer
IMU Receiver
VersaVIS Triggering Board
Camera-Camera
Camera-IMU
VersaVIS-Host
Visual-Inertial SLAM
Stereo Visual-Inertial Odometry on Rail Vehicle
Multi-Modal Mapping and Reconstruction
Object Based Mapping
Conclusions