Abstract

The purpose of this study is to assess the performance of a newly developed algorithm for visual-inertial navigation of robotic systems in GNSS-denied environments. The proposed algorithm fuses position and heading information from inertial sensors and a camera, and was tested in an indoor environment. It is based on a loosely coupled extended Kalman filter (EKF) that integrates the position and attitude estimates of a three-dimensional reduced inertial sensor system (3D-RISS) with those of a visual odometry (VO) algorithm that processes consecutive camera frames. Three accelerometers, one gyroscope, and wheel odometers are used rather than a full inertial measurement unit (IMU) to limit the growth of gyroscope drift errors. VO-based ego-motion estimation proceeds in multiple steps, namely (1) a feature detector that runs periodically to detect new features entering the camera's field of view, (2) pyramidal Lucas-Kanade optical flow that tracks the displacement of the detected features between consecutive camera frames, and (3) estimation of the essential matrix to recover the platform's motion in the world frame. Decomposing the essential matrix into rotation and translation parameters allows the motion to be expressed as metric measurements. Since monocular VO is employed in this paper, the wheel odometer provides the scale factor. The proposed method is evaluated on a teleoperated unmanned ground vehicle (UGV) in an indoor environment to assess its positioning performance. Meter-level loop-closure errors were obtained for trajectories longer than 250 meters and lasting several minutes.
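The abstract outlines a standard monocular VO front end (feature detection, pyramidal Lucas-Kanade tracking, essential-matrix estimation and decomposition). The sketch below illustrates one frame-to-frame step of such a pipeline using OpenCV; it is not the authors' implementation, and the camera intrinsic matrix `K` and the parameter values are illustrative assumptions only.

```python
import cv2
import numpy as np

# Hypothetical camera intrinsics; a real system would use calibrated values.
K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

def vo_step(prev_gray, curr_gray, K):
    """Estimate frame-to-frame rotation R and unit-scale translation t."""
    # (1) Detect features in the previous frame.
    pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)

    # (2) Track them into the current frame with pyramidal Lucas-Kanade optical flow.
    pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                   pts_prev, None,
                                                   winSize=(21, 21), maxLevel=3)
    good_prev = pts_prev[status.ravel() == 1]
    good_curr = pts_curr[status.ravel() == 1]

    # (3) Estimate the essential matrix (RANSAC) and decompose it into R and t.
    E, mask = cv2.findEssentialMat(good_curr, good_prev, K,
                                   method=cv2.RANSAC, prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K, mask=mask)

    # Monocular VO recovers translation only up to scale; per the abstract,
    # the wheel odometer would supply the metric scale factor.
    return R, t
```

In a loosely coupled scheme such as the one described, the pose increments produced by this kind of VO step would be fed to the EKF as measurement updates alongside the 3D-RISS mechanization.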
