Abstract

Estimating the position and orientation (pose) of a moving platform in a three-dimensional (3D) environment is of significant importance in many areas, such as robotics and sensing. To perform this task, one can employ a single sensor or multiple sensors. Multi-sensor fusion has been used to improve estimation accuracy and to compensate for individual sensor deficiencies. Unlike previous work in this area, which uses sensors capable of 3D localization to estimate the full pose of a platform (such as an unmanned aerial vehicle or drone), in this work we employ data from a 2D light detection and ranging (LiDAR) sensor, which can estimate the pose only in a 2D plane. We fuse it in an extended Kalman filter with data from camera and inertial sensors, showing that, despite the incomplete estimate from the 2D LiDAR, the overall estimated 3D pose can be improved. We also compare this scenario with the case where the 2D LiDAR is replaced by a 3D LiDAR with similar characteristics but with the ability to estimate the complete 3D pose.
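
To illustrate how a planar measurement can still correct a 3D state, the following is a minimal, hypothetical sketch, not the authors' implementation. It assumes a 6-DoF state vector [px, py, pz, roll, pitch, yaw] with Euler-angle orientation, and assumes the 2D LiDAR front end produces a direct planar pose measurement [px, py, yaw]; the function name `ekf_update_2d_lidar` and all numerical values are illustrative. Because the assumed measurement model is linear, H is a constant selection matrix; in a full EKF it would be the Jacobian of the measurement function.

```python
import numpy as np

def ekf_update_2d_lidar(x, P, z, R):
    """Hypothetical EKF update for a planar LiDAR pose measurement z = [px, py, yaw].

    x: (6,) state [px, py, pz, roll, pitch, yaw]
    P: (6, 6) state covariance
    R: (3, 3) measurement noise covariance
    """
    # The 2D LiDAR observes only px, py, and yaw, so H selects those rows.
    H = np.zeros((3, 6))
    H[0, 0] = 1.0  # px
    H[1, 1] = 1.0  # py
    H[2, 5] = 1.0  # yaw
    y = z - H @ x                   # innovation (yaw wrapping omitted for brevity)
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(6) - K @ H) @ P
    return x_new, P_new

# Example: a slightly off prior corrected by a planar LiDAR fix.
x = np.array([1.0, 2.0, 0.5, 0.0, 0.0, 0.10])
P = np.eye(6) * 0.2
z = np.array([1.10, 2.05, 0.12])   # LiDAR-estimated [px, py, yaw]
R = np.eye(3) * 0.01
x, P = ekf_update_2d_lidar(x, P, z, R)
print(x)
```

The key point of the sketch is the partial measurement matrix: pz, roll, and pitch receive no direct correction, yet they can still improve through the cross-covariance terms in P, which is consistent with the abstract's claim that an incomplete 2D estimate can benefit the overall 3D pose.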
