Abstract

Indoor localization, navigation and mapping systems rely heavily on initial sensor pose information to achieve high accuracy. Most existing indoor mapping and navigation systems cannot initialize the sensor pose automatically, and consequently cannot perform relocalization or recover from a pose estimation failure. For most indoor environments, a map or a 3D model is often available and can provide useful information for relocalization. This paper presents a novel relocalization method for lidar sensors in indoor environments that estimates the initial lidar pose using a CNN pose regression network trained on a 3D model. A set of synthetic lidar frames is generated from the 3D model with known poses; each frame is rendered as a one-channel range image, and these images are used to train the CNN pose regression network from scratch to predict the sensor location and orientation. The network trained on synthetic range images is then used to estimate the pose of the lidar from real range images captured in the indoor environment. The results show that the proposed CNN regression network can learn from synthetic lidar data and estimate the pose of real lidar data with an accuracy of 1.9 m and 8.7 degrees.
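The paper does not give the network layers here, so the following is only a minimal sketch of the described idea: a CNN that takes a one-channel range image and regresses a 3D position plus a quaternion orientation, trained with a PoseNet-style weighted loss. It assumes PyTorch; the class name PoseRegressionCNN, the layer sizes and the weight beta are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (NOT the authors' exact architecture) of a pose regression
# CNN for one-channel lidar range images: outputs (x, y, z) and a quaternion.
import torch
import torch.nn as nn

class PoseRegressionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Convolutional feature extractor for a 1-channel range image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(128 * 4 * 4, 256), nn.ReLU())
        self.fc_xyz = nn.Linear(256, 3)   # sensor position
        self.fc_quat = nn.Linear(256, 4)  # sensor orientation as a quaternion

    def forward(self, x):
        h = self.fc(self.features(x))
        xyz = self.fc_xyz(h)
        quat = self.fc_quat(h)
        # Normalize so the output represents a valid rotation.
        quat = quat / quat.norm(dim=1, keepdim=True).clamp(min=1e-8)
        return xyz, quat


def pose_loss(pred_xyz, pred_quat, gt_xyz, gt_quat, beta=100.0):
    """PoseNet-style loss: translation error plus beta-weighted rotation error."""
    t_err = torch.norm(pred_xyz - gt_xyz, dim=1).mean()
    gt_q = gt_quat / gt_quat.norm(dim=1, keepdim=True)
    q_err = torch.norm(pred_quat - gt_q, dim=1).mean()
    return t_err + beta * q_err
```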

Highlights

  • Lidar SLAM (Simultaneous Localization and Mapping) has been widely studied in recent decades for data collection and mapping in indoor environments

  • We propose a new lidar relocalization method based on an existing 3D model of the indoor environment, which is often available or can be created from a floor plan

  • We show that a convolutional neural network (CNN) regression network trained with synthetic range images generated from the 3D model can accurately estimate the pose of real range images captured by the lidar sensor in the indoor environment

Summary

INTRODUCTION

Lidar SLAM (Simultaneous Localization and Mapping) has been widely studied in recent decades for data collection and mapping in indoor environments. Conventional lidar relocalization methods estimate the pose using a map previously captured by the sensor (Wang et al., 2017; Tian et al., 2019). This poses a practical challenge, since generating that map in the first place depends on the relocalization ability to recover from possible failures. Our contributions are twofold: (1) we propose a new lidar relocalization method based on an existing 3D model of the indoor environment, and (2) we show that a CNN regression network trained with synthetic range images generated from the 3D model can accurately estimate the pose of real range images captured by the lidar sensor in the indoor environment. A sketch of the synthetic range image generation is given below.
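The paper's data generation step is summarized only at a high level here, so the following is a minimal sketch under stated assumptions: the 3D model has already been sampled into a world-frame point cloud, and a synthetic one-channel range image is produced for a known pose by spherical projection. The function name render_range_image, the image size and the field-of-view parameters are hypothetical, not values from the paper.

```python
# Minimal sketch: render a synthetic (h, w) range image from a point cloud
# sampled from the 3D model, for a known sensor pose x_world = R @ x_sensor + t.
import numpy as np

def render_range_image(model_points, R, t, h=64, w=1024,
                       fov_up=15.0, fov_down=-15.0, max_range=50.0):
    # Transform world-frame points into the sensor frame: R^T (p - t).
    pts = (model_points - t) @ R
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    r = np.linalg.norm(pts, axis=1)

    # Spherical angles: azimuth in [-pi, pi], elevation within the lidar FOV.
    yaw = np.arctan2(y, x)
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))

    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((1.0 - (yaw + np.pi) / (2.0 * np.pi)) * w).astype(int) % w
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).astype(int)

    # Keep the nearest return per pixel; empty pixels stay at max_range.
    img = np.full((h, w), max_range, dtype=np.float32)
    valid = (v >= 0) & (v < h) & (r > 0.1) & (r < max_range)
    np.minimum.at(img, (v[valid], u[valid]), r[valid])
    return img
```

Rendering such images over a grid of known poses would yield pose-labelled training pairs of the kind the abstract describes.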

RELATED WORK
CNN Regression Network Architecture
Loss Function
Training Set and Test Set Generation
Training and Testing with Two CNN Regression Architectures
CONCLUSION