Abstract

This paper presents an inertial sensor-based localization method that uses a deep neural network (DNN)-based velocity estimator. Among the sensors commonly used for localization, inertial sensors are less affected by the surrounding environment. An inertial sensor consists of an accelerometer and a gyroscope, from whose measurements the pose of a robot can be estimated. However, inertial sensors must be combined with other sensors for localization because their inherently large drift errors are difficult to prevent. To overcome this problem, a DNN-based velocity estimator is proposed that reduces the position error by learning the data patterns of the inertial sensor and by limiting the range of the estimated velocity. The estimator comprises a convolutional neural network, a fully connected layer, and a smoothing filter. To limit the range of the estimated velocity, the velocity range is divided into as many intervals as there are classes, and each interval is assigned to one class. The relationship between consecutive classes is learned using a smoothing filter. Dataset-based experiments are performed to train the DNN-based velocity estimator and to evaluate the performance of the proposed localization method. The experimental results show that the proposed method reduces the position error by 99% compared with integration-based localization and by 91% compared with extended Kalman filter-based localization.
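
The following is a minimal PyTorch sketch of the pipeline described above: the velocity range is discretized into classes, a small 1-D CNN followed by a fully connected layer predicts a class from a window of accelerometer and gyroscope samples, and a moving-average filter stands in for the smoothing over neighboring classes. The window length, velocity range, number of classes, and all layer sizes are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 64          # assumed number of velocity classes
V_MIN, V_MAX = 0.0, 2.0   # assumed velocity range in m/s
WINDOW = 200              # assumed IMU window length in samples

class VelocityClassifier(nn.Module):
    """1-D CNN + fully connected layer over a window of 6-axis IMU data."""
    def __init__(self):
        super().__init__()
        # 6 input channels: 3-axis accelerometer + 3-axis gyroscope
        self.cnn = nn.Sequential(
            nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(64, NUM_CLASSES)

    def forward(self, imu_window):                 # (batch, 6, WINDOW)
        features = self.cnn(imu_window).squeeze(-1)
        return self.fc(features)                   # class logits

def class_to_velocity(class_index):
    """Map a class index to the center of its velocity interval."""
    bin_width = (V_MAX - V_MIN) / NUM_CLASSES
    return V_MIN + (class_index.float() + 0.5) * bin_width

def smooth(probabilities, kernel_size=5):
    """Moving-average smoothing over neighboring classes (a simple stand-in
    for the paper's smoothing filter)."""
    kernel = torch.full((1, 1, kernel_size), 1.0 / kernel_size)
    padded = probabilities.unsqueeze(1)            # (batch, 1, NUM_CLASSES)
    smoothed = F.conv1d(padded, kernel, padding=kernel_size // 2)
    return smoothed.squeeze(1)

# Usage: estimate velocity from one window of (synthetic) IMU data.
model = VelocityClassifier()
imu = torch.randn(1, 6, WINDOW)
probs = smooth(torch.softmax(model(imu), dim=-1))
velocity = class_to_velocity(probs.argmax(dim=-1))
```

Predicting a bounded class rather than a free-valued regression output is what limits the estimated velocity to a fixed range; the smoothing step reflects the idea that adjacent classes correspond to similar velocities.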
