Abstract

Self-localization in autonomous robots is one of the fundamental issues in the development of intelligent robots, and processing raw sensory information into useful features is an integral part of this problem. In a typical scenario, there are several choices for the feature extraction algorithm, and each has its strengths and weaknesses depending on the characteristics of the environment. In this work, we introduce a localization algorithm that captures the quality of a feature type based on the local environment and makes a soft selection of feature types across different regions. A batch expectation–maximization algorithm is developed for both discrete and Monte Carlo localization models, exploiting the probabilistic pose estimates of the robot without requiring ground-truth poses and treating the different observation types as black-box algorithms. We tested our method in simulations, on data collected from an indoor environment with a custom robot platform, and on a public data set. The results are compared with the individual feature types as well as a naive fusion strategy.

Highlights

  • Robot navigation has an important place in the development of intelligent and autonomous robots

  • A feature type selection algorithm based on the local environment is developed for self-localization

  • The main contribution of this method is the use of more than one feature type in a complementary way to increase overall robustness and localization accuracy


Summary

Introduction

Robot navigation has an important place in the development of intelligent and autonomous robots. The localization problem is the estimation of the robot pose within a known map of the environment based on sensory observations. The first step is called prediction, and it computes $p(s_t \mid o_{1:t-1})$, that is, the new belief state before integrating the most recent observation. It follows from the definition of conditional probability, given the previous state $s_{t-1}$, as

$$p(s_t \mid o_{1:t-1}) = \sum_{i=1}^{N} p(s_t \mid s_{t-1} = i)\, p(s_{t-1} = i \mid o_{1:t-1}) \qquad (2)$$

where the first term is the state transition model (assumed to be known), the second term is the previous belief state, and the summation is taken over all possible values of the previous state.
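
As a concrete illustration of Eq. (2), the sketch below implements the discrete prediction step in Python. The four-state example and the transition matrix values are hypothetical, chosen only for demonstration, and are not taken from the paper.

```python
import numpy as np

def predict(belief, transition):
    """Prediction step of a discrete Bayes filter, as in Eq. (2).

    belief     : (N,) array, p(s_{t-1} = i | o_{1:t-1}) for each state i
    transition : (N, N) array, transition[j, i] = p(s_t = j | s_{t-1} = i)
    """
    predicted = transition @ belief     # sum over i of p(s_t | s_{t-1}=i) * p(s_{t-1}=i | o_{1:t-1})
    return predicted / predicted.sum()  # renormalize to guard against numerical drift

# Hypothetical 4-state example: the robot usually advances one cell.
belief = np.array([1.0, 0.0, 0.0, 0.0])
transition = np.array([
    [0.1, 0.0, 0.0, 0.1],
    [0.8, 0.1, 0.0, 0.0],
    [0.1, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.9, 0.9],
])  # column i is p(s_t | s_{t-1} = i); each column sums to 1
print(predict(belief, transition))  # -> [0.1, 0.8, 0.1, 0.0]
```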

Algorithm step 1: Initialize a close-to-uniform distribution over feature types for all regions
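
The step above initializes the per-region feature-type weights near uniform before the EM updates. A minimal sketch of such an initialization is shown below; the function name, the small symmetry-breaking noise, and the specific dimensions are assumptions introduced here for illustration.

```python
import numpy as np

def init_feature_weights(num_regions, num_feature_types, noise=0.01, seed=0):
    """Per-region feature-type weights initialized close to uniform.

    A small amount of random noise breaks symmetry so that later
    updates can diverge between regions; each row sums to 1.
    """
    rng = np.random.default_rng(seed)
    weights = np.full((num_regions, num_feature_types), 1.0 / num_feature_types)
    weights += rng.uniform(0.0, noise, size=weights.shape)
    return weights / weights.sum(axis=1, keepdims=True)

weights = init_feature_weights(num_regions=20, num_feature_types=3)
print(weights[0])  # roughly [0.33, 0.33, 0.33]
```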
Experiments and results
Conclusions
Declaration of conflicting interests