Abstract

This paper presents a novel paradigm for global localization motivated by human visual systems (HVSs). HVSs actively use object recognition results to determine self-position and viewing direction. The proposed localization paradigm consists of three parts: panoramic image acquisition, multiple object recognition, and grid-based localization. Multiple-object recognition results from panoramic images are used in the localization part. High-level object information is useful not only for global localization but also for robot-object interactions. Metric global localization (position and viewing direction) is conducted from the bearing information of objects recognized in just one panoramic image. The feasibility of the novel localization paradigm is validated experimentally.
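The grid-based localization step described above can be illustrated with a small sketch: given an object-based map (object name to world position) and the bearings of objects recognized in one panoramic image, an exhaustive grid search over candidate positions and viewing directions keeps the pose whose predicted bearings best match the measurements. All map entries, grid extents, and step sizes below are illustrative assumptions, not values from the paper.

```python
import math

def angle_diff(a, b):
    """Smallest signed difference between two angles (radians)."""
    return (a - b + math.pi) % (2 * math.pi) - math.pi

def bearing(pose, landmark):
    """Bearing of a landmark in the robot frame at pose = (x, y, heading)."""
    x, y, theta = pose
    lx, ly = landmark
    return angle_diff(math.atan2(ly - y, lx - x), theta)

def localize(object_map, measured, grid_step=0.5,
             heading_step=math.radians(5), extent=5.0):
    """Grid search over position and viewing direction: score each candidate
    pose by the total angular mismatch between predicted and measured object
    bearings, and return the best-scoring pose."""
    best, best_err = None, float("inf")
    n = int(extent / grid_step) + 1
    n_headings = round(2 * math.pi / heading_step)
    for i in range(n):
        for j in range(n):
            for k in range(n_headings):
                pose = (i * grid_step, j * grid_step, k * heading_step)
                err = sum(abs(angle_diff(bearing(pose, object_map[name]), b))
                          for name, b in measured.items())
                if err < best_err:
                    best, best_err = pose, err
    return best

# Demo with a made-up object map and bearings simulated from a known pose.
object_map = {"tv": (4.0, 1.0), "sofa": (1.0, 3.5), "door": (0.0, 0.0)}
true_pose = (2.0, 2.0, math.radians(30))
measured = {name: bearing(true_pose, pos) for name, pos in object_map.items()}
print(localize(object_map, measured))  # ≈ (2.0, 2.0, 0.5236)
```

With three or more non-degenerate landmark bearings, a single panoramic view is enough to pin down both position and viewing direction, which is the point the abstract makes about using only one image.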

Highlights

  • In the near future, service robots, such as care robots, education robots, and home robots, will be all around us

  • Multiple object recognition is performed from a panoramic image, and mobile robot localization is conducted using the bearing information of objects

  • In a common-frame constellation model (CFCM), each part is represented by a pose and an appearance index into shared feature libraries constructed by k-means clustering
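The shared feature libraries mentioned in the CFCM highlight can be built by k-means clustering of local appearance descriptors; each part's appearance is then stored compactly as an index into the library. The sketch below uses random vectors in place of real image descriptors and a plain NumPy k-means; all sizes and names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for local appearance descriptors extracted from training images
# (real descriptors around interest points would replace these).
descriptors = rng.normal(size=(200, 16))

def kmeans(data, k, iters=20):
    """Plain k-means: returns the cluster centers, which serve as the
    shared feature library."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest center.
        dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned descriptors.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = data[labels == c].mean(axis=0)
    return centers

library = kmeans(descriptors, k=8)

def appearance_index(descriptor, library):
    """Quantize one part's appearance to an index into the shared library."""
    return int(np.linalg.norm(library - descriptor, axis=1).argmin())

print(appearance_index(descriptors[0], library))  # an index in [0, 8)
```

Storing an index instead of the raw descriptor is what lets many object models share one appearance codebook.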


Summary

Introduction

Service robots, such as care robots, education robots, and home robots, will be all around us. The previous feature point matching-based metric localization paradigm can provide accurate location information using multiple frames or stereo images, but the location estimation error can be large when only one frame is used, and those approaches cannot provide high-level information for robot-object interactions. In the proposed paradigm, multiple object recognition is performed from a panoramic image, and mobile robot localization is conducted using the bearing information of the recognized objects. This paradigm can estimate both the spatial position and the viewing direction using only one image.

Figure 3: Proposed novel paradigm of localization using high-level object information. Given the object database (object DB) and object-based map, the robot location is estimated through the object recognition module and the bearing measurement-based localization module.
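The bearing measurements that feed the localization module come directly from where a recognized object appears in the panorama: in a full 360-degree cylindrical panoramic image, the horizontal pixel coordinate maps linearly to a bearing angle. The sketch below assumes such a cylindrical panorama; the calibration offset is a hypothetical parameter.

```python
import math

def column_to_bearing(column, image_width, heading_offset=0.0):
    """Map a horizontal pixel coordinate in a full 360-degree cylindrical
    panorama to a bearing (radians) in the robot frame. `heading_offset`
    is an assumed calibration term aligning column 0 with the robot's
    forward direction."""
    bearing = 2 * math.pi * column / image_width + heading_offset
    # Normalize to [-pi, pi).
    return (bearing + math.pi) % (2 * math.pi) - math.pi

# e.g. an object recognized a quarter of the way across a 2048-pixel panorama
print(column_to_bearing(512, 2048))  # ≈ pi/2 (90 degrees)
```

One panoramic image therefore yields a bearing for every recognized object simultaneously, which is what makes single-image global localization possible.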

High-Level Localization Paradigm
Panoramic Image Acquisition
Multiple Object Recognition
Grid-Based Global Localization
Experimental Results
Conclusions and Discussion
