Abstract

Recently, there has been increasing interest in studying the task coordination of aerial and ground robots. When a robot begins navigation in an unknown area, it has no information about the surrounding environment. Accordingly, for robots to perform tasks based on location information, they need a simultaneous localization and mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the robot's current location on that map. This paper presents a localization method based on cooperation between aerial and ground robots in an indoor environment. The proposed method allows a ground robot to reach its destination accurately by using a 2.5D elevation map built from a low-cost RGB-D (Red, Green, Blue-Depth) sensor and a 2D laser sensor mounted on an aerial robot. The 2.5D elevation map is formed by projecting the height of obstacles, obtained from the RGB-D sensor's depth information, onto a grid map generated with the 2D laser sensor and scan matching. Experimental results demonstrate the effectiveness of the proposed method in terms of location-recognition accuracy and computing speed.

Highlights

  • When a robot begins navigation in an unknown area, it has no information about the surrounding environment

  • For robots to perform tasks based on location information, they need a Simultaneous Localization and Mapping (SLAM) process that uses sensor information to draw a map of the environment, while simultaneously estimating the robot’s current location on the map

  • We propose a method for accurate map generation and real-time location recognition of a mobile robot in indoor environments by combining the advantages of 3D and 2D maps through cooperation between a UAV and a UGV


Introduction

When a robot begins navigation in an unknown area, it has no information about the surrounding environment. For robots to perform tasks based on location information, they need a Simultaneous Localization and Mapping (SLAM) process that uses sensor information to draw a map of the environment while simultaneously estimating the robot's current location on that map. Recently commercialized cleaning robots with on-board location recognition set the device's position at power-on as a reference point. The robot then draws a map with information collected from built-in sensors (e.g., vision sensors, distance sensors), while simultaneously estimating its own location in real time to distinguish between areas that have been cleaned and areas that still need cleaning. Distance sensors typically include laser scanners, infrared scanners, ultrasonic sensors, LIDAR, and RADAR, whereas vision sensors include stereo cameras, mono cameras, omnidirectional cameras, and the Kinect [1,2,3,4].
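The core map representation described above, a 2.5D elevation map, stores one height value per cell of a 2D grid. As an illustration only (the function name, grid resolution, and centered-origin layout below are assumptions, not details taken from the paper), projecting a cloud of 3D points from a depth sensor into such a grid can be sketched as:

```python
import numpy as np

def build_elevation_map(points, resolution=0.05, grid_size=100):
    """Project 3D points (x, y, z in metres) onto a 2.5D elevation grid.

    Illustrative sketch: each cell stores the maximum observed height (z)
    of the points falling into it; unobserved cells stay at -inf.
    The 5 cm resolution and centred origin are arbitrary assumptions.
    """
    elevation = np.full((grid_size, grid_size), -np.inf)
    # Convert metric x/y coordinates to grid indices (map origin at centre).
    ix = np.floor(points[:, 0] / resolution).astype(int) + grid_size // 2
    iy = np.floor(points[:, 1] / resolution).astype(int) + grid_size // 2
    # Keep only points that land inside the grid bounds.
    valid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    # Record the highest point seen in each occupied cell
    # (unbuffered max so repeated indices are handled correctly).
    np.maximum.at(elevation, (ix[valid], iy[valid]), points[valid, 2])
    return elevation
```

A ground robot could then treat cells whose elevation exceeds its clearance as obstacles when planning a path over the grid.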
