Abstract

Mobile robots must be able to obtain an accurate map of their surroundings in order to move within them. To detect materials that are invisible to one sensor but not to others, a fusion scheme combining at least two sensors is necessary; with it, a 2D occupancy map in which glass obstacles are identified can be generated. An artificial neural network is used to fuse data from a tri-sensor setup (RealSense stereo camera, 2D LiDAR, and ultrasonic sensors) capable of detecting glass and other materials typically found in indoor environments that may or may not be visible to a traditional 2D LiDAR sensor, hence the expression "improved LiDAR". A preprocessing scheme is implemented to filter outliers, project the 3D point cloud onto a 2D plane, and adjust the distance data. With a neural network as the data fusion algorithm, all the information is integrated into a single, more accurate distance-to-obstacle reading, from which a 2D Occupancy Grid Map (OGM) that accounts for every sensor's information is generated. The Robotis TurtleBot3 Waffle Pi is used as the experimental platform to evaluate the different fusion strategies. Test results show that with such a fusion algorithm it is possible to detect glass and other obstacles with an estimated root-mean-square error (RMSE) of 3 cm across multiple fusion strategies.
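The preprocessing step described above (filtering outliers and projecting the 3D point cloud onto a 2D plane) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the height band `[z_min, z_max]` used to reject floor and ceiling points, the 360-bin angular resolution, and the nearest-return-per-bin rule are all assumptions made for the example.

```python
import numpy as np

def pointcloud_to_2d_ranges(points, z_min=0.05, z_max=0.40, n_bins=360):
    """Project a 3D point cloud (N x 3, metres, sensor frame) onto the
    horizontal plane, keeping the nearest return per angular bin.
    Points outside the [z_min, z_max] height band are treated as
    outliers (floor / ceiling hits) and discarded."""
    pts = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]
    ranges = np.full(n_bins, np.inf)          # inf = no return in that bin
    if pts.size == 0:
        return ranges
    angles = np.arctan2(pts[:, 1], pts[:, 0])  # bearing in [-pi, pi)
    dists = np.hypot(pts[:, 0], pts[:, 1])     # planar distance to obstacle
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    np.minimum.at(ranges, bins, dists)         # keep nearest hit per bin
    return ranges

# Toy cloud: a wall 1 m ahead plus one floor point that must be rejected.
cloud = np.array([[1.0, 0.0, 0.2],
                  [1.0, 0.1, 0.3],
                  [0.5, 0.0, -0.1]])   # below z_min -> filtered out
r = pointcloud_to_2d_ranges(cloud)
```

The resulting 1D range array has the same shape as a 2D LiDAR scan, which makes the subsequent per-angle fusion of the three sensors straightforward.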

Highlights

  • Sensor data fusion is a crucial task when processing information from a multiple-sensor setup in a mobile robot [1]

  • Fusion strategies can be based on a probabilistic approach, such as factor graphs [3], extended Kalman filters [4], Bayesian methods [5], and particle filters [6], or on artificial intelligence, such as neural networks (NN) [7] or fuzzy logic [8,9]

  • Extended Kalman filters [13] were used by Dobrev, Gulden, and Vossiek to improve an indoor positioning system using multi-modal sensor fusion for service-robot applications



Introduction

Sensor data fusion is a crucial task when processing information from a multiple-sensor setup in a mobile robot [1]. Multiple distance-measurement devices with different sensing technologies (2D Laser Imaging Detection and Ranging (LiDAR), stereo camera, and ultrasonic sensors) on the same robotic platform allow the detection of a wider range of obstacle types. Extended Kalman filters [13] were used by Dobrev, Gulden, and Vossiek to improve an indoor positioning system using multi-modal sensor fusion for service-robot applications. That implementation requires a fixed-in-place sensor infrastructure, and its laser-scanner measurement errors reached up to 30 cm in the presence of glass doors. The authors of [19] propose an approach for the autonomous navigation of mobile robots in faulty situations, where the main objective is to extend the fault-tolerance strategy to simultaneous localization and mapping in the presence of sensor or software faults in the data fusion process.
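To make the neural-network fusion idea concrete, the sketch below trains a tiny MLP to merge three per-angle distance readings into one estimate. Everything here is assumed for illustration: the synthetic noise models (LiDAR overshooting on glass because the beam passes through, while ultrasound still returns), the 3-8-1 architecture, and the plain full-batch gradient descent. The paper's actual network configuration and training data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- synthetic readings (metres); noise models are illustrative only ---
n = 2000
true_d = rng.uniform(0.2, 3.0, n)                 # ground-truth distance
ultra  = true_d + rng.normal(0, 0.02, n)          # ultrasound detects glass
stereo = true_d + rng.normal(0, 0.05, n)          # stereo depth, noisier
glass  = rng.random(n) < 0.3                      # 30% glass obstacles
lidar  = np.where(glass,
                  true_d + rng.uniform(0.5, 2.0, n),  # beam passes through glass
                  true_d + rng.normal(0, 0.01, n))    # otherwise very accurate
X = np.stack([ultra, stereo, lidar], axis=1)      # (n, 3) sensor triplets
y = true_d[:, None]

# --- tiny 3-8-1 MLP trained with full-batch gradient descent ---
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)              # ReLU hidden layer
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)                 # MSE before training

lr = 1e-3
for _ in range(3000):
    h, pred = forward(X)
    g = 2 * (pred - y) / n                        # dMSE/dpred
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = (g @ W2.T) * (h > 0)                     # backprop through ReLU
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

The point of the exercise is that the network can learn to discount the LiDAR channel when it disagrees with the other two sensors, which a fixed weighted average cannot do.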
