Abstract

This paper deals with sensor fusion of magnetic, angular rate, and gravity (MARG) sensors. Its main contribution is sensor fusion performed by supervised learning, i.e., parallel processing of the different kinds of measured data and estimation of position in both periodic and non-periodic cases. During the learning phase, the position estimated by sensor fusion is compared with the position data of a motion capture system. The main challenge is avoiding the error caused by the implicit integral calculation inherent to MARG processing. Several filter-based signal processing methods exist for disturbance and noise estimation; these are calculated for each sensor separately. Such classical methods can be used both to reduce disturbance and noise and to extract hidden information from the signals. This paper examines the different types of noise and proposes a machine learning-based method for calculating position and orientation directly from nine separate sensor channels. The method performs disturbance and noise reduction in addition to sensor fusion. The proposed method was validated by experiments, which provided promising results on both periodic and translational motion.

Highlights

  • Estimation of the orientation and position of a robot is essential for robot navigation; the navigation system is an important part of an autonomous mobile robot

  • On the other hand, image processing or 3D point cloud-based applications have a high demand for computational capacity to perform simultaneous localization and mapping (SLAM) [11]

  • Inside the simulations we tested the effect of different noise types and different neural network layer types on controlled sinusoidal functions
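The noise-injection experiment described above can be sketched as follows. This is a minimal illustration, assuming NumPy; the signal frequency, sampling rate, and noise magnitudes are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
t = np.linspace(0.0, 2.0, 400)           # assumed: 2 s window, 200 Hz sampling
clean = np.sin(2 * np.pi * 1.0 * t)      # assumed 1 Hz reference sinusoid

# Additive white Gaussian noise, modeling sensor thermal noise
white = clean + rng.normal(0.0, 0.05, t.size)

# Slowly drifting bias (random walk), the kind of error that
# accumulates when rate-gyro readings are integrated
bias = np.cumsum(rng.normal(0.0, 0.002, t.size))
drifting = clean + bias

def rmse(est: np.ndarray, ref: np.ndarray) -> float:
    """Root-mean-square error between an estimate and the reference signal."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))
```

Comparing `rmse(white, clean)` and `rmse(drifting, clean)` on such controlled signals makes it possible to study how each noise type degrades a position estimate before any neural network is involved.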


Introduction

Estimation of the orientation and position of a robot is essential for robot navigation; the navigation system is an important part of an autonomous mobile robot. The concept of iSpace is based on the idea that computationally expensive algorithms can be outsourced from the onboard computer of the robot into the robot's environment [9]. In this setup, there is no need for a powerful onboard computer with high computational capacity [10]. On the other hand, image processing or 3D point cloud-based applications have a high demand for computational capacity to perform simultaneous localization and mapping (SLAM) [11]. These methods are not suitable for small robots with limited computational capacity.

