Abstract

Multi-sensor integrated navigation technology has been applied to the indoor navigation and positioning of robots. To address the low navigation accuracy and error accumulation that affect mobile robots relying on a single sensor, this paper presents an indoor mobile robot positioning method based on a combination of visual and inertial sensors. First, the visual sensor (Kinect) acquires the color image and the depth image, and feature matching is performed with an improved scale-invariant feature transform (SIFT) algorithm. Then, the absolute orientation algorithm calculates the rotation matrix and translation vector of the robot between two consecutive image frames. An inertial measurement unit (IMU) offers high-frequency updates and rapid, accurate positioning, and can compensate for the Kinect's limited speed and precision; it provides acceleration, angular velocity, magnetic field strength, and temperature data in real time. The visual and inertial data are loosely coupled: the differences between the positions and attitudes output by the two sensors are optimally combined by an adaptive fading extended Kalman filter to estimate the errors. Finally, several experiments show that this method significantly improves the accuracy of indoor positioning of mobile robots based on visual and inertial sensors.
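The absolute-orientation step described above can be illustrated with a minimal sketch of one common closed-form solution: the SVD-based estimate of a rigid transform between two sets of matched 3D feature points from consecutive Kinect frames. The function name, the use of NumPy, and the SVD formulation are illustrative assumptions; the paper's absolute orientation algorithm may differ in detail.

```python
import numpy as np

def estimate_rigid_transform(p_src, p_dst):
    """Estimate rotation R and translation t such that p_dst ~ R @ p_src + t.

    p_src, p_dst: (N, 3) arrays of matched 3D feature points from two
    consecutive Kinect frames (e.g. Data1 and Data2 after outlier rejection).
    Uses the standard SVD-based closed-form solution.
    """
    # Centroids of both point sets
    c_src = p_src.mean(axis=0)
    c_dst = p_dst.mean(axis=0)

    # Cross-covariance of the centered points
    H = (p_src - c_src).T @ (p_dst - c_dst)

    # SVD gives the optimal rotation; the diag(1, 1, d) term guards
    # against returning a reflection instead of a proper rotation
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

    t = c_dst - R @ c_src
    return R, t
```

Applying the returned R and t to the first frame's points should reproduce the second frame's points up to measurement noise; chaining these frame-to-frame transforms yields the robot's visual motion estimate.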

Highlights

  • With the continuous expansion of the robot field, service robots have begun to appear in recent years, mainly engaged in maintenance, repair, transportation, cleaning, security, and other work. The prerequisite for using these robots is to provide them with three-dimensional information about their current surroundings and, at the same time, to determine their position accurately. Traditional GPS positioning technology is limited by the diversity and uncertainty of the robots' working environments; for example, it is difficult to obtain a GPS position among very tall buildings, deep underwater, or indoors.

  • A laser radar has often been used as the sensor for collecting surrounding information and for positioning, that is, laser SLAM, but laser radar is expensive.

  • After applying the improved scale-invariant feature transform (SIFT) algorithm and the random sample consensus (RANSAC) algorithm, the three-dimensional coordinates of the feature points captured by the Kinect camera at the first position are recorded as Data1, and those obtained at the second position as Data2; Data1, Data2, Depth1, and Depth2 are then processed by the bubble sorting method and averaged (see the sketch after this list).

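As a rough illustration of the sorting-and-averaging step mentioned in the last highlight, the sketch below bubble-sorts a batch of depth or coordinate samples and averages them. The optional trimming of the extreme values is an assumption added for illustration; the highlight only states that the data are bubble-sorted and averaged.

```python
def bubble_sort(values):
    """Plain bubble sort, as referenced in the highlight above."""
    vals = list(values)
    n = len(vals)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if vals[j] > vals[j + 1]:
                vals[j], vals[j + 1] = vals[j + 1], vals[j]
    return vals

def sorted_average(values, trim_ratio=0.1):
    """Sort the samples, optionally drop the extremes, and average.

    The 10% trim on each side is an illustrative assumption intended to
    suppress depth outliers; it is not stated in the paper.
    """
    vals = bubble_sort(values)
    k = int(len(vals) * trim_ratio)
    core = vals[k:len(vals) - k] if len(vals) > 2 * k else vals
    return sum(core) / len(core)
```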

Summary

Introduction

With the continuous expansion of the robot field, service robots have begun to appear in recent years, mainly engaged in maintenance, repair, transportation, cleaning, security, and other work. To achieve both mapping and self-positioning, a robot carrying several sensors must obtain information about its surroundings and build an environment model without prior information while in motion, while simultaneously estimating its own motion trajectory (simultaneous localization and mapping, SLAM) [1]. Kinect has limitations in speed and accuracy when used as a visual sensor to collect surrounding data [3,4,5,6]. An IMU updates at a high frequency and performs well in real time; it can overcome the speed limitation of Kinect but suffers from cumulative errors. Several experiments combining the visual sensor with the IMU are performed.
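The complementary roles of the two sensors can be sketched as a loosely coupled, fading-memory Kalman filter on a single axis: the IMU drives a high-rate prediction, Kinect position fixes arrive at a lower rate and correct the drift, and a fading factor inflates the predicted covariance so older (possibly drifted) information is gradually discounted. This is only a minimal sketch of the idea; the paper's adaptive fading extended Kalman filter operates on position and attitude differences and adapts the fading factor online, and all numeric values below are illustrative assumptions.

```python
import numpy as np

dt = 0.01                        # 100 Hz IMU prediction step (assumed rate)
F = np.array([[1.0, dt],         # state: [position, velocity]
              [0.0, 1.0]])
B = np.array([0.5 * dt**2, dt])  # acceleration input model
Q = 1e-4 * np.eye(2)             # process noise (assumed)
H = np.array([[1.0, 0.0]])       # Kinect measures position only
R = np.array([[1e-2]])           # Kinect measurement noise (assumed)
lam = 1.02                       # fading factor; adaptive in the paper

def predict(x, P, accel):
    """IMU step: propagate the state and inflate covariance by the fading factor."""
    x = F @ x + B * accel
    P = lam * (F @ P @ F.T) + Q
    return x, P

def update(x, P, z_kinect):
    """Kinect step: correct the predicted state with a position measurement."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (np.array([z_kinect]) - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In use, predict() would be called at every IMU sample and update() only when a new Kinect pose is available, which mirrors the high-rate/low-rate division of labor described above.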

Principle of Kinect Camera
Getting Data
SIFT Algorithm
Establishment of Scale Space
Determine the Key Point Direction
Color Invariants
Image Color Invariant Preprocessing
Improvement of SIFT Feature Descriptors
Absolute Orientation Algorithm
Kinect-Based Robot Self-Positioning Indoors
Strapdown Inertial Navigation Principle and Algorithm Design
Adaptive Fading Extended Kalman Filter
Kinect and IMU
Time Synchronization
SIFT Algorithm Verification Experiments
Kalman Filter Experiments
The Size of the Preset Navigation Map
Combined Positioning Experiment
Straight Line Experiment
Elliptical Motion Experiment
Polygon Trajectory
Findings
Conclusions