Abstract

Unsteady locomotion and dynamic environments are two problems that hinder humanoid robots from applying visual Simultaneous Localization and Mapping (SLAM) approaches. In a humanoid robot's workspace, humans often act as both moving obstacles and targets. In this paper, we therefore propose a robust dense RGB-D SLAM approach for humanoid robots working in dynamic human environments. To deal with dynamic human objects, a deep learning-based human detector is integrated into the proposed method. After the dynamic objects are removed, we rapidly reconstruct the static environment through a dense RGB-D point cloud fusion framework. To address the humanoid robot falling problem, which usually causes visual sensing discontinuities, we further propose a novel point cloud registration-based method to relocate the robot pose, so that the robot can resume self-localization and mapping after a fall. Experimental results on both public benchmarks and real humanoid robot SLAM experiments indicate that the proposed approach outperforms state-of-the-art SLAM solutions in dynamic human environments.
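The abstract mentions a point cloud registration step for relocalizing the robot pose after a fall. As an illustrative sketch only (the paper's actual registration pipeline is not specified here), rigid alignment between a pre-fall point cloud and a newly sensed one, given point correspondences, can be estimated with the standard Kabsch/SVD method:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t so that R @ src[i] + t ~= dst[i].

    src, dst: (N, 3) arrays of corresponding 3-D points.
    Classic Kabsch/SVD alignment; real SLAM systems typically wrap a step
    like this inside ICP, which also solves for the correspondences.
    """
    src_centroid = src.mean(axis=0)
    dst_centroid = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - src_centroid).T @ (dst - dst_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # reflection case: flip the last singular vector
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_centroid - R @ src_centroid
    return R, t
```

In a relocalization setting, `dst` would be points from the stored static map and `src` points from the first frame captured after the robot stands back up; the recovered `(R, t)` gives the new camera pose relative to the map.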
