Abstract

Actively exploring unknown indoor environments with an RGB-D camera is a challenging task, especially when feature-based visual simultaneous localization and mapping (vSLAM) methods are used. Low-texture scenes, such as narrow corridors and white walls, may cause feature tracking to fail frequently. To avoid tracking failure while ensuring adequate exploration, a novel active vSLAM framework with camera view planning based on generative adversarial imitation learning (GAIL) is proposed to actively adjust the orientation of the camera during robot motion. First, the Oriented FAST and Rotated BRIEF SLAM2 (ORB-SLAM2) method is modified to reconstruct a three-channel navigation map that contains information about obstacles and explored areas. Second, a large number of human view-planning behaviors are collected in different indoor environments as expert demonstrations. Finally, to make the robot imitate human search behavior, the structures of the actor, critic, and discriminator networks are designed, and GAIL is used to train the camera view planning policy. Simulation results on a public dataset of indoor environments show that the proposed GAIL-SLAM framework improves the exploration coverage ratio of unknown environments by an average of 53.08 percentage points (34.21% for the traditional method vs. 87.29% for the proposed one). Meanwhile, the number of effective exploration steps before tracking failure occurs increases by 405% on average (73 for the traditional method vs. 369 for the proposed one), indicating that the rate of tracking failure is effectively reduced.
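
For reference, GAIL (Ho and Ermon, 2016) trains the policy adversarially against a discriminator via the minimax objective below. This is the standard GAIL formulation rather than this paper's specific loss, and the entropy weight $\lambda$ is a generic hyperparameter:

$$
\min_{\pi}\max_{D}\;
\mathbb{E}_{\pi}\!\left[\log D(s,a)\right]
+ \mathbb{E}_{\pi_{E}}\!\left[\log\!\left(1 - D(s,a)\right)\right]
- \lambda H(\pi)
$$

where $\pi$ is the learned camera view planning policy, $\pi_{E}$ is the expert policy implied by the human demonstrations, $D$ is the discriminator scoring state-action pairs, and $H(\pi)$ is the causal entropy of the policy. In practice, the actor is updated with a surrogate reward derived from the discriminator (e.g., $-\log D(s,a)$), so the policy is pushed toward state-action distributions the discriminator cannot distinguish from the expert's.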
