Abstract

Visual Simultaneous Localization and Mapping (SLAM) based on RGB-D data has become a fundamental approach for robot perception over the past decades, and there is an extensive literature on RGB-D SLAM and its applications. However, most existing RGB-D SLAM methods assume that the traversed environments are static during the SLAM process, because moving objects in dynamic environments can severely degrade SLAM performance. This static-world assumption limits the applicability of RGB-D SLAM in dynamic environments. To address this problem, we propose a novel RGB-D-data-based motion removal approach and integrate it into the front end of RGB-D SLAM. The motion removal approach acts as a pre-processing stage that filters out data associated with moving objects. We conducted experiments on a public RGB-D dataset. The results demonstrate that the proposed motion removal approach effectively improves RGB-D SLAM in various challenging dynamic environments.
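The abstract does not specify how the pre-processing stage detects moving objects. As a toy illustration only of the general idea of filtering dynamic data before the SLAM front end, the sketch below flags pixels whose depth changes sharply between consecutive frames and invalidates them; the frame-differencing heuristic, function names, and threshold are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def motion_mask(prev_depth, curr_depth, threshold=0.1):
    # Flag pixels whose depth changed by more than `threshold` metres
    # between consecutive frames (a crude proxy for moving objects).
    return np.abs(curr_depth - prev_depth) > threshold

def filter_dynamic(depth, mask, invalid=0.0):
    # Invalidate depth readings on suspected moving objects so a
    # downstream SLAM front end would ignore them during matching.
    filtered = depth.copy()
    filtered[mask] = invalid
    return filtered

# Toy 2x2 depth frames (metres): one pixel jumps from 2.0 m to 1.0 m.
prev = np.array([[2.0, 3.0], [4.0, 5.0]])
curr = np.array([[1.0, 3.0], [4.0, 5.0]])
mask = motion_mask(prev, curr)
clean = filter_dynamic(curr, mask)
```

In a real system the mask would come from a learned or geometric motion-segmentation step rather than simple differencing, but the pipeline shape — mask, then filter, then feed the static remainder to the SLAM front end — is the same.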
