Abstract
Visual Simultaneous Localization and Mapping (SLAM) based on RGB-D data has developed into a fundamental approach for robot perception over the past decades, and there is an extensive literature on RGB-D SLAM and its applications. However, most existing RGB-D SLAM methods assume that the traversed environments are static during the SLAM process, because moving objects in dynamic environments can severely degrade SLAM performance. This static-world assumption limits the applicability of RGB-D SLAM in dynamic environments. To address this problem, we propose a novel motion removal approach based on RGB-D data and integrate it into the front end of RGB-D SLAM. The motion removal approach acts as a pre-processing stage that filters out data associated with moving objects. We conducted experiments on a public RGB-D dataset. The results demonstrate that the proposed motion removal approach effectively improves RGB-D SLAM in various challenging dynamic environments.
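The abstract does not give implementation details of the pre-processing stage. As a rough illustration only, assuming the motion removal step produces a per-pixel boolean motion mask (a hypothetical interface, not the paper's actual method), filtering dynamic measurements out of a depth frame before it reaches the SLAM front end might be sketched as:

```python
import numpy as np

def remove_moving_objects(depth, motion_mask):
    """Invalidate depth pixels flagged as moving (illustrative sketch only).

    depth:       HxW depth map in meters.
    motion_mask: HxW boolean array, True where motion was detected.
    Returns a copy of the depth map with dynamic pixels set to 0.0,
    a conventional "invalid depth" value that downstream SLAM ignores.
    """
    filtered = depth.copy()
    filtered[motion_mask] = 0.0  # drop measurements on moving objects
    return filtered

# Toy 2x2 frame: the pixel at (0, 1) is flagged as moving.
depth = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
mask = np.array([[False, True],
                 [False, False]])
print(remove_moving_objects(depth, mask))  # the flagged pixel is zeroed
```

Only the static pixels survive, so pose estimation and mapping in the front end operate on data consistent with a static scene.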