Abstract

Aiming at the difficulty that existing multi-sensor fusion Simultaneous Localization and Mapping (SLAM) technology has in ensuring accuracy and performance in complex dynamic environments, a multi-sensor adaptive fusion SLAM framework based on degradation detection and deep reinforcement learning (ASLAM-FD) is proposed. The framework adaptively and collaboratively fine-tunes fusion weights (FWs) according to the self-degradation and relative degradation states quantified in real time for each sensor, and can be adapted to different tightly coupled SLAM algorithms through these FWs. Within this framework, continuous quantification models for the degradation states of external and internal sensors (referred to as EX-DM and IN-DM, respectively) are proposed; these models offer a degree of generality and can continuously quantify the degradation states of various mainstream internal and external sensors. Building on these quantification models, the paper further proposes a deep reinforcement learning (DRL) network suited to the adaptive collaborative adjustment of FWs. The network emphasizes the temporal nature of sensor observations and degradation states, making it better suited to modeling relationships among data with temporal features. In the experiments, ASLAM-FD is adapted to different multi-sensor fusion SLAM algorithms on multiple datasets and compared with several advanced fusion SLAM algorithms. The results show that adapting ASLAM-FD effectively improves the accuracy and performance of fusion SLAM in complex dynamic environments.
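To make the fusion-weight mechanism concrete, below is a minimal sketch of how quantified degradation states could be mapped to fusion weights by a recurrent policy. This is not the authors' implementation: the sensor count, the use of a GRU, the network dimensions, and all names here are illustrative assumptions; the abstract only states that the DRL network consumes temporal degradation states and outputs collaboratively adjusted FWs.

```python
# Hypothetical sketch (assumptions labeled): a recurrent policy mapping
# per-sensor degradation-state sequences to fusion weights for a tightly
# coupled SLAM backend. ASLAM-FD's actual architecture is not given here.
import torch
import torch.nn as nn

class FusionWeightPolicy(nn.Module):
    """Maps a window of quantified degradation states to fusion weights."""
    def __init__(self, num_sensors: int, hidden_dim: int = 64):
        super().__init__()
        # A GRU captures the temporal nature of the degradation states,
        # which the abstract identifies as central to the DRL network.
        self.gru = nn.GRU(input_size=num_sensors, hidden_size=hidden_dim,
                          batch_first=True)
        self.head = nn.Linear(hidden_dim, num_sensors)

    def forward(self, degradation_seq: torch.Tensor) -> torch.Tensor:
        # degradation_seq: (batch, time, num_sensors); each entry is an
        # assumed continuous degradation quantification in [0, 1].
        _, h = self.gru(degradation_seq)
        logits = self.head(h[-1])
        # Softmax keeps the fusion weights positive and summing to one,
        # so they can rescale each sensor's contribution in the backend.
        return torch.softmax(logits, dim=-1)

# Usage example: three sensors (e.g., LiDAR, camera, IMU) over a
# 10-step window of degradation quantifications (random stand-ins).
policy = FusionWeightPolicy(num_sensors=3)
deg = torch.rand(1, 10, 3)
weights = policy(deg)  # fusion weights handed to the SLAM algorithm
print(weights)
```

In a full DRL setup, a policy of this shape would be trained with a reward reflecting localization accuracy; the reward design and training algorithm are not specified in the abstract.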
