Abstract

Existing multi-sensor fusion Simultaneous Localization and Mapping (SLAM) methods struggle to maintain accuracy and performance in complex dynamic environments. To address this, a multi-sensor adaptive fusion SLAM framework based on degradation detection and deep reinforcement learning (ASLAM-FD) is proposed. The framework adaptively and collaboratively adjusts fusion weights (FWs) according to the self-degradation and relative degradation states quantified in real time for each sensor, and it can be adapted to different tightly coupled SLAM algorithms through these FWs. Within the framework, continuous quantification models for the degradation states of internal and external sensors (referred to as EX-DM and IN-DM) are further proposed; these models generalize across a variety of mainstream internal and external sensors. Building on these degradation quantification models, this paper proposes a deep reinforcement learning (DRL) network for the adaptive collaborative adjustment of FWs. The network emphasizes the temporal nature of sensor observations and degradation states, making it well suited to modeling relationships among data with temporal features. In the experimental section, the proposed ASLAM-FD is adapted to different multi-sensor fusion SLAM algorithms on multiple datasets and compared with several state-of-the-art fusion SLAM algorithms. The results show that adapting ASLAM-FD effectively improves the accuracy and performance of fusion SLAM in complex dynamic environments.
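To make the core idea concrete, the following is a minimal, purely illustrative sketch of mapping per-sensor degradation states to fusion weights. The function names, the linear blending of self and relative degradation, and the softmax normalization are assumptions for illustration only; the actual ASLAM-FD framework learns this mapping with a DRL policy over temporal sequences of degradation states.

```python
import math

def softmax(xs):
    """Numerically stable softmax, used here to normalize weights to sum to 1."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def fusion_weights(self_deg, relative_deg, alpha=0.5):
    """Hypothetical rule: down-weight each sensor by a blend of its own
    degradation score and its degradation relative to the other sensors.
    self_deg, relative_deg: per-sensor scores in [0, 1], higher = more degraded.
    """
    scores = [-(alpha * s + (1.0 - alpha) * r)
              for s, r in zip(self_deg, relative_deg)]
    return softmax(scores)

# Example: sensor 0 (e.g., LiDAR in fog) heavily degraded, sensor 1 (e.g., IMU) healthy.
weights = fusion_weights(self_deg=[0.9, 0.1], relative_deg=[0.8, 0.2])
print(weights)  # healthy sensor receives the larger fusion weight
```

In the paper's framework, this hand-tuned rule is replaced by a learned policy, but the interface is the same: degradation states in, normalized fusion weights out, which is what allows the framework to plug into different tightly coupled SLAM back ends.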