Multi-sensor fusion technology is widely deployed for the autonomous navigation and control of robots. In theory, adding sensors yields richer information about the surrounding environment and obstacles. In practice, however, a large number of sensors not only raises the hardware cost of the system but also increases the difficulty and cost of data processing, leading to unnecessary resource consumption. To address this problem, the robot's navigation space is first partitioned according to the importance of each region and the characteristics of the sensors: non-core regions use a single lidar sensor, while core regions use "lidar+" multi-sensor fusion. This partitioning reduces the number of sensors involved in fusion while improving fusion performance. Second, because deep learning places high demands on the device's computing capacity, a dynamic load regulation mechanism is introduced into the multi-sensor fusion algorithm, addressing the problem at the algorithm level: the strengths of deep learning in feature extraction and related tasks are retained while the performance requirements of the device and the overall power consumption of the system are reduced. Finally, experiments on a ROS platform verify that the robot can navigate autonomously in complex and varied home scenes.
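The partitioning scheme itself is not specified here, but the core idea, switching sensor sets by region, can be sketched as follows. This is a minimal illustration under assumed details: the Region class, the rectangular region model, and the particular "lidar+" sensor set (depth camera plus IMU) are all hypothetical choices, not the paper's definitions.

    from dataclasses import dataclass

    @dataclass
    class Region:
        """Axis-aligned navigation region with an importance flag."""
        x_min: float
        y_min: float
        x_max: float
        y_max: float
        core: bool  # True -> "lidar+" fusion, False -> lidar only

        def contains(self, x: float, y: float) -> bool:
            return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

    def active_sensors(regions: list[Region], x: float, y: float) -> list[str]:
        """Return the sensor set for the robot's current pose.

        Core regions fuse lidar with extra sensors ("lidar+");
        non-core regions fall back to the single lidar.
        """
        for region in regions:
            if region.contains(x, y) and region.core:
                return ["lidar", "depth_camera", "imu"]  # hypothetical "lidar+" set
        return ["lidar"]

    # Example: a cluttered kitchen (core) vs. an open hallway (non-core).
    regions = [
        Region(0.0, 0.0, 4.0, 3.0, core=True),    # kitchen
        Region(4.0, 0.0, 10.0, 2.0, core=False),  # hallway
    ]
    print(active_sensors(regions, 1.5, 1.0))  # ['lidar', 'depth_camera', 'imu']
    print(active_sensors(regions, 6.0, 1.0))  # ['lidar']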
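The dynamic load regulation mechanism is likewise only named, not defined, in this summary. One plausible policy is to throttle the fusion update rate when the device is under load; the sketch below does exactly that. The thresholds, the full/low rates, and the use of the Unix load average via os.getloadavg() are assumptions for illustration, not the paper's mechanism.

    import os

    # Hypothetical load thresholds (1-minute load average).
    HIGH_LOAD = 2.0  # at or above this, drop to the low-rate fallback
    LOW_LOAD = 1.0   # at or below this, run fusion at full rate

    def fusion_rate_hz(full_rate: float = 10.0, low_rate: float = 2.0) -> float:
        """Pick the sensor-fusion update rate from the current CPU load."""
        load_1min, _, _ = os.getloadavg()  # Unix-only
        if load_1min >= HIGH_LOAD:
            return low_rate   # shed load: fewer fusion updates per second
        if load_1min <= LOW_LOAD:
            return full_rate  # headroom: full-rate deep-learning fusion
        # Linearly interpolate between the two rates in between.
        span = (HIGH_LOAD - load_1min) / (HIGH_LOAD - LOW_LOAD)
        return low_rate + span * (full_rate - low_rate)

    # Usage: query the rate before each fusion cycle and sleep accordingly.
    rate = fusion_rate_hz()
    print(f"running fusion at {rate:.1f} Hz")

Regulating the update rate rather than the sensor set keeps the core-region "lidar+" fusion intact while still bounding compute and power draw, which matches the abstract's goal of solving the problem at the algorithm level.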