Multi-Robot Cooperative Simultaneous Localization and Mapping Algorithm Based on Sub-Graph Partitioning.
To address the challenges in multi-robot collaborative SLAM, including excessive redundant computations and low processing efficiency in candidate loop closure selection during front-end loop detection, as well as high computational complexity and long iteration times due to global pose optimization in the back-end, this paper introduces several key improvements. First, a global matching and candidate loop selection strategy is incorporated into the front-end loop detection module, leveraging both LiDAR point clouds and visual features to achieve cross-robot loop detection, effectively mitigating computational redundancy and reducing false matches in collaborative multi-robot systems. Second, an improved distributed robust pose graph optimization algorithm is proposed in the back-end module. By introducing a robust cost function to filter out erroneous loop closures and employing a subgraph optimization strategy during iterative optimization, the proposed approach enhances convergence speed and solution quality, thereby reducing uncertainty in multi-robot pose association. Experimental results demonstrate that the proposed method significantly improves computational efficiency and localization accuracy. Specifically, in front-end loop detection, the proposed algorithm achieves an F1-score improvement of approximately 8.5-51.5% compared to other methods. In back-end optimization, it outperforms traditional algorithms in terms of both convergence speed and optimization accuracy. In terms of localization accuracy, the proposed method achieves an improvement of approximately 32.8% over other open source algorithms.
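The robust back-end idea summarized above (a robust cost function that filters out erroneous loop closures during iterative pose-graph optimization) can be illustrated with a minimal sketch. The example below is a toy 1-D pose graph, not the paper's algorithm: a Geman-McClure kernel inside iteratively reweighted Gauss-Newton suppresses a deliberately false loop closure; all names and parameters are illustrative.

```python
import numpy as np

def gm_weight(r, c=1.0):
    """IRLS weight for the Geman-McClure robust cost: near 1 for small
    residuals, close to 0 for gross outliers such as false loop closures."""
    return 1.0 / (1.0 + (r / c) ** 2) ** 2

def optimize_pose_graph(x0, edges, iters=20):
    """Robust 1-D pose-graph optimization.
    x0:    initial pose estimates (pose 0 is held fixed as the gauge).
    edges: list of (i, j, z) constraints meaning x[j] - x[i] ~ z."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(iters):
        H = np.zeros((n, n))
        b = np.zeros(n)
        for i, j, z in edges:
            r = (x[j] - x[i]) - z          # edge residual
            w = gm_weight(r)               # robust down-weighting
            H[i, i] += w; H[j, j] += w
            H[i, j] -= w; H[j, i] -= w
            b[i] += w * r; b[j] -= w * r
        H[0, :] = 0.0; H[:, 0] = 0.0; H[0, 0] = 1.0; b[0] = 0.0  # anchor pose 0
        x += np.linalg.solve(H, b)         # Gauss-Newton step
    return x

# Odometry edges, one correct loop closure (0->4), one false one (0->3).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0),
         (0, 4, 4.0), (0, 3, 0.0)]
x = optimize_pose_graph([0.0, 1.0, 2.0, 3.0, 4.0], edges)
```

Initialized from odometry, the false closure's residual is large from the start, so its weight collapses and the solution stays near the true poses; a plain least-squares solve would instead be dragged toward the outlier.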
- Research Article
5
- 10.3390/s21165522
- Aug 17, 2021
- Sensors
Currently, simultaneous localization and mapping (SLAM) is one of the main research topics in the robotics field. Visual-inertial SLAM, which combines a camera and an inertial measurement unit (IMU), can significantly improve robustness and render scale weakly observable, whereas in monocular visual SLAM scale is unobservable. For ground mobile robots, introducing a wheel speed sensor can solve the scale observability problem and improve robustness under abnormal conditions. In this paper, a multi-sensor fusion SLAM algorithm using monocular vision, inertial, and wheel speed measurements is proposed. The sensor measurements are combined in a tightly coupled manner, and a nonlinear optimization method is used to maximize the posterior probability and solve for the optimal state estimate. Loop detection and back-end optimization are added to help reduce or even eliminate the cumulative error of the estimated poses, thus ensuring global consistency of the trajectory and map. The outstanding contribution of this paper is twofold: the wheel odometer pre-integration algorithm, which combines the chassis speed and IMU angular velocity, avoids the repeated integration caused by linearization-point changes during iterative optimization; and state initialization based on the wheel odometer and IMU enables quick and reliable calculation of the initial state values required by the state estimator in both stationary and moving states. Comparative experiments were conducted in room-scale scenes, building-scale scenes, and visual-loss scenarios. The results showed that the proposed algorithm is highly accurate, with 2.2 m of cumulative error after moving 812 m (0.28%, loop closure optimization disabled), robust, and capable of effective localization even in the event of sensor loss, including visual loss. The accuracy and robustness of the proposed method are superior to those of monocular visual-inertial SLAM and traditional wheel odometry.
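The pre-integration idea highlighted above can be sketched for the planar case: chassis forward speed and IMU yaw rate are integrated once into a relative motion increment expressed in the first keyframe's body frame, so the increment does not have to be recomputed when the keyframe's absolute pose (the linearization point) changes during iterative optimization. This is a simplified 2-D illustration, not the paper's full algorithm; the function names are illustrative.

```python
import numpy as np

def preintegrate(vs, omegas, dt):
    """Pre-integrate chassis forward speeds vs and IMU yaw rates omegas
    between two keyframes, expressed in the frame of the first keyframe.
    Because the result is a relative quantity, it need not be recomputed
    when the keyframe's absolute pose estimate changes during optimization."""
    dp = np.zeros(2)   # accumulated translation in the start frame
    dtheta = 0.0       # accumulated heading change
    for v, w in zip(vs, omegas):
        c, s = np.cos(dtheta), np.sin(dtheta)
        dp += dt * v * np.array([c, s])   # body-x motion rotated into start frame
        dtheta += dt * w
    return dp, dtheta

def predict(pose, dp, dtheta):
    """Apply the pre-integrated delta to any absolute start pose (x, y, theta)."""
    x, y, th = pose
    c, s = np.cos(th), np.sin(th)
    return (x + c * dp[0] - s * dp[1],
            y + s * dp[0] + c * dp[1],
            th + dtheta)
```

`predict` can be re-evaluated cheaply at every new linearization point, which is exactly what repeated raw integration would otherwise cost.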
- Research Article
8
- 10.1155/2023/8872822
- Feb 17, 2023
- Journal of Robotics
Mobile robots are widely used in medicine, agriculture, home furnishing, and industry. Simultaneous localization and mapping (SLAM) is the working basis of mobile robots, so research on SLAM technology is both necessary and meaningful. SLAM technology involves robot mechanism kinematics, logic, mathematics, perceptual detection, and other fields; however, classifying its technical content is difficult, which has led to diverse SLAM frameworks. Among the various kinds of SLAM, visual SLAM (V-SLAM) has become a key academic research focus due to its advantages of low cost, easy installation, and a simple algorithm model. Firstly, we illustrate the superiority of V-SLAM by comparing it with other localization techniques. Secondly, we survey open-source V-SLAM algorithms and compare their real-time performance, robustness, and innovation. Then, we analyze the frameworks, mathematical models, and related basic theoretical knowledge of V-SLAM. Meanwhile, we review related works from four aspects: visual odometry, back-end optimization, loop closure detection, and mapping. Finally, we discuss future development trends and lay a foundation for researchers to expand this work. In summary, this paper classifies each module of V-SLAM in detail and provides good readability, making it one of the most comprehensive recent reviews of V-SLAM.
- Conference Article
2
- 10.1109/vtc2020-spring48590.2020.9129006
- May 1, 2020
To effectively address slow loop-closure matching and loop-closure false positives in large-scale maps, this paper proposes a loop-closure detection method that combines geomagnetic sequence search and lidar point cloud matching. By adding a geomagnetic matching algorithm to the loop-closure pipeline, the set of candidate loop-detection pose nodes is filtered, which reduces the false detections caused by high local similarity in lidar simultaneous localization and mapping (SLAM), as well as the loop false positives and mapping distortion caused by interference from the reflection and transmission of laser beams. In this work, the performance of the algorithm is verified on a lidar point cloud and geomagnetic signal dataset collected in a real environment. The experimental results show that, compared with the current product-level SLAM system Cartographer, the proposed algorithm improves loop detection speed by 31% (on data of 100 matches) and matching accuracy by 10% at 20% recall.
- Research Article
2
- 10.3390/electronics10212638
- Oct 28, 2021
- Electronics
Loop-closure detection is an essential means of reducing the accumulated errors of simultaneous localization and mapping (SLAM) systems. However, even a few false positive loop closures can seriously interfere with, and even corrupt, the back-end optimization process. For a collaborative SLAM system, which generally uses both intra-robot and inter-robot loop closures to optimize the pose graph, it is difficult to reject false positive loop closures without reliable a priori knowledge of the relative pose transformation between robots. To address this problem, this paper proposes a two-stage false positive loop-closure rejection method based on three types of consistency checks. Firstly, a multi-robot pose-graph optimization model is given which transforms the multi-robot pose optimization problem into a maximum likelihood estimation model. Then, the principle of the false positive loop-closure rejection method based on the χ2 test is presented: clustering is used to reject intra-robot false loop closures in the first stage, and a χ2 test over the largest mutually consistent loop set is constructed to reject inter-robot false loop closures in the second stage. Finally, an open dataset and synthetic data are used to evaluate the performance of the algorithms. The experimental results demonstrate that our method improves the accuracy and robustness of back-end pose-graph optimization, with a strong ability to reject false positive loop closures, while remaining insensitive to the initial pose. On the Computer Science and Artificial Intelligence Lab (CSAIL) dataset, the absolute position error is reduced by 55.37% compared to the dynamic scaling covariance method, and the absolute rotation error is reduced by 77.27%; on the city10,000 synthetic dataset, the absolute position error is reduced by 89.37% compared to pairwise consistency maximization (PCM), and the absolute rotation error is reduced by 97.9%.
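The χ2-test stage described above can be illustrated with a minimal gate: a loop closure is kept only if the Mahalanobis distance of its residual (measured relative pose minus the pose-graph prediction) falls below the χ2 critical value. This sketch shows the single-edge gate only, not the paper's clustering or largest-consistent-set construction; the names and the 3-DoF threshold are illustrative.

```python
import numpy as np

# 95% chi-square critical value for 3 DoF (x, y, yaw), a standard table value.
CHI2_95_3DOF = 7.815

def chi2_consistent(residual, cov, thresh=CHI2_95_3DOF):
    """Accept a loop closure if the Mahalanobis distance of its residual
    (measured minus predicted relative pose) passes the chi-square gate."""
    r = np.asarray(residual, dtype=float)
    m2 = r @ np.linalg.solve(np.asarray(cov, dtype=float), r)
    return m2 <= thresh

def filter_loops(loops):
    """loops: list of (residual, covariance). Return indices of the consistent ones."""
    return [k for k, (r, S) in enumerate(loops) if chi2_consistent(r, S)]
```

A gross outlier (a residual of a metre or more against centimetre-level covariance) produces a Mahalanobis distance orders of magnitude above the threshold and is rejected immediately.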
- Research Article
5
- 10.3390/s22239362
- Dec 1, 2022
- Sensors
SLAM (simultaneous localization and mapping) is mainly composed of five parts: sensor data reading, front-end visual odometry, back-end optimization, loop closure detection, and map building. When visual SLAM relies on visual odometry alone for pose estimation, cumulative drift inevitably occurs. Loop closure detection is used in classical visual SLAM, but if no loop closure is detected during operation, the pose trajectory cannot be corrected. Therefore, to address the cumulative drift problem of visual SLAM, this paper adds an Indoor Positioning System (IPS) to the back-end optimization of visual SLAM and uses a two-label orientation method to estimate the heading angle of the mobile robot, outputting pose information consisting of position and heading angle. This pose is added to the optimization as an absolute constraint, providing global constraints for the optimization of the pose trajectory. We conducted experiments on the AUTOLABOR mobile robot, and the results show that the localization accuracy of the SLAM back-end optimization algorithm fused with IPS can be maintained between 0.02 m and 0.03 m, which meets the requirements of indoor localization, and that no cumulative drift occurs even without loop closure detection, which solves the cumulative drift problem of the visual SLAM system to some extent.
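The two-label orientation idea above reduces to simple geometry: with two IPS tags mounted at known points on the robot (say one forward, one aft), the heading is the bearing of the vector between the measured tag positions, and the position can be taken as their midpoint. A minimal sketch under that assumption; the tag layout and names are illustrative, not the paper's exact formulation.

```python
import math

def pose_from_two_tags(front_xy, rear_xy):
    """Robot pose (x, y, heading) from two IPS tag positions: the heading
    is the bearing from the rear tag to the front tag, and the position
    is taken as the midpoint between the tags."""
    fx, fy = front_xy
    rx, ry = rear_xy
    heading = math.atan2(fy - ry, fx - rx)
    return ((fx + rx) / 2.0, (fy + ry) / 2.0, heading)
```

The resulting (x, y, heading) triple is what would enter the back-end as an absolute pose constraint.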
- Research Article
23
- 10.1016/j.ast.2019.105619
- Dec 6, 2019
- Aerospace Science and Technology
Real-time measurement and estimation of the 3D geometry and motion parameters for spatially unknown moving targets
- Supplementary Content
43
- 10.3390/s22124582
- Jun 17, 2022
- Sensors (Basel, Switzerland)
With the significant increase in demand for artificial intelligence, environmental map reconstruction has become a research hotspot for obstacle avoidance navigation, unmanned operations, and virtual reality. The quality of the map plays a vital role in positioning, path planning, and obstacle avoidance. This review starts with the development of SLAM (Simultaneous Localization and Mapping) and proceeds to a review of V-SLAM (visual SLAM) from its proposal to the present, with a summary of its historical milestones. In this context, the five parts of the classic V-SLAM framework (visual sensor, visual odometry, back-end optimization, loop detection, and mapping) are explained separately. Meanwhile, the details of the latest methods are presented, and VI-SLAM (visual-inertial SLAM) is reviewed and extended. The four critical techniques of V-SLAM and their technical difficulties are summarized as feature detection and matching, selection of keyframes, uncertainty handling, and map representation. Finally, the development directions and needs of the V-SLAM field are proposed.
- Research Article
2
- 10.1108/ir-07-2023-0145
- Sep 19, 2023
- Industrial Robot: the international journal of robotics research and application
Purpose: The light detection and ranging sensor has been widely deployed for simultaneous localization and mapping (SLAM) because of its remarkable accuracy, but obvious drift and large accumulated errors are inevitable when using SLAM. The purpose of this study is to alleviate the accumulated error and drift that arise during mapping. Design/methodology/approach: A novel light detection and ranging SLAM system is introduced based on the Normal Distributions Transform and dynamic Scan Context with a switch. Pose-graph optimization is used as the back-end optimization module, and loop closure detection is performed only when the path satisfies the loop-closure condition. Findings: The proposed algorithm is competitive with current approaches in terms of accumulated error and drift distance. Further, supplementary to the place recognition usually performed for loop detection, the authors introduce a novel dynamic constraint that accounts for the change in the robot's direction over the trajectory between corresponding frames, which helps avoid potential misidentifications and improves efficiency. Originality/value: The proposed system is based on the Normal Distributions Transform and dynamic Scan Context with a switch; pose-graph optimization serves as the back-end optimization module, and loop closure detection is performed only when the path satisfies the loop-closure condition.
- Research Article
32
- 10.3390/s19245419
- Dec 9, 2019
- Sensors (Basel, Switzerland)
Reducing the cumulative error in simultaneous localization and mapping (SLAM) has long been a central problem. In this paper, to improve the localization and mapping accuracy of ground vehicles, we propose a novel optimized lidar odometry and mapping method using ground plane constraints and SegMatch-based loop detection. We use only the lidar point cloud to estimate the pose between consecutive frames, without any other sensors such as the Global Positioning System (GPS) or an inertial measurement unit (IMU). Firstly, ground plane constraints are used to reduce matching errors. Then, starting from the more accurate lidar odometry obtained from lidar odometry and mapping (LOAM), SegMatch performs segment matching and loop detection to optimize the global pose; a neighborhood search is also used to accomplish the loop detection task in case of failure. Finally, the proposed method was evaluated and compared with existing 3D lidar SLAM methods. Experimental results showed that the proposed method achieves low-drift localization and dense 3D point cloud map construction.
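A ground-plane constraint of the kind used above typically starts from a plane fit to the point cloud; the fitted plane can then constrain height, roll, and pitch drift between frames. Below is a generic RANSAC plane-fit sketch, not the paper's implementation; the function name and parameters are illustrative.

```python
import numpy as np

def fit_ground_plane(points, iters=100, thresh=0.05, seed=0):
    """RANSAC plane fit usable as a ground constraint: returns a unit
    normal n and offset d with n.p + d ~ 0 for ground points, plus the
    inlier mask. points is an (N, 3) array of lidar returns."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(iters):
        # Hypothesize a plane from three random points.
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(n)
        if norm < 1e-9:          # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p1
        inliers = np.abs(points @ n + d) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best = inliers, (n, d)
    return best[0], best[1], best_inliers
```

On a typical automotive scan the ground dominates the lower part of the cloud, so the consensus plane is the ground; its normal then supplies the roll/pitch/height reference for the odometry constraint.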
- Research Article
- 10.1108/sr-07-2024-0615
- Oct 3, 2024
- Sensor Review
Purpose: To address the low localization and mapping accuracy, as well as the map ghosting and drift, encountered in indoor degraded environments using light detection and ranging simultaneous localization and mapping (LiDAR SLAM), a real-time localization and mapping system integrating filtering and graph optimization theory is proposed. By incorporating filtering algorithms, the system effectively reduces localization errors and environmental noise. In addition, leveraging graph optimization theory, it optimizes the poses throughout the SLAM process, further enhancing map accuracy and consistency. This study resolves common problems such as map ghosting and drift, thereby achieving more precise real-time localization and mapping results. Design/methodology/approach: The system consists of three main components: point cloud data preprocessing, tightly coupled inertial odometry based on filtering, and back-end pose graph optimization. First, point cloud preprocessing uses the random sample consensus algorithm to segment the ground and extract ground model parameters, which are then used to construct ground constraint factors in the back-end optimization. Second, the front-end tightly coupled inertial odometry uses iterated error-state Kalman filtering, where the LiDAR odometry serves as the observation and the inertial measurement unit preintegration results serve as the prediction; the filtering fusion yields a more accurate LiDAR-inertial odometry. Finally, the back end incorporates graph optimization theory, introducing loop closure factors, ground constraint factors, and odometry factors from frame-to-frame matching as constraints. This forms a factor graph that optimizes the map's poses. The loop closure factor uses an improved Scan Context-based loop closure detection algorithm for place recognition, reducing the rate of environmental misidentification. Findings: A SLAM system integrating filtering and graph optimization has been proposed, demonstrating improvements of 35.3%, 37.6%, and 40.8% in localization and mapping accuracy compared to A-LOAM, LeGO-LOAM (lightweight and ground-optimized lidar odometry and mapping), and LIO-SAM (LiDAR-inertial odometry via smoothing and mapping), respectively. The system exhibits enhanced robustness in challenging environments. Originality/value: This study introduces a front-end laser-inertial odometry method tightly coupled through filtering and a back-end graph optimization method improved by loop closure detection. This approach demonstrates superior robustness in indoor localization and mapping accuracy.
- Conference Article
61
- 10.23919/chicc.2019.8866200
- Jul 1, 2019
In this work, we tested simultaneous localization and mapping (SLAM) for mobile robots in an indoor environment, where all experiments were conducted based on the Robot Operating System (ROS). An urban search and rescue (USAR) environment was built in the ROS simulation tool Gazebo, and our car was used to test Hector SLAM in Gazebo. The RPLIDAR A1 single-line lidar was used to acquire 2D laser scan matching data in the practical experiments, and the indoor map was built using the open-source GMapping, Karto SLAM, and Hector SLAM packages for indoor SLAM, which produce indoor grid maps in the ROS graphical tool RViz. The experimental results of the three open-source algorithms show that simultaneous localization and mapping with a mobile robot is feasible and that high-precision grid maps can be constructed.
- Research Article
29
- 10.1109/lra.2022.3185385
- Oct 1, 2022
- IEEE Robotics and Automation Letters
In recent years, longwave infrared (LWIR) cameras have shown potential for visual simultaneous localization and mapping (SLAM) research, since the delivered thermal images provide information beyond the visible spectrum and are robust to environmental illumination. However, due to modality differences, SLAM methods designed for visible cameras cannot be directly applied to thermal data. In this paper, we propose a thermal-inertial SLAM method for all-day autonomous systems. To overcome the challenge of thermal data association, the proposed method introduces several improvements, including singular-value-decomposition-based (SVD-based) image processing and the ThermalRAFT tracking method. Based on the characteristics of thermal images, the SVD-based image processing method exploits the fixed noise pattern of thermal images and enhances image quality to improve the performance of subsequent steps, including thermal feature extraction and loop detection. To achieve real-time and robust feature tracking, we develop ThermalRAFT, an efficient optical flow network with iterative optimization. Moreover, the system introduces a bag-of-words-based loop detection method to maintain global consistency in long-term operation. The experimental results demonstrate that the proposed method provides competitive performance in indoor and outdoor environments and is robust under challenging illumination conditions.
- Research Article
1
- 10.13374/j.issn2095-9389.2020.11.09.006
- Jun 25, 2021
- Chinese Journal of Engineering (工程科学学报)
The simultaneous localization and mapping (SLAM) technique is an important research direction in robotics. Although traditional SLAM has reached a high level of real-time performance, major shortcomings remain in its positioning accuracy and robustness. Using traditional SLAM, a geometric environment map can be constructed that satisfies the pose estimation needs of robots; however, the interactive capability of such a map is insufficient to support a robot in autonomous navigation and obstacle avoidance. One popular practical direction is to add semantic information by combining deep learning methods with SLAM. Systems that introduce environmental semantic information are referred to as semantic SLAM systems. The introduction of semantic information is of great significance for improving the positioning performance of a robot, strengthening the robustness of the robot system, and improving the robot's scene-understanding ability. Semantic information improves recognition accuracy in complex scenes, which provides additional optimization conditions for odometry, pose estimation, loop detection, and so on, thereby improving positioning accuracy and robustness. Moreover, semantic information promotes data association from the traditional pixel level to the object level, so that perceived geometric environmental information can be assigned semantic tags to obtain a high-level semantic map; this helps a robot understand its environment autonomously and supports human-computer interaction. This paper summarizes the latest research applying semantic information to SLAM, and discusses the prominent achievements of combining semantics with traditional visual SLAM for localization and mapping. In addition, semantic SLAM is compared with traditional SLAM in detail. Finally, future research topics in advanced semantic SLAM are explored. This study aims to serve as a guide for future researchers applying semantic information to localization and mapping problems.
- Research Article
220
- 10.1109/tits.2021.3063477
- Mar 18, 2021
- IEEE Transactions on Intelligent Transportation Systems
Simultaneous localization and mapping (SLAM) is a fundamental building block of the indoor navigation system for most autonomous vehicles and robots. SLAM aims at building a globally consistent map of the environment while simultaneously determining the position and orientation of the robot in this map. Significant advances have been made in visual SLAM techniques in the past several years. However, owing to its fragile performance when tracking feature points in environments that lack texture, e.g., a warehouse with blank white walls, visual SLAM can hardly provide reliable localization. Compared with visual SLAM, LiDAR SLAM can often provide more robust localization in indoor environments by directly using the 3D spatial information captured in LiDAR point clouds. Thus, LiDAR SLAM techniques are often employed in industrial applications such as automated guided vehicles (AGVs). In the past decades, a number of LiDAR SLAM methods have been proposed; however, the strengths and weaknesses of the various LiDAR SLAM methods are not clear, which may perplex researchers and engineers. In this article, analyses and comparisons are made of different LiDAR SLAM-based indoor navigation methods, and extensive experiments are conducted to evaluate their performance in real environments. The comparative analysis and results can help researchers in academia and industry to construct a suitable LiDAR SLAM system for indoor navigation in their own usage scenarios.
- Research Article
- 10.54254/2755-2721/35/20230367
- Feb 4, 2024
- Applied and Computational Engineering
Simultaneous localization and mapping (SLAM) stands as a vital technology for the automatic control of robots. Vision-based multi-robot collaborative SLAM technology is particularly noteworthy in this domain: visual SLAM uses cameras as the main sensor, which offers easy access to environmental information and convenient installation, while multi-robot systems offer high efficiency, fault tolerance, and precision, allowing them to work in complex environments while maintaining mapping efficiency, which can be challenging for a single robot. This paper introduces the principles and common methods of visual SLAM, as well as the main algorithms of multi-robot collaborative SLAM. It analyzes the main problems in current multi-robot collaborative visual SLAM technology: multi-robot SLAM task allocation, map fusion, and back-end optimization. It then lists different solutions and analyzes their advantages and disadvantages. In addition, this paper introduces future research prospects of multi-robot collaborative visual SLAM technology, aiming to provide a reference direction for subsequent research in related fields.