Articles published on Simultaneous Localization And Mapping
- Research Article
- 10.1038/s41598-025-23281-8
- Nov 7, 2025
- Scientific reports
- Khaled Oqda + 11 more
Remotely Operated Underwater Vehicles (ROUVs) are increasingly important for high-resolution surveying of narrow shipping lanes. This paper presents 3Clifs, a LiDAR- and AI-enhanced ROUV designed for near-real-time topographic mapping and navigation support in shallow, constrained waterways, demonstrated at three cliff sites in the Suez Canal. The system integrates three-dimensional Light Detection and Ranging (LiDAR) scanning, an Inertial Measurement Unit (IMU), and an onboard processor running the Robot Operating System (ROS) for Simultaneous Localization and Mapping (SLAM). To address data loss from underwater LiDAR (caused by scattering and reflection), we introduce an AI-driven optimisation module that reconstructs missing point cloud data and improves SLAM continuity. We also report propulsion and propeller design changes (propeller v05_1) that reduce flow turbulence and improve scan stability. We compare our approach to sonar-only ROUV mapping and to recent ROUV/LiDAR studies using metrics including point-cloud completeness, SLAM continuity, and navigation-path deviation. The main contributions are: (i) an integrated LiDAR, ROS and AI pipeline for underwater SLAM with missing-point recovery; (ii) a propulsion configuration optimized for LiDAR scanning stability; and (iii) a real-world Suez Canal case study demonstrating practical benefits for narrow-lane navigation.
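The abstract does not detail the AI-driven recovery module, but the missing-return problem it targets can be illustrated with a classical baseline: inpainting dropped returns in a LiDAR range image from valid neighbours. A minimal sketch, assuming the scan is organised as a 2D range image with zeros marking returns lost to scattering (function and parameter names are illustrative, not the paper's):

```python
import numpy as np

def inpaint_range_image(ranges, invalid=0.0, window=2):
    """Fill dropped LiDAR returns by averaging valid neighbours.

    ranges : 2D array (rows x cols) of range measurements; `invalid`
    marks returns lost underwater to scattering and reflection.
    """
    filled = ranges.copy()
    rows, cols = ranges.shape
    for r in range(rows):
        for c in range(cols):
            if ranges[r, c] != invalid:
                continue
            r0, r1 = max(0, r - window), min(rows, r + window + 1)
            c0, c1 = max(0, c - window), min(cols, c + window + 1)
            patch = ranges[r0:r1, c0:c1]
            valid = patch[patch != invalid]
            if valid.size:                  # only fill if neighbours exist
                filled[r, c] = valid.mean()
    return filled
```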
- Research Article
- 10.5194/isprs-archives-xlviii-1-w5-2025-177-2025
- Nov 5, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Junnan Zhang + 4 more
Abstract. Combining traditional Simultaneous Localization and Mapping (SLAM) with deep learning techniques leverages the strengths of machine learning in feature extraction and matching, thereby enhancing SLAM performance in UAV-based aerial RGB imagery scenarios. The core contribution of this study lies in upgrading the front-end of ORB-SLAM3 by adopting deep learning-based features (SuperPoint) and a matcher (SuperGlue), replacing its original ORB feature extraction and matching modules. Experimental results demonstrate that, compared to classical handcrafted features, deep learning-based feature matching achieves higher robustness and accuracy in UAV SLAM tasks. Overall, the proposed method outperforms traditional SLAM approaches in both accuracy and robustness.
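For readers unfamiliar with the swap, the structure of such a front-end replacement looks roughly like the sketch below. The `superpoint`/`superglue` imports and their dict-based call conventions are hypothetical wrappers modelled on the public reference implementations, not the paper's code:

```python
import torch

# Hypothetical imports: thin wrappers around the public SuperPoint and
# SuperGlue reference models (exact constructor arguments and output
# keys are assumptions).
from superpoint import SuperPoint
from superglue import SuperGlue

class DeepFrontEnd:
    """Drop-in replacement for ORB extraction + matching in a SLAM front-end."""

    def __init__(self, device="cuda"):
        self.device = device
        self.extractor = SuperPoint().eval().to(device)
        self.matcher = SuperGlue().eval().to(device)

    @torch.no_grad()
    def match(self, img0, img1):
        # img0/img1: grayscale float tensors in [0, 1], shape (1, 1, H, W).
        f0 = self.extractor({"image": img0.to(self.device)})
        f1 = self.extractor({"image": img1.to(self.device)})
        out = self.matcher({**{k + "0": v for k, v in f0.items()},
                            **{k + "1": v for k, v in f1.items()},
                            "image0": img0, "image1": img1})
        matches = out["matches0"][0]      # index into image-1 keypoints, -1 = none
        valid = matches > -1
        return f0["keypoints"][0][valid], f1["keypoints"][0][matches[valid]]
```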
- Research Article
- 10.1080/15583058.2025.2584102
- Nov 5, 2025
- International Journal of Architectural Heritage
- Ahmed Kamal Hamed Dewedar + 2 more
ABSTRACT This study evaluates the quality of point clouds generated by Simultaneous Localization and Mapping (SLAM) through empirical tests in a laboratory and a large church with complex geometries. SLAM-generated point clouds were compared with those from Terrestrial Laser Scanners (TLS) using CloudCompare software, analyzing noise parameters like C2C distance, standard deviation, and Root Mean Square Error (RMSE). Statistical analysis and filtering techniques ensured accuracy assessment. Results showed that SLAM effectively reconstructed simple geometries but had gaps in complex architectural details. Increasing scan iterations improved accuracy, with two scans providing an optimal balance. Noise was most pronounced on the ceiling and floor but was effectively reduced by the Statistical Outlier Removal (SOR) filter, though multiple filtering iterations had diminishing returns. Comparative analysis confirmed SLAM’s reliability in structured environments, though sensor upgrades are necessary for capturing intricate surfaces. Overall, while SLAM is effective for simple settings, improvements are needed for more complex reconstructions.
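The noise-analysis workflow the authors describe (C2C distance, RMSE, SOR filtering) maps directly onto standard Open3D calls. A minimal sketch, with file names and parameter values as placeholder assumptions:

```python
import numpy as np
import open3d as o3d

# Hypothetical file names for the SLAM scan and the TLS reference.
slam_pcd = o3d.io.read_point_cloud("slam_scan.ply")
tls_pcd = o3d.io.read_point_cloud("tls_reference.ply")

# Statistical Outlier Removal: drop points whose mean neighbour distance
# deviates more than std_ratio sigmas from the global average.
filtered, kept_idx = slam_pcd.remove_statistical_outlier(
    nb_neighbors=20, std_ratio=2.0)

# Cloud-to-cloud (C2C) distances against the TLS reference.
d = np.asarray(filtered.compute_point_cloud_distance(tls_pcd))
print(f"mean C2C = {d.mean():.4f} m, std = {d.std():.4f} m, "
      f"RMSE = {np.sqrt((d ** 2).mean()):.4f} m")
```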
- Research Article
- 10.5194/isprs-archives-xlviii-1-w5-2025-77-2025
- Nov 5, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Antonio Gualtiero Mainardi + 2 more
Abstract. Indoor Mobile Mapping Systems (iMMS) estimate their trajectory through the SLAM (Simultaneous Localization and Mapping) algorithm, which requires the surveyed environment to have well-varied geometry: assuming a stable environment, SLAM tracks changes in the device’s position relative to a landscape of fixed elements and geometries surrounding it. iMMS can operate in outdoor environments and in mixed indoor/outdoor situations, but SLAM systems are known to suffer significant geometric drift in trajectory estimation. One commonly adopted mitigation is to enforce closed survey trajectories. Another is to introduce constraints in the form of control scans or control points; control vertices typically consist of points whose coordinates are physically measured in the field by the operator, by placing the tip of a measuring pole on them. While in indoor applications control vertices are generally measured with a total station, in outdoor applications they can also be measured in a GNSS campaign. It is therefore increasingly necessary to develop easy and accurate integration between iMMS and GNSS receivers to enhance the efficiency of SLAM-based mobile systems in outdoor environments, enabling high-throughput surveys. This article presents the results of such an integration, providing guidelines on the most efficient operational methods for introducing these constraints. The contribution details the hardware design, the electronic integration, and the development of an application that applies a rigorous cartographic approach within the limits of the available technologies.
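The effect of a surveyed control vertex on a drifting trajectory can be sketched as one extra residual in a least-squares adjustment. A toy 2D illustration with synthetic data (weights, dimensions, and the pinned vertex are illustrative, not the authors' formulation):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy 2D trajectory adjustment: odometry increments accumulate drift,
# while a single GNSS-measured control point pins one vertex.
N = 50
odo = np.tile([1.0, 0.0], (N - 1, 1))           # measured step vectors
ctrl_idx, ctrl_xy = 25, np.array([25.0, 0.5])   # control vertex from GNSS

def residuals(x):
    p = x.reshape(N, 2)
    r_odo = (p[1:] - p[:-1] - odo).ravel()      # odometry factors
    r_ctrl = 10.0 * (p[ctrl_idx] - ctrl_xy)     # strongly weighted control point
    r_gauge = p[0]                               # fix the first pose at the origin
    return np.concatenate([r_odo, r_ctrl, r_gauge])

sol = least_squares(residuals, np.zeros(2 * N))
trajectory = sol.x.reshape(N, 2)                 # adjusted vertex positions
```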
- Research Article
- 10.5194/isprs-archives-xlviii-1-w5-2025-147-2025
- Nov 5, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Weitong Wu + 5 more
Abstract. LiDAR-based simultaneous localization and mapping (SLAM) plays a crucial role in applications such as search and rescue, infrastructure inspection, and underground exploration. However, conventional LiDAR-based methods often exhibit significantly reduced accuracy in degenerate environments. To address this challenge, this paper proposes a simple yet effective linear continuous-time FMCW (Frequency-Modulated Continuous Wave) LiDAR odometry method that tightly integrates Doppler constraints and point-to-plane constraints within a sliding-window-based factor graph optimization framework. The proposed method is comprehensively validated using datasets collected from a vehicle equipped with an Aeva I FMCW LiDAR in both typically degenerate scenes and highway scenarios. Experimental results demonstrate that the proposed method achieves the lowest trajectory root mean square error (RMSE) on three of the eight sequences, outperforming all compared methods. Notably, on Sequence 7, which spans a trajectory of approximately 7,300 m, our method achieves a minimum trajectory RMSE of 10.19 m.
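The Doppler constraint exploited here rests on a simple geometric relation: for a static point, the measured radial speed is the projection of the sensor's velocity onto the line of sight. A minimal least-squares sketch (the sign convention is an assumption, and the full method's point-to-plane and continuous-time terms are omitted):

```python
import numpy as np

def estimate_velocity(points, radial_speeds):
    """Least-squares ego-velocity from per-point FMCW Doppler returns.

    A static point at position p (sensor frame) with measured radial
    speed v_r satisfies v_r = -unit(p) . v_sensor, so stacking all
    points gives a linear system for the 3D sensor velocity.
    """
    dirs = points / np.linalg.norm(points, axis=1, keepdims=True)
    v, *_ = np.linalg.lstsq(-dirs, radial_speeds, rcond=None)
    return v

# Synthetic check: 100 random static points seen while moving at 10 m/s.
rng = np.random.default_rng(0)
pts = rng.uniform(-50, 50, (100, 3))
v_true = np.array([10.0, 0.0, 0.0])
vr = -(pts / np.linalg.norm(pts, axis=1, keepdims=True)) @ v_true
print(estimate_velocity(pts, vr))   # ~ [10, 0, 0]
```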
- Research Article
- 10.5194/isprs-archives-xlviii-1-w5-2025-101-2025
- Nov 5, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Nazanin Padkan + 4 more
Abstract. Simultaneous Localization and Mapping (SLAM) has become a fundamental technology in various applications, including robotics, autonomous navigation, geographic information systems (GIS), and infrastructure inspection. This paper presents a new version of GuPho, a low-cost, lightweight, and portable visual SLAM-based system equipped with AI-driven capabilities for real-time mapping, object detection, and defect analysis. The system integrates stereo vision and deep learning (DL) methods to enhance spatial understanding and enable accurate real-time scene interpretation. In particular, we explore DL-based semantic segmentation, monocular depth estimation (MDE), and stereo depth estimation to improve 3D reconstruction and size measurement of cracks for infrastructure monitoring. We implement state-of-the-art neural networks, including RF-DETR and YOLO for real-time crack and window segmentation, and Depth Anything V2, Depth Pro, and Unimatch for depth estimation. Our results demonstrate the potential of GuPho as an affordable and efficient system for real-time mobile mapping and defect assessment. The real-time and AI capabilities of our in-house solution are showcased here: https://youtu.be/ATIwn4zOSFw
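As an illustration of the size-measurement step, crack width in metres follows from the pinhole relation size_m ≈ size_px · depth / fx once a segmentation mask and a depth map are available. A hedged sketch, with the function name and per-row measurement strategy illustrative rather than the paper's:

```python
import numpy as np

def crack_width_metres(mask_row, depth_row, fx):
    """Convert a crack's pixel width on one image row to metres.

    mask_row : boolean row of the segmentation mask (True = crack)
    depth_row: per-pixel depth in metres for the same row
    fx       : horizontal focal length in pixels
    Uses the pinhole relation: size_m ~= size_px * depth / fx.
    """
    cols = np.flatnonzero(mask_row)
    if cols.size < 2:
        return 0.0
    width_px = cols[-1] - cols[0]
    depth = np.median(depth_row[cols])   # robust local depth estimate
    return width_px * depth / fx
```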
- Research Article
- 10.5194/isprs-archives-xlviii-1-w5-2025-169-2025
- Nov 5, 2025
- The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Aziza Zhanabatyrova + 3 more
Abstract. The rapid evolution of urban landscapes necessitates efficient mapping solutions. Traditional high-accuracy semantic maps generated using expensive sensors and mobile mapping vehicles provide precise spatial data, but face challenges related to cost and scalability. Crowdsourced dashcam videos present a practical alternative for acquiring urban visual data, leveraging widely available and low-cost camera technology. Recent advances in photogrammetry and computer vision - such as Structure from Motion (SfM), Simultaneous Localization and Mapping (SLAM), semantic segmentation and object detection - enable the extraction of both 3D and semantic information from monocular images. Building upon previous research, we propose a pipeline for constructing and updating semantic 3D maps using crowdsourced low-cost dashcam footage, with a particular emphasis on automatic change detection. Our approach compares metadata related to urban landmarks (e.g., traffic signs) to identify modifications in cityscapes. We evaluate the robustness of the proposed approach with various sequences captured under challenging conditions, including rain, darkness and fog, comparing the performance of SfM-based and SLAM-based 3D reconstruction methods. Results show the effectiveness of the proposed low-cost methodology in localizing urban objects and changes, although accuracy needs to be improved with better georeferencing procedures.
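The landmark-based change detection the authors describe can be sketched as class-aware nearest-neighbour matching between two map epochs, with unmatched landmarks flagged as added or removed. A simplified version (the 2 m radius and flat data layout are assumptions):

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_changes(old_signs, new_signs, radius=2.0):
    """Flag added/removed landmarks between two map epochs.

    Each sign: dict with 'xyz' (georeferenced position) and 'cls' label.
    A sign is matched if a same-class sign lies within `radius` metres.
    """
    def matched(src, dst):
        if not dst:
            return [False] * len(src)
        tree = cKDTree(np.array([s["xyz"] for s in dst]))
        hits = []
        for s in src:
            idx = tree.query_ball_point(s["xyz"], radius)
            hits.append(any(dst[i]["cls"] == s["cls"] for i in idx))
        return hits

    removed = [s for s, ok in zip(old_signs, matched(old_signs, new_signs)) if not ok]
    added = [s for s, ok in zip(new_signs, matched(new_signs, old_signs)) if not ok]
    return added, removed
```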
- Research Article
- 10.5194/isprs-annals-x-1-w2-2025-223-2025
- Nov 4, 2025
- ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Zhenghua Zhang + 2 more
Abstract. LiDAR place recognition (LPR) plays a critical role in simultaneous localization and mapping (SLAM) and autonomous driving systems. However, current LPR methods exhibit significant performance degradation under rotational shifts, noise interference, point cloud sparsity, and long-term environmental changes. This limitation stems from their reliance on fixed-length global descriptors, which lack the capacity to preserve comprehensive scene information in complex scenarios. To address these challenges, we propose LPR-Mate, a lightweight universal reranking-based optimizer that enhances the robustness of existing LPR frameworks in challenging environments. LPR-Mate processes top-k retrieval candidates from baseline LPR methods through a dual-stage pipeline: (1) A fast trigger mechanism evaluates spatial consistency between query and candidate scenes, selectively activating reranking only for low-confidence matches; (2) An independent reranking network refines candidate rankings by fusing local features, global descriptors, and spatial consistency scores through group and channel attention mechanisms. Extensive experiments on the Oxford RobotCar, NUS-Inhouse, and MulRan datasets demonstrate that LPR-Mate achieves >96% recall in localization accuracy validation and delivers a 32.34% average improvement in Recall@1 under rotational shifts, sparsity, and noise perturbations, while maintaining robustness for raw point clouds and long-term scenarios. As a plug-and-play module, LPR-Mate integrates seamlessly with diverse LPR architectures—including region-sampling and sparse-voxelization-based methods—without requiring retraining or structural modifications, ensuring computational efficiency and cross-architectural universality.
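The dual-stage idea, skipping the reranker when retrieval is already confident, can be sketched in a few lines. The margin test and fusion weights below are illustrative stand-ins for the paper's learned trigger and attention-based fusion:

```python
import numpy as np

def rerank_topk(query_desc, cand_descs, cand_scores, trigger_margin=0.05):
    """Selective reranking in the spirit of a fast-trigger pipeline.

    If the top-1 candidate clearly beats the runner-up, keep the
    original retrieval ranking; otherwise rerank by fusing the
    retrieval score with descriptor cosine similarity.
    """
    order = np.argsort(-cand_scores)
    if cand_scores[order[0]] - cand_scores[order[1]] > trigger_margin:
        return order                               # confident: skip reranking
    sim = cand_descs @ query_desc / (
        np.linalg.norm(cand_descs, axis=1) * np.linalg.norm(query_desc))
    fused = 0.5 * cand_scores + 0.5 * sim          # assumed fusion weights
    return np.argsort(-fused)
```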
- Research Article
- 10.5194/isprs-annals-x-1-w2-2025-75-2025
- Nov 3, 2025
- ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Minzhe Liu + 3 more
Abstract. LiDAR-based simultaneous localization and mapping (SLAM) plays an important role in real-time localization and 3D mobile mapping for autonomous systems. However, long-term scan-to-scan matching in SLAM introduces uncertainty into the position estimate, which results in large drift. In this paper, we focus on real-time estimation of the global positioning uncertainty of LiDAR SLAM, enabling graceful weighting of LiDAR SLAM against other positioning systems in multi-sensor fusion localization. We introduce Lie group theory and the multiple fault hypothesis solution separation (MHSS) method into a Kalman-filter-based LiDAR SLAM framework. First, the scan-to-scan matching uncertainty is obtained by establishing fault hypotheses using the MHSS method. The global positioning uncertainty is then propagated on the Lie group from the scan-to-scan matching uncertainty in terms of relative position and rotation. The NCLT dataset is used to validate the proposed method. Experimental results show that, compared with previous solutions that treat the scan-to-scan matching uncertainty as a constant, the proposed method is more adaptive and robust; the real-time global positioning uncertainty estimate envelops the true SLAM absolute trajectory error (ATE) most of the time and reflects its actual trend.
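The propagation step rests on standard first-order covariance transport on SE(3) via the adjoint. A sketch under the (rotation, translation) block ordering, which is a convention the paper may define differently:

```python
import numpy as np

def skew(v):
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def adjoint_se3(R, t):
    """Adjoint of an SE(3) element, (rotation, translation) ordering."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[3:, 3:] = R
    Ad[3:, :3] = skew(t) @ R
    return Ad

def propagate(cov_global, R, t, cov_rel):
    """First-order propagation of the 6x6 pose covariance through one
    scan-to-scan increment (R, t) with matching uncertainty cov_rel."""
    Ad = adjoint_se3(R, t)
    return Ad @ cov_global @ Ad.T + cov_rel
```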
- Research Article
- 10.3390/s25216662
- Nov 1, 2025
- Sensors
- Jiwei Qu + 4 more
The application of orchard inspection robots has become increasingly widespread. However, achieving autonomous navigation in unstructured environments continues to present significant challenges. This study investigates the Simultaneous Localization and Mapping (SLAM) navigation system of an orchard inspection robot and evaluates its performance using Light Detection and Ranging (LiDAR) technology. A mobile robot that integrates tightly coupled multi-sensors is developed and implemented. The integration of LiDAR and Inertial Measurement Units (IMUs) enables the perception of environmental information. Moreover, the robot’s kinematic model is established, and coordinate transformations are performed based on the Unified Robot Description Format (URDF). The URDF facilitates the visualization of robot features within the Robot Operating System (ROS). ROS navigation nodes are configured for path planning, where an improved A* algorithm, combined with the Dynamic Window Approach (DWA), is introduced to achieve efficient global and local path planning. Comparison of the simulation results with classical algorithms demonstrates that the implemented algorithm exhibits superior search efficiency and smoothness. The robot’s navigation performance is rigorously tested, focusing on navigation accuracy and obstacle avoidance capability. Results demonstrate that, during temporary stops at waypoints, the robot exhibits an average lateral deviation of 0.163 m and a longitudinal deviation of 0.282 m from the target point. The average braking time and startup time of the robot at the four waypoints are 0.46 s and 0.64 s, respectively. In obstacle avoidance tests, optimal performance is observed with an expansion radius of 0.4 m across various obstacle sizes. The proposed combined method achieves efficient and stable global and local path planning, serving as a reference for future applications of mobile inspection robots in autonomous navigation.
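For reference, the global-planning baseline being improved is plain grid A*; a compact version is shown below (the paper's specific A* improvements and the DWA local planner are not reproduced):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = occupied),
    with a Manhattan-distance heuristic. Returns a list of cells or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, None)]
    came, seen = {}, set()
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in seen:
            continue
        seen.add(cur)
        came[cur] = parent
        if cur == goal:                          # rebuild path from parents
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None
```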
- Research Article
- 10.1016/j.measurement.2025.117904
- Nov 1, 2025
- Measurement
- Hang Yang + 5 more
NDF-SLAM: LiDAR SLAM based on neural distance field for registration and loop closure detection
- Research Article
- 10.1109/lra.2025.3609204
- Nov 1, 2025
- IEEE Robotics and Automation Letters
- Pierre-Yves Lajoie + 3 more
3D Foundation Model-Based Loop Closing for Decentralized Collaborative SLAM
- Research Article
- 10.3390/app152111673
- Oct 31, 2025
- Applied Sciences
- Jinxing Niu + 4 more
To address issues such as low visual SLAM (Simultaneous Localization and Mapping) positioning accuracy and poor map construction robustness caused by light variations, foliage occlusion, and texture repetition in unstructured orchard environments, this paper proposes an orchard robot navigation method based on an improved RTAB-Map algorithm. By integrating ORB-SLAM3 as the visual odometry module within the RTAB-Map framework, the system achieves significantly improved accuracy and stability in pose estimation. During the post-processing stage of map generation, a height filtering strategy is proposed to effectively filter out low-hanging branch point clouds, thereby generating raster maps that better meet navigation requirements. The navigation layer integrates the ROS (Robot Operating System) Navigation framework, employing the A* algorithm for global path planning while incorporating the TEB (Timed Elastic Band) algorithm to achieve real-time local obstacle avoidance and dynamic adjustment. Experimental results demonstrate that the improved system exhibits higher mapping consistency in simulated orchard environments, with the odometry’s absolute trajectory error reduced by approximately 45.5%. The robot can reliably plan paths and traverse areas with low-hanging branches. This study provides a solution for autonomous navigation in agricultural settings that balances precision with practicality.
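The described height-filtering step amounts to a band-pass on point heights before rasterising to a 2D grid, so that branches above the robot's travel envelope are not treated as obstacles. A sketch with illustrative thresholds and resolution (not the paper's tuned values):

```python
import numpy as np

def height_filter(points, z_min=0.15, z_max=1.20):
    """Keep only points within a height band before rasterising.

    Drops ground returns below z_min and low-hanging branch points
    above z_max so they do not appear as obstacles in the 2D map.
    """
    z = points[:, 2]
    return points[(z >= z_min) & (z <= z_max)]

def to_occupancy_grid(points, res=0.05, size=200):
    """Project filtered points onto a size x size occupancy grid."""
    grid = np.zeros((size, size), dtype=np.uint8)
    ij = np.floor(points[:, :2] / res).astype(int) + size // 2
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    grid[ij[ok, 1], ij[ok, 0]] = 1
    return grid
```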
- Research Article
- 10.1088/2631-8695/ae15d3
- Oct 30, 2025
- Engineering Research Express
- Shihao Gu + 5 more
Abstract In dynamic environments, moving objects introduce unstable features that significantly degrade the accuracy of simultaneous localization and mapping (SLAM) systems. To address this issue, we propose Neural-KF, a robust visual SLAM framework that integrates three key modules: (1) a modified SuperPoint network with multi-level feature fusion for reliable static keypoint extraction, (2) a YOLOv8-based dynamic object detector, and (3) a Kalman-consistent state estimation mechanism that predicts object motion trajectories to enhance temporal consistency. By associating predicted and detected bounding boxes via the Hungarian algorithm, Neural-KF achieves accurate suppression of dynamic points while preserving sufficient static features for pose estimation. Experimental evaluations on public datasets, including KITTI and EuRoC, demonstrate that Neural-KF improves absolute trajectory error by up to 28% compared to VINS-Fusion and achieves competitive accuracy against advanced dynamic SLAM systems such as DynaSLAM. Furthermore, the system maintains real-time performance (>30 FPS) with a balanced trade-off between accuracy and computational cost. These results highlight the effectiveness of Neural-KF in achieving robust and efficient visual odometry under challenging dynamic conditions.
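The bounding-box association step is standard: build an IoU cost matrix between Kalman-predicted and detected boxes and solve it with the Hungarian algorithm. A minimal sketch using SciPy (the 0.3 IoU gate is an assumed value):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(predicted, detected, min_iou=0.3):
    """Match Kalman-predicted boxes to detections via the Hungarian algorithm."""
    if not len(predicted) or not len(detected):
        return []
    cost = np.array([[1.0 - iou(p, d) for d in detected] for p in predicted])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - min_iou]
```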
- Research Article
- 10.54939/1859-1043.j.mst.iite.2025.27-34
- Oct 30, 2025
- Journal of Military Science and Technology
- Bach Nhat Hoang + 3 more
The operation of remotely operated underwater vehicles in underwater environments always faces the challenge of lacking GPS signals, leading to the accumulation of positioning errors over time. The resulting instability in motion significantly reduces the efficiency and safety of practical operations such as infrastructure inspection, seabed surveying, and search-and-rescue missions. This paper presents a Simultaneous Localization and Mapping (SLAM) method based on enhanced sonar data to improve the operational capability of underwater vehicles. The proposed algorithm fuses data from sonar with an inertial measurement unit (IMU) within an Iterated Extended Kalman Filter (IEKF) framework to optimize the vehicle's trajectory and correct for accumulated errors. By processing sonar data to extract features and then generating loop closure constraints to combine with motion estimates from odometry, the proposed model optimizes the entire trajectory of the vehicle and effectively corrects accumulated errors. The results are highly accurate pose estimates and a consistent map of the operational environment throughout the voyage. The successful implementation of this algorithm demonstrates great potential for enhancing the autonomy, reliability, and operational efficiency of underwater vehicles in practical applications.
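The IEKF's defining feature is re-linearising the measurement model at each iterate of the update. A generic sketch of that update, independent of the paper's specific sonar measurement model:

```python
import numpy as np

def iekf_update(x, P, z, h, H_jac, R, iters=5):
    """Iterated EKF measurement update.

    x, P   : prior state and covariance
    z      : measurement; h(x) predicts it; H_jac(x) is its Jacobian
    R      : measurement noise covariance
    Re-linearising h at each iterate distinguishes the IEKF from a
    single-step EKF update.
    """
    x_i = x.copy()
    for _ in range(iters):
        H = H_jac(x_i)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # Iterated form: the innovation is corrected for the shifted
        # linearisation point x_i.
        x_i = x + K @ (z - h(x_i) - H @ (x - x_i))
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_i, P_new
```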
- Research Article
- 10.5194/isprs-annals-x-2-w2-2025-125-2025
- Oct 29, 2025
- ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Matias Mäki-Leppilampi + 2 more
Abstract. This paper presents an unmanned surface vehicle (USV) equipped with a mapping system designed to map boreal freshwater environments. The proposed system fuses satellite navigation, inertial measurements, and lidar data to provide accurate and precise three-dimensional (3D) point clouds of the environment around the USV’s path. In order to achieve the required accuracy, we present several calibration methods, including a novel cost function for optimizing the rotation between the lidar and inertial frames based on accelerometer measurements and point cloud registration. In the proposed positioning method, a post-processed high-end satellite navigation and inertial fusion trajectory is used as an initial guess of the USV’s pose and for motion-compensating the lidar data. A pose-graph-based simultaneous localization and mapping (SLAM) algorithm then refines the map and trajectory, using a distribution-to-distribution variant of the normal distributions transform (NDT) to compute lidar odometry and loop closures offline after data collection. A method for rating loop closures is adopted to select which scan registration results to add to the pose graph. A factor graph is built from lidar odometry, detected loop closures, and the fused satellite navigation and inertial solution to solve for the optimal trajectory. The conducted experiment demonstrates that the proposed graph-SLAM method significantly improves the overall consistency of the resulting 3D point cloud and the absolute trajectory error (ATE) of the optimized trajectory.
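The lidar-inertial rotation calibration idea, aligning accelerometer-sensed gravity with gravity directions observed in the lidar frame (e.g. ground-plane normals), can be sketched as a small nonlinear least-squares problem. This generic gravity-alignment cost is an illustration, not the paper's exact cost function:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def calibrate_rotation(acc_imu, g_lidar):
    """Estimate the IMU-to-lidar rotation R (v_lidar = R v_imu) from
    static accelerometer readings and matching gravity directions
    observed in the lidar frame."""
    acc = acc_imu / np.linalg.norm(acc_imu, axis=1, keepdims=True)
    g = g_lidar / np.linalg.norm(g_lidar, axis=1, keepdims=True)

    def cost(rotvec):
        R = Rotation.from_rotvec(rotvec).as_matrix()
        # Residual: rotated IMU gravity vs lidar-observed gravity.
        return (g - acc @ R.T).ravel()

    sol = least_squares(cost, np.zeros(3))
    return Rotation.from_rotvec(sol.x).as_matrix()
```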
- Research Article
- 10.5194/isprs-annals-x-2-w2-2025-149-2025
- Oct 29, 2025
- ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences
- Seyed Hojat Mirtajadini + 2 more
Abstract. Visual-inertial navigation has become a cornerstone for deploying robots in diverse environments. Despite significant progress, current approaches may easily fail to deliver reliable and robust navigation for industrial applications. Therefore, evaluating these methods using various datasets under challenging operational conditions is essential to ensure safe integration into robotic platforms. As such, this paper aims to enrich the availability of navigation datasets by introducing SMARTNav, which includes raw data obtained from stereo cameras and IMU sensors mounted on both ground and aerial robots. These robots were deployed in various operational scenarios across different environments, such as greenhouses, urban streets, indoor spaces, and near-building areas. The data includes challenges of navigating in GPS-denied areas, repetitive structures, featureless environments, and adverse lighting conditions. In order to provide corresponding ground-truth for each sequence, different techniques were deployed, such as Motion Capture System, Real Time Kinematics (RTK), and dense LiDAR-based Simultaneous Localization and Mapping (SLAM). Consequently, the resulting dataset can be used to address and validate key issues in vision-based state estimation, localization, and mapping for industrial applications. The SMARTNav dataset is accessible at: https://saxionmechatronics.github.io/smartnav-dataset/.
- Research Article
- 10.3390/agriculture15212248
- Oct 28, 2025
- Agriculture
- Yang Yu + 4 more
To address the issues of signal loss and insufficient accuracy of traditional GNSS (Global Navigation Satellite System) navigation in agricultural machinery sheds and farm access road environments, this paper proposes a high-precision mapping method for such complex environments and a real-time localization system for agricultural vehicles. First, an autonomous navigation system was developed by integrating multi-sensor data from LiDAR (Light Detection and Ranging), GNSS, and IMU (Inertial Measurement Unit), with functional modules for mapping, localization, planning, and control implemented within the ROS (Robot Operating System) framework. Second, an improved LeGO-LOAM algorithm is introduced for constructing maps of machinery sheds and farm access roads. The mapping accuracy is enhanced through reflectivity filtering, ground constraint optimization, and ScanContext-based loop closure detection. Finally, a localization method combining NDT (Normal Distribution Transform), IMU, and a UKF (Unscented Kalman Filter) is proposed for tracked grain transport vehicles. The UKF and IMU measurements are used to predict the vehicle state, while the NDT algorithm provides pose estimates for the state update, yielding a fused and more accurate pose estimate. Experimental results demonstrate that the proposed mapping method reduces APE (absolute pose error) by 79.99% and 49.04% in the machinery shed and farm access road environments, respectively, indicating a significant improvement over conventional methods. The real-time localization module achieves an average processing time of 26.49 ms with an average error of 3.97 cm, enhancing localization accuracy without compromising output frequency. This study provides technical support for fully autonomous operation of agricultural machinery.
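The NDT + IMU + UKF fusion pattern, high-rate prediction from inertial/odometric data with low-rate pose corrections from NDT scan matching, can be sketched with FilterPy. The state layout, noise values, and planar motion model below are assumptions, not the paper's configuration:

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

# Assumed planar state: [x, y, yaw, v]. IMU/odometry drives the motion
# model; the NDT scan matcher supplies (x, y, yaw) pose measurements.
dt = 0.02

def fx(x, dt):
    px, py, yaw, v = x
    return np.array([px + v * np.cos(yaw) * dt,
                     py + v * np.sin(yaw) * dt,
                     yaw, v])

def hx(x):
    return x[:3]                                # NDT observes pose only

pts = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=3, dt=dt, hx=hx, fx=fx, points=pts)
ukf.P *= 0.1
ukf.R = np.diag([0.05, 0.05, 0.01]) ** 2        # assumed NDT pose noise
ukf.Q = np.eye(4) * 1e-4                        # assumed process noise

ukf.predict()                                   # high-rate prediction step
ukf.update(np.array([1.0, 0.2, 0.05]))          # NDT pose when available
```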
- Research Article
- 10.1017/s0263574725102580
- Oct 27, 2025
- Robotica
- Tao Yang + 2 more
Abstract. In firefighting missions, human firefighters are often exposed to high-risk environments such as intense heat and limited visibility. To address this, firefighting robots can serve as valuable agents for autonomous navigation and flame perception. This paper proposes a novel visual Simultaneous Localization and Mapping (SLAM) framework, Fire SLAM, tailored for firefighting scenarios. The system integrates a flame detection and tracking thread, based on the YOLOv8n network and Kalman filtering, to achieve real-time flame detection, tracking, and 3D localization. By leveraging the detection results, dynamic flame regions are excluded from the SLAM front-end, allowing static features to be used for robust pose estimation and loop closure. To validate the proposed system, multiple datasets were collected from real-world and simulated fire environments. Experimental results demonstrate that Fire SLAM improves localization accuracy and robustness in fire scenes with flame disturbances, showing promise for autonomous firefighting robot deployment.
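Excluding dynamic flame regions from the front-end reduces, in essence, to discarding keypoints that fall inside detector boxes before descriptor computation. A sketch with OpenCV ORB standing in for the system's actual feature pipeline:

```python
import cv2

def static_keypoints(gray, flame_boxes):
    """Detect ORB keypoints and drop those inside detected flame boxes,
    so only (presumably static) scene features feed pose estimation.

    flame_boxes: iterable of (x1, y1, x2, y2) boxes from the detector.
    Returns (keypoints, descriptors) for the surviving features.
    """
    orb = cv2.ORB_create(2000)
    kps = orb.detect(gray, None)

    def inside(pt):
        return any(x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2
                   for x1, y1, x2, y2 in flame_boxes)

    kps = [k for k in kps if not inside(k.pt)]
    return orb.compute(gray, kps)
```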
- Research Article
- 10.1109/tpami.2025.3626275
- Oct 27, 2025
- IEEE transactions on pattern analysis and machine intelligence
- Yining Ding + 3 more
We propose a method which, given a sequence of stereo foggy images, estimates the parameters of a fog model and updates them dynamically. In contrast with previous approaches, which estimate the parameters sequentially and thus are prone to error propagation, our algorithm estimates all the parameters simultaneously by solving a novel optimisation problem. By assuming that fog is only locally homogeneous, our method effectively handles real-world fog, which is often globally inhomogeneous. The proposed algorithm can be easily used as an add-on module in existing visual Simultaneous Localisation and Mapping (SLAM) or odometry systems in the presence of fog. In order to assess our method, we also created a new dataset, the Stereo Driving In Real Fog (SDIRF), consisting of high-quality, consecutive stereo frames of real, foggy road scenes under a variety of visibility conditions, totalling over 40 minutes and 34k frames. As a first-of-its-kind, SDIRF contains the camera's photometric parameters calibrated in a lab environment, which is a prerequisite for correctly applying the atmospheric scattering model to foggy images. The dataset also includes the counterpart clear data of the same routes recorded in overcast weather, which is useful for companion work in image defogging and depth reconstruction. We conducted extensive experiments using both synthetic foggy data and real foggy sequences from SDIRF to demonstrate the superiority of the proposed algorithm over prior methods. Our method not only produces the most accurate estimates on synthetic data, but also adapts better to real fog. We make our code and SDIRF publicly available at https://github.com/SenseRoboticsLab/estimating-fog-parameters to the community with the aim of advancing the research on visual perception in fog.
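The fog model being fitted is the standard Koschmieder atmospheric scattering relation I(x) = J(x)t(x) + A(1 - t(x)) with transmission t(x) = exp(-beta d(x)). A sketch of the forward model and its inversion (the clamping threshold is an assumed stabiliser, not from the paper):

```python
import numpy as np

def apply_fog(J, depth, beta, A):
    """Koschmieder atmospheric scattering model.

    I(x) = J(x) * t(x) + A * (1 - t(x)),  t(x) = exp(-beta * d(x))
    J: clear radiance (H, W, 3); depth in metres; beta: extinction
    coefficient; A: airlight colour (scalar or length-3 array).
    """
    t = np.exp(-beta * depth)[..., None]
    return J * t + A * (1.0 - t)

def defog(I, depth, beta, A, t_min=0.05):
    """Invert the same relation once (beta, A) have been estimated;
    clamping t avoids amplifying noise at large depths."""
    t = np.clip(np.exp(-beta * depth)[..., None], t_min, 1.0)
    return (I - A * (1.0 - t)) / t
```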