With the increasing adoption of autonomous mobile robots in the construction industry, accurate localization and mapping in dynamic construction environments have become paramount. This is typically tackled with Simultaneous Localization and Mapping (SLAM) techniques. However, traditional SLAM systems are primarily designed for static environments and struggle to maintain robustness and accuracy in dynamic settings. To address this challenge, this study presents an enhanced visual SLAM system specifically tailored to dynamic construction environments. The proposed system, named vSLAM-Con, introduces an adaptive dynamic object segmentation method that uses a novel AD-keyframe selection mechanism based on optical flow magnitude to reduce computational overhead while preserving competitive tracking accuracy. Additionally, a semantic-based feature update process is developed, leveraging scene understanding and continuous observation to improve the reliability of tracking features. The system's performance, evaluated on both an established public benchmark and a custom construction dataset, shows substantial improvements over the baseline and results competitive with state-of-the-art algorithms. More importantly, it substantially reduces processing time compared to state-of-the-art methods while demonstrating robust tracking performance even under highly dynamic conditions. The findings highlight the system's potential to contribute significantly to autonomous robotics in construction, offering more accurate navigation and interaction capabilities in complex, ever-changing environments.
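The abstract does not detail how the optical-flow-based keyframe selection is implemented. As a rough illustration of the general idea only, the minimal Python sketch below selects frames for (costly) dynamic-object segmentation by thresholding mean dense optical-flow magnitude; the use of OpenCV's Farneback flow, the threshold value, and all function names are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np


def mean_flow_magnitude(prev_gray, curr_gray):
    """Mean per-pixel dense optical-flow magnitude between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(np.mean(mag))


def select_segmentation_keyframes(gray_frames, threshold=2.0):
    """Hypothetical keyframe selection: run dynamic-object segmentation only on
    frames whose mean flow magnitude exceeds `threshold`; intermediate frames
    could reuse the most recent segmentation mask to save computation."""
    keyframe_ids = []
    for i in range(1, len(gray_frames)):
        if mean_flow_magnitude(gray_frames[i - 1], gray_frames[i]) > threshold:
            keyframe_ids.append(i)
    return keyframe_ids
```

In such a scheme, the threshold trades accuracy for speed: a higher value triggers segmentation less often, which lowers per-frame cost but risks stale masks when scene dynamics change quickly.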