Abstract

Detecting protective measures (e.g., masks, goggles, and protective clothing) is an important step in the fight against COVID-19. Detection by unmanned devices based on simultaneous localization and mapping (SLAM) and sensor fusion is more efficient, economical, and safer than traditional manual inspection. In this paper, a tightly coupled nonlinear optimization approach is used to augment the visual feature extraction of SLAM with the gyroscope of the inertial measurement unit (IMU), yielding a high-precision visual-inertial system for joint position and pose estimation. Building on the VINS-Mono framework, we first propose a Line Segment Detector (LSD) algorithm with a conditional selection strategy to extract line features efficiently. We then propose recovering missing point features from the line features, together with a strategy for recovering vanishing-point features from them, and add the corresponding residuals to the optimization-based SLAM cost function so that point and line features are optimized jointly in real time, improving tracking and matching accuracy. Second, a wavelet threshold denoising method based on the $3\sigma$ criterion performs real-time online denoising of the gyroscope measurements to improve their output precision. Our WD-PL-VINS was evaluated on the publicly available EuRoC and TUM VI datasets and validated in laboratory tests with an unmanned vehicle (UV) built around the NVIDIA Jetson TX2 development board. The results show that, on the MH_03_easy sequence, our method improves APE and RPE by 69.28% and 97.66%, respectively, compared with VINS-Mono.
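
As a rough illustration of the gyroscope denoising step summarized above, the sketch below applies wavelet threshold denoising with a 3-sigma-based threshold to a single gyroscope axis. The wavelet ('db4'), the decomposition level, soft thresholding, and the median-based noise estimate are assumptions chosen for this example and are not necessarily the configuration used in the paper.

# Minimal sketch: wavelet threshold denoising of one gyroscope channel
# with a 3*sigma threshold on the detail coefficients.
import numpy as np
import pywt

def denoise_gyro_channel(samples, wavelet="db4", level=3):
    """Denoise one gyroscope axis (1-D array of angular-rate samples)."""
    coeffs = pywt.wavedec(samples, wavelet, level=level)
    # Estimate the noise standard deviation from the finest detail band
    # (robust median estimator), then apply the 3*sigma criterion.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    threshold = 3.0 * sigma
    denoised = [coeffs[0]]  # keep the approximation coefficients unchanged
    denoised += [pywt.threshold(d, threshold, mode="soft") for d in coeffs[1:]]
    out = pywt.waverec(denoised, wavelet)
    return out[: len(samples)]  # waverec may pad odd-length signals by one sample

# Example: clean a buffered window of gyro-x readings (synthetic data here).
if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 400)
    gyro_x = 0.5 * np.sin(2 * np.pi * 2 * t) + 0.05 * np.random.randn(t.size)
    print(denoise_gyro_channel(gyro_x)[:5])

In an online setting, such a routine would be applied to a short sliding window of buffered IMU samples before they are passed to preintegration, trading a small amount of latency for cleaner angular-rate measurements.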
