Abstract

Visual-inertial navigation systems (VINS) have been extensively studied over the past decades to provide positioning services for autonomous systems, such as autonomous driving vehicles (ADV) and unmanned aerial vehicles (UAV). VINS can achieve decent performance in indoor scenarios with stable illumination and texture. Unfortunately, applying VINS in dynamic urban areas remains challenging, because numerous dynamic objects can significantly degrade its performance. Detecting and removing the image features that belong to unexpected objects, such as moving vehicles and pedestrians, using a deep neural network (DNN) is a straightforward way to mitigate the impact of dynamic objects on VINS. However, excessive exclusion of features can significantly distort the geometric distribution of visual features. Even worse, excessive removal can render the system states unobservable. Instead of directly excluding features that possibly belong to dynamic objects, this paper proposes to remodel the uncertainty of dynamic features; both the healthy and the dynamic features are then used in the VINS. An experiment in a typical urban canyon is conducted to validate the proposed method. The results show that the proposed method effectively mitigates the impact of dynamic objects and achieves improved accuracy.
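The core idea of the abstract can be illustrated with a minimal sketch: rather than dropping features flagged as dynamic, their measurement noise is inflated so they contribute with reduced weight in the estimator. The helper below is hypothetical (the paper's actual uncertainty model is not given here); it shows the principle on a toy weighted-least-squares estimate.

```python
import numpy as np

def reweight_features(n_features, base_sigma, dynamic_mask, inflation=30.0):
    """Hypothetical helper: instead of discarding features flagged as
    dynamic, inflate their noise sigma so they are down-weighted but
    still contribute, preserving the feature geometry."""
    sigmas = np.full(n_features, base_sigma, dtype=float)
    sigmas[dynamic_mask] *= inflation
    return 1.0 / sigmas**2  # information weights

# Toy example: estimate a scalar quantity from feature residuals,
# where some features are corrupted by object motion.
rng = np.random.default_rng(0)
true_value = 2.0
static = true_value + rng.normal(0.0, 0.1, 50)     # healthy features
dynamic = true_value + rng.normal(3.0, 1.0, 10)    # biased by moving objects
obs = np.concatenate([static, dynamic])

mask = np.zeros(len(obs), dtype=bool)
mask[50:] = True                                    # DNN-flagged as dynamic

w = reweight_features(len(obs), base_sigma=0.1, dynamic_mask=mask)
weighted_est = np.sum(w * obs) / np.sum(w)          # uncertainty-aware estimate
naive_est = obs.mean()                              # treats all features equally
```

In this toy setting the weighted estimate stays close to the true value, while the naive mean is pulled away by the dynamic features; the real system applies the same down-weighting inside the VINS optimization rather than to a scalar mean.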
