During autonomous tasks in sheltered space environments, unmanned vehicles demand highly precise and seamlessly continuous positioning. Existing visual–inertial positioning methods can provide accurate poses over short distances but are prone to error accumulation. Conversely, radio-based positioning techniques can offer absolute position information, yet they face difficulties in sheltered space scenarios, where three or more base stations are usually required for localization. To address these issues, a binocular vision/inertial/ultra-wideband (UWB) combined positioning method based on factor graph optimization was proposed. This approach incorporated UWB ranging and positioning information into the visual–inertial system. Within a sliding window, joint nonlinear optimization of multi-source data, including IMU measurements, visual features, and UWB ranging and positioning information, was performed. Relying on visual–inertial odometry, the method enabled autonomous positioning without prior knowledge of the scene. When UWB base stations were available in the environment, their distance measurements or positioning results were used to establish global pose constraints in combination with visual–inertial odometry data. Through joint optimization of UWB distance or positioning measurements and visual–inertial odometry data, the proposed method precisely determined the vehicle's position and effectively mitigated accumulated error. Experimental results showed that the positioning error of the proposed method was reduced by 51.4% compared with the traditional method, thereby meeting the requirements for precise autonomous navigation of unmanned vehicles in sheltered space.
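The joint optimization described above can be sketched, in a highly simplified 2-D form, as a least-squares problem that combines odometry factors (relative-motion constraints from visual–inertial odometry) with UWB range factors (absolute-distance constraints to a base station). This is only an illustrative sketch under stated assumptions, not the paper's implementation: the anchor position ANCHOR, the synthetic odometry deltas, the single-anchor setup, and the use of scipy.optimize.least_squares in place of a dedicated factor graph solver are all assumptions for demonstration.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative sketch (not the paper's implementation): jointly optimize a
# window of 2-D positions against relative odometry constraints and UWB
# range constraints to a single assumed anchor.
ANCHOR = np.array([5.0, 0.0])                 # assumed UWB base station position
odom_deltas = [np.array([1.0, 0.0])] * 4      # assumed VIO relative motions
true_path = np.cumsum([[0.0, 0.0]] + [d.tolist() for d in odom_deltas], axis=0)
uwb_ranges = [np.linalg.norm(p - ANCHOR) for p in true_path]  # simulated ranges

def residuals(x):
    poses = x.reshape(-1, 2)
    res = list(poses[0])                       # prior factor pinning pose 0 at origin
    for i, d in enumerate(odom_deltas):        # odometry factors: relative motion
        res.extend(poses[i + 1] - poses[i] - d)
    for i, r in enumerate(uwb_ranges):         # UWB range factors: distance to anchor
        res.append(np.linalg.norm(poses[i] - ANCHOR) - r)
    return np.array(res)

x0 = np.zeros(2 * len(true_path))              # initial guess: all poses at origin
sol = least_squares(residuals, x0)             # joint nonlinear optimization
est = sol.x.reshape(-1, 2)
```

In a full system, each residual would be weighted by its measurement covariance, and the UWB factors anchor the drifting odometry chain to an absolute frame, which is the mechanism by which accumulated error is mitigated.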