Cooperative swarms of camera-equipped robots are robust against failures and can efficiently explore Global Navigation Satellite System (GNSS)-denied environments. Applying Visual Simultaneous Localization and Mapping (VSLAM) techniques, vehicles can estimate their trajectories and simultaneously reconstruct a map of the environment from visual cues. Due to constraints on payload size, weight, and cost, many VSLAM applications must rely on a single camera, and the resulting monocular estimate of the trajectory and map is ambiguous up to a scale factor. This work shows that by exploiting sparse range measurements between a pair of dynamic rovers in planar motion, the scale factors of both cameras, as well as the relative position and relative attitude between the rovers, can be estimated. The proposed method requires neither images nor feature vectors to be transmitted over the communication channel, which is a significant advantage in practice.
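As a minimal sketch of the underlying measurement model (with illustrative notation not taken from the abstract): suppose each rover $i \in \{1, 2\}$ runs monocular VSLAM, yielding an up-to-scale planar trajectory $\hat{p}_i(t) \in \mathbb{R}^2$ in its own frame, so that the true trajectory is $p_i(t) = s_i \hat{p}_i(t)$ with unknown scale $s_i > 0$. Assuming the two frames are related by an unknown planar rotation $R(\theta)$ and translation $\mathbf{t} \in \mathbb{R}^2$, each range measurement $d_k$ taken at time $t_k$ constrains the five unknowns $(s_1, s_2, \theta, \mathbf{t})$:

\begin{equation}
  d_k = \bigl\| \, s_1 \hat{p}_1(t_k) - \bigl( R(\theta)\, s_2 \hat{p}_2(t_k) + \mathbf{t} \bigr) \, \bigr\| + \eta_k ,
\end{equation}

where $\eta_k$ denotes measurement noise. With sufficiently exciting (non-degenerate) trajectories, a nonlinear least-squares fit over many such ranges can recover the scales and the relative pose, using only each rover's own up-to-scale trajectory and the shared range measurements, consistent with the claim that no images or feature vectors need to be exchanged.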