Abstract

Multi-camera systems for structure from motion (SfM) are widely deployed in many mapping applications. Existing solutions assume known rig calibration, synchronized frames among cameras, and overlapping fields of view (FoVs). In this paper, we derive novel geometric constraints assuming minimal prior knowledge of the multi-camera system, to benefit low-cost and non-expert use cases where uncalibrated multi-camera systems with atypical geometric setups are present, i.e., no rig calibration and no overlapping FoVs. Assuming that these cameras are co-located and share the same platform motion, the proposed constraints exploit the parallelism and length proportionality of the motion vectors of the co-located cameras, formulating them as translation constraints within bundle adjustment (BA). The proposed constraints (called motion constraints) impose a first-order penalty on co-located cameras whose motion speeds and directions between frames do not match. As soft constraints, they can handle loosely synchronized frames (with an error within one second). The proposed constraints are integrated into the BA framework and evaluated on different camera setups, i.e., a group of casually co-located GoPro cameras with no rig calibration, some with no overlapping views. Our results show that the constraints are highly effective in improving reconstruction and pose accuracy for ground motion images: on our self-collected open trajectories without loop closure, the proposed constraints correct topographical errors (i.e., trajectory drifts) of the resulting models, and the dense point clouds achieve up to an 11.34 m (86.12 %) improvement in mean absolute error (MAE) compared with reference LiDAR point clouds; our results on the KITTI-odometry and KITTI-360 datasets also show an improvement of up to 28.82 m (81.05 %) in the root mean square error (RMSE) of the absolute pose error (APE).
We expect the proposed constraints to be significant not only as additional geometric constraints for image-based mobile mapping, but also to benefit the broader use of photogrammetry, since they enable traditionally so-called low-quality stereo/multi-camera data (e.g., collected by citizen scientists without photogrammetry expertise) to be harnessed into improved 3D products.
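To illustrate the core idea, the following is a minimal sketch of the motion constraint described above: for two co-located cameras sharing the same platform motion, the per-frame translation (motion) vectors should be parallel and their lengths proportional, and deviations from this incur a first-order penalty. The function name, weights, and residual form here are illustrative assumptions, not the paper's actual BA implementation.

```python
import numpy as np

def motion_constraint_residual(t_a, t_b, scale_ratio=1.0, w_dir=1.0, w_len=1.0):
    """Illustrative first-order penalty on two co-located cameras' motion vectors.

    t_a, t_b     : translation (motion) vectors of cameras A and B between
                   consecutive frames, expressed in a common frame.
    scale_ratio  : expected ratio |t_a| / |t_b| (1.0 for tightly co-located
                   cameras moving with the same speed).
    Returns (direction residual, length residual); both are zero when the
    motions are parallel and their speeds match the expected ratio.
    """
    t_a, t_b = np.asarray(t_a, float), np.asarray(t_b, float)
    na, nb = np.linalg.norm(t_a), np.linalg.norm(t_b)
    # Parallelism: sine of the angle between the two motion directions.
    dir_res = np.linalg.norm(np.cross(t_a / na, t_b / nb))
    # Length proportionality: deviation of the speed ratio from expectation.
    len_res = na / nb - scale_ratio
    return w_dir * dir_res, w_len * len_res
```

In a full pipeline, residuals of this kind would be added as soft cost terms alongside the reprojection error in BA, so loosely synchronized frames are penalized rather than rejected outright.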
