Abstract

Visual maps of the seafloor should ideally allow individual features of interest to be measured in real-world units. Two-dimensional photomosaics cannot provide this capability without assumptions that often fail over 3-D terrain, and are therefore generally used for visualization rather than measurement. Full 3-D structure can be recovered using stereo vision, structure from motion (SfM), or simultaneous localization and mapping (SLAM); of these techniques, only stereo vision can produce fully dense 3-D structure (a distance measurement for each imaged pixel) in the absence of significant frame-to-frame overlap. Stereo vision is notoriously dependent on camera calibration, however, which is difficult to compute and maintain in the field. The fewer dependencies an AUV mapping system has on camera calibration, the more reliably it can produce useful maps of the seafloor. We present a system for recovering the 7-DOF relationship between the AUV's estimation frame and the camera rig (Euclidean offsets plus scale), which reconciles the robot's odometry-based pose estimate with stereo visual odometry. The combination of robust frame-to-frame visual feature matching, subpixel stereo correspondence estimation, and high-accuracy onboard vehicle navigation sensors enables us to self-calibrate the extrinsic parameters of the stereo rig, including scale, and to produce metric maps using only vehicle navigation and the computed camera calibration. Using data acquired in the Bering Sea by the SeaBED AUV in August 2009, our initial results indicate that accumulated navigation drift is less than 0.5% of distance travelled, suggesting that a visual SLAM system for correcting drift and building a final map would only require the robot's path to cross itself every few hundred meters.
In addition to providing a large-scale metric 3-D map, the corrected stereo calibration enables scientists to measure the sizes of imaged objects without additional hardware such as laser pointers or acoustic ranging systems.
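The 7-DOF relationship described above (rotation, translation, and scale) is a similarity transform. As a minimal illustration of how such a transform can be recovered by reconciling two pose estimates, the sketch below aligns two corresponding 3-D point sets (e.g. positions along the vehicle-odometry trajectory and the stereo visual-odometry trajectory) using the closed-form least-squares method of Umeyama. This is not the paper's algorithm — the function name and the use of synthetic trajectories are illustrative assumptions.

```python
import numpy as np

def umeyama_alignment(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    such that dst ≈ s * R @ src + t, following Umeyama (1991).
    src, dst: (N, 3) arrays of corresponding 3-D points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    xs, xd = src - mu_s, dst - mu_d          # centered point sets
    cov = xd.T @ xs / len(src)               # cross-covariance matrix
    U, D, Vt = np.linalg.svd(cov)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0
    R = U @ S @ Vt
    var_src = (xs ** 2).sum() / len(src)     # total variance of src
    s = np.trace(np.diag(D) @ S) / var_src   # optimal scale
    t = mu_d - s * R @ mu_s                  # optimal translation
    return s, R, t
```

Given corresponding samples from the two trajectories, the recovered scale `s` resolves the metric ambiguity of the stereo rig, while `R` and `t` give the Euclidean offset between the estimation frame and the camera frame.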
