Abstract

Vision-based underwater autonomous systems play a significant role in marine exploration. Stereo matching, which recovers the geometric information of underwater scenes via stereo disparity estimation, is one of the most popular applications for such systems. While in-air stereo matching has made great progress with the development of neural networks, it generalizes poorly to underwater scenarios due to challenging underwater degradation. In this paper, we propose a novel Multilevel Inverse Patchmatch Network (MIPNet) to iteratively model pair-wise correlations under underwater degradation and estimate stereo disparity with both local and global refinements. Specifically, we first utilize the inverse Patchmatch module in a novel multilevel pyramid structure to recover detailed stereo disparity from the input stereo images. Second, we introduce a powerful Attentional Feature Fusion module to model pair-wise correlations with global context, ensuring high-quality stereo disparity estimation in both in-air and underwater scenarios. We evaluate the proposed method on the popular real-world ETH3D benchmark, where its highly competitive performance against popular baselines demonstrates its effectiveness. Moreover, its superior performance on our real-world underwater dataset, e.g., outperforming the popular baseline RAFT-Stereo by 27.1%, shows the good generalization ability of our method to underwater scenarios. We finally discuss the potential challenges for underwater stereo matching via our experiments on the impact of water.
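To make the core task concrete: stereo disparity estimation finds, for each pixel in the left image, the horizontal shift of its match in the right image. The sketch below is a minimal classical block-matching baseline in NumPy, not the paper's MIPNet; the function name, patch size, and cost (sum of squared differences) are illustrative assumptions.

```python
import numpy as np

def block_matching_disparity(left, right, max_disp=8, patch=3):
    """Classical block-matching stereo (illustrative baseline, not MIPNet).

    For each pixel, pick the horizontal shift d (0..max_disp) that best
    aligns a small patch between the left and right views, using the sum
    of squared differences (SSD) as the matching cost.
    """
    h, w = left.shape
    pad = patch // 2
    L = np.pad(left, pad, mode="edge")
    costs = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        # Align the right view at candidate disparity d, then accumulate
        # SSD over the patch around every pixel.
        R = np.pad(np.roll(right, d, axis=1), pad, mode="edge")
        ssd = np.zeros((h, w))
        for dy in range(patch):
            for dx in range(patch):
                ssd += (L[dy:dy + h, dx:dx + w] - R[dy:dy + h, dx:dx + w]) ** 2
        costs[d] = ssd
    # Winner-take-all: the disparity with the lowest matching cost.
    return np.argmin(costs, axis=0)

# Synthetic check: a right view that is the left view shifted by 3 pixels
# should yield a disparity map of 3.
rng = np.random.default_rng(0)
left = rng.random((20, 40))
right = np.roll(left, -3, axis=1)
disp = block_matching_disparity(left, right, max_disp=6)
```

Learning-based methods such as MIPNet or RAFT-Stereo replace the hand-crafted SSD cost with learned feature correlations and refine the disparity iteratively, which is what makes them more robust to degradations like underwater scattering and color attenuation.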
