Abstract
Exploiting the stronger winds available at offshore wind farms comes with a recurrent need for maintenance owing to the harsh maritime conditions. While autonomous vehicles are the preferred solution for O&M procedures, sub-sea phenomena induce severe data degradation that hinders the vessel's 3D perception. This article demonstrates a hybrid underwater imaging system capable of retrieving three-dimensional information: dense and textured Photogrammetric Stereo (PS) point clouds and multiple accurate sets of points obtained through Light Stripe Ranging (LSR), which are combined into a single dense and accurate representation. Two novel fusion algorithms are introduced in this manuscript. A Joint Masked Regression (JMR) methodology propagates sparse LSR information towards the PS point cloud, exploiting homogeneous regions around each beam projection. Regression curves then correlate depth readings from both inputs to correct the stereo-based information. On the other hand, the learning-based solution (RHEA) follows an early-fusion approach in which features are jointly learned from a coupled representation of both 3D inputs. A synthetic-to-real training scheme is employed to bypass domain-adaptation stages, enabling direct deployment in underwater contexts. Evaluation is conducted through extensive trials in simulation, in controlled underwater environments, and in a real application at the ATLANTIS Coastal Testbed. Both methods estimate improved output point clouds, with RHEA achieving an average RMSE of 0.0097 m (a 52.45% improvement over the PS input). Performance with real underwater information shows that RHEA is robust when dealing with degraded input information; JMR is more affected by missing information, excelling when the LSR data provides a complete representation of the scenario and struggling otherwise.
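The regression step at the core of JMR can be illustrated with a deliberately simplified sketch: accurate LSR depths sampled at the sparse beam locations are regressed against the corresponding PS depths, and the fitted curve is then applied to correct the dense stereo cloud. This is a hypothetical, minimal simplification for illustration only; the actual JMR method operates per homogeneous region around each beam projection, and the function and variable names below are not taken from the paper.

```python
import numpy as np

def correct_ps_depths(ps_depths, lsr_depths, ps_at_lsr):
    """Illustrative regression-based depth correction (not the full JMR).

    ps_depths  : dense stereo (PS) depth values to be corrected
    ps_at_lsr  : PS depths sampled at the sparse LSR beam locations
    lsr_depths : accurate LSR depth readings at those same locations
    """
    # Least-squares fit of a linear mapping lsr = a * ps + b,
    # standing in for the paper's regression curves.
    a, b = np.polyfit(ps_at_lsr, lsr_depths, deg=1)
    # Apply the fitted correction to the whole dense cloud.
    return a * ps_depths + b
```

In this toy form, a systematic scale and offset error in the stereo depths is removed wherever the linear model holds; the per-region fitting in JMR serves to keep such corrections locally valid.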