Abstract

The integration of underwater 3D data captured by acoustic and optical systems is a promising technique for various applications such as mapping or vehicle navigation. It compensates for the low resolution of acoustic sensors and for the limitations of optical sensors in poor visibility conditions. Aligning these data is a challenging problem, as it is hard to establish a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired by a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig has been used in the laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registering the two 3D point clouds, but it turns out to be strongly affected by noise and data dispersion. A robust and optimal estimate is then obtained by statistically processing the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimentation of the proposed 3D opto-acoustic camera.
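The per-pose rigid transformation between the two sensor frames can be estimated from corresponding 3D points on the rig. As a minimal sketch (not the paper's exact procedure, which registers full point clouds), the following assumes known point correspondences and uses the standard SVD-based Kabsch method; the function name and inputs are illustrative:

```python
import numpy as np

def rigid_transform(P, Q):
    """Estimate R, t such that R @ P_i + t ~= Q_i for corresponding points.

    P, Q: (N, 3) arrays of corresponding 3D points (e.g. hypothetical
    acoustic and optical measurements of the same rig features).
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids
    H = (P - cp).T @ (Q - cq)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With noisy, low-resolution acoustic data, a single such estimate is unreliable, which motivates the statistical processing over several rig poses described in the abstract.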

Highlights

  • Acoustic and optical 3D systems are widely used to collect 3D information in underwater applications, such as 3D reconstruction of submerged archaeological sites, seabed mapping, or Remotely Operated underwater Vehicle (ROV) navigation [1,2,3]

  • Acoustic systems typically give good results in long-range acquisitions and do not suffer from turbidity, but the resulting 3D data are affected by low resolution and accuracy

  • In contrast, optical systems are more suited for close-range acquisitions and allow for gathering high-resolution, accurate 3D data and textures, but the results are strongly influenced by the visibility conditions


Summary

Introduction

Acoustic and optical 3D systems are widely used to collect 3D information in underwater applications, such as 3D reconstruction of submerged archaeological sites, seabed mapping, or Remotely Operated underwater Vehicle (ROV) navigation [1,2,3]. The works presented in the literature on the integration of several types of sonar (single-beam sounder, multibeam, 3D acoustic camera) with optical cameras adopt a sensor fusion approach, which is mapping-oriented according to the classification proposed in [19]. Experimental tests have been conducted in the laboratory to validate the feasibility and effectiveness of the proposed method and to quantify the accuracy of the integration. These tests, which constitute a first experimentation of the proposed 3D opto-acoustic camera, gave us a better understanding of the limitations and drawbacks of the system and of the problems related to the alignment itself, and demonstrated the effectiveness of the method.
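The robust estimation over several rig poses can be sketched as follows. This is one plausible scheme, not necessarily the paper's exact one: the rotation mean is the chordal L2 average (SVD projection of the element-wise mean matrix) and the translation uses a component-wise median for robustness to outlier poses; names and inputs are illustrative:

```python
import numpy as np

def average_transforms(rotations, translations):
    """Combine per-pose estimates (R_k, t_k) of the same rigid transform.

    rotations: list of (3, 3) rotation matrices from each rig pose.
    translations: list of (3,) translation vectors from each rig pose.
    """
    M = np.mean(rotations, axis=0)            # element-wise mean of rotations
    U, _, Vt = np.linalg.svd(M)               # project back onto SO(3)
    d = np.sign(np.linalg.det(U @ Vt))        # keep a proper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = np.median(np.asarray(translations), axis=0)  # outlier-resistant
    return R, t
```

A median (rather than a mean) on the translations keeps a single badly registered pose from dragging the final estimate, which matches the paper's observation that individual point-cloud registrations are strongly affected by noise and data dispersion.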

Relative Orientation of the Opto-Acoustic 3D Camera
Fixture Design
Registration of the Optical Data by Using the Acoustic Counterpart
Statistical Estimation of the Geometric Transformation
System Configuration
Stereo Optical Camera
Acoustic Camera
Laboratory Setup
The telescopic pole for the handling of the opto-acoustic camera
Experimental setup and the WASS
Experimentation
Data Processing
Target acquisition objects:
Acoustic
Stereo
Optical
Statistical Estimation of the Transformation
Results
Method
Alignment
