Laser-based 3-D reconstruction is crucial for autonomous manipulation and resource exploration in both aerial and underwater scenarios, owing to its high precision and robustness to disturbances. However, most current laser-based 3-D reconstruction sensors cannot be applied directly across different media without recalibration, e.g., air, air$\rightarrow$glass$\rightarrow$air, and air$\rightarrow$glass$\rightarrow$water; handling all three configurations with a single calibration would amount to killing three birds with one stone. The difficulty is that a change of medium alters the effective calibration parameters of the sensor and introduces a systematic geometric bias. To address these challenges, we propose a unified laser-based 3-D reconstruction method that allows the laser scanner to be used across different media without recalibration. We first explicitly model the refraction in the underwater vision system and transform measurements from different media into a unified sensor reference frame. More specifically, an underwater refractive camera calibration model is designed to estimate the orientation and position of the refractive interface, which improves the accuracy of underwater reconstruction for the laser-based scanner. We then present a refractive pose estimation model built on the unified sensor reference frame, which allows the sensor to be applied directly across different scenarios. In the experiments, we validate the performance of our method on our underwater 3-D scanner prototype. Reconstruction results on different objects and scenarios demonstrate the effectiveness of the proposed method and the practicality of the designed sensor.
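The abstract does not give the authors' equations, but explicit refraction modeling at a flat port is typically built on the vector form of Snell's law: a camera ray is bent at each interface according to the ratio of refractive indices. The sketch below is only an illustration of that standard step, not the paper's implementation; the function name, the interface geometry (a flat port with a known normal), and the index values are assumptions for the example.

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract direction d at a planar interface with normal n (vector Snell's law).

    d  : incident ray direction, pointing toward the interface.
    n  : interface normal, pointing back into the incident medium.
    n1 : refractive index of the incident medium (e.g., ~1.000 for air).
    n2 : refractive index of the transmitting medium (e.g., ~1.333 for water).
    Returns the refracted unit direction, or None on total internal reflection.
    """
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    eta = n1 / n2
    cos_i = -np.dot(n, d)                   # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)      # squared sine of the refraction angle
    if sin2_t > 1.0:                        # total internal reflection
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    t = eta * d + (eta * cos_i - cos_t) * n  # refracted direction (vector Snell's law)
    return t / np.linalg.norm(t)

# Hypothetical example: a camera ray in air entering water through a flat port
# whose normal is the -z axis (pointing back toward the camera).
ray_air = np.array([0.2, 0.0, 1.0])
ray_water = refract(ray_air, np.array([0.0, 0.0, -1.0]), 1.000, 1.333)
print(ray_water)  # bends toward the interface normal, as expected for a denser medium
```

For an air$\rightarrow$glass$\rightarrow$water housing, this step would simply be applied twice, once at each face of the port; calibrating the orientation and position of that interface, as the abstract describes, fixes the normal `n` and the intersection points used in such a trace.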