Abstract

In underwater imaging, refraction changes the geometry of image formation, invalidating the perspective camera model. Hence, a systematic model error occurs when computing 3D models with the perspective camera model. This paper deals with the problem of computing dense depth maps of underwater scenes with explicit incorporation of the refraction of light at the underwater housing. It is assumed that extrinsic, intrinsic, and housing parameters have been calibrated for all cameras. Because the refractive camera does not have a single viewpoint, epipolar geometry, image rectification, and homographies are invalid and cannot be applied directly to the images. Additionally, the projection of a 3D point into the camera cannot be computed efficiently, but requires solving a 12th-degree polynomial. Therefore, the proposed method is an adapted plane sweep algorithm based on the idea of back-projecting rays for each pixel and view onto the 3D hypothesis planes using the GPU. This allows all images to be warped efficiently onto each plane, where they can be compared. Consequently, neither projections of 3D points nor homographies are required.
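The core geometric step of the back-projection described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a single flat air-to-water interface perpendicular to the optical axis (ignoring the glass layer's thickness), a pinhole camera at the origin looking down +z, and fronto-parallel hypothesis planes z = const. All function and parameter names are hypothetical.

```python
import numpy as np

def refract(d, n, eta):
    """Snell's law in vector form: refract unit direction d at a surface
    with unit normal n (pointing against d), for index ratio eta = n1/n2.
    Assumes no total internal reflection (true for air -> water)."""
    cos_i = -np.dot(n, d)
    sin2_t = eta**2 * (1.0 - cos_i**2)
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

def backproject_to_plane(pixel, K_inv, interface_z, eta, plane_z):
    """Back-project one pixel through the flat housing interface onto the
    hypothesis plane z = plane_z; returns the 3D intersection point."""
    # Viewing ray through the pixel (camera center at the origin).
    d = K_inv @ np.array([pixel[0], pixel[1], 1.0])
    d /= np.linalg.norm(d)
    # Intersect the ray with the housing interface (plane z = interface_z).
    p = d * (interface_z / d[2])
    # Bend the ray into the water and intersect the hypothesis plane.
    d_w = refract(d, np.array([0.0, 0.0, -1.0]), eta)
    return p + d_w * ((plane_z - p[2]) / d_w[2])
```

In a plane sweep, this mapping is evaluated per pixel and per view on the GPU for each hypothesis plane, so the warped images can be compared on the plane without ever projecting 3D points into the refractive cameras.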
