Abstract

Underwater visual feature matching plays an important role in the localization and autonomous motion of underwater robots. However, feature descriptors are unstable in degraded underwater images because light attenuation causes color distortion, blur, and low illumination. Although recent developments in deep learning have improved the performance of feature descriptors, the degradation of underwater images and the lack of underwater datasets have hindered the development of underwater visual feature matching and localization techniques. Herein, we propose an underwater visual feature matching method based on attenuation invariance. Our approach involves two main components. First, we study the attenuation differences between images to generate feature descriptors that are invariant to attenuation, thereby improving the accuracy of underwater visual feature matching. Second, we construct two datasets: the multiple water type (MWT) dataset, which contains over 30,000 underwater images, and the underwater image feature descriptor (UIFD) evaluation dataset. These datasets are developed to compensate for the lack of evaluation datasets for underwater visual feature matching. The code and datasets are available at https://github.com/Sun-Jinghao/UIFD.
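The attenuation differences mentioned above are commonly described by the standard underwater image formation model, in which each color channel is attenuated exponentially with scene range. As a minimal illustrative sketch (not the authors' implementation), the code below applies per-channel Beer-Lambert attenuation plus backscatter to an in-air image to simulate different water types; the `attenuate` helper, the coefficient values, and the backscatter color are hypothetical choices for illustration only.

```python
# Minimal sketch (assumption, not the paper's code): simulating per-channel
# light attenuation for different water types with a simplified image
# formation model: I_c = J_c * exp(-beta_c * d) + B_c * (1 - exp(-beta_c * d))
import numpy as np

def attenuate(image, depth, beta, backscatter):
    """Apply simplified underwater attenuation to an RGB image.

    image       : HxWx3 float array in [0, 1], clean in-air scene radiance
    depth       : HxW float array of scene range in meters
    beta        : 3 per-channel attenuation coefficients (1/m), hypothetical values
    backscatter : 3-element veiling-light color, hypothetical values
    """
    beta = np.asarray(beta, dtype=np.float32).reshape(1, 1, 3)
    veil = np.asarray(backscatter, dtype=np.float32).reshape(1, 1, 3)
    t = np.exp(-beta * depth[..., None])      # per-channel transmission
    return image * t + veil * (1.0 - t)       # direct signal + backscatter

if __name__ == "__main__":
    # Red light attenuates fastest, giving the familiar blue-green cast.
    rng = np.random.default_rng(0)
    scene = rng.random((4, 4, 3)).astype(np.float32)
    depth = np.full((4, 4), 5.0, dtype=np.float32)  # 5 m of water
    degraded = attenuate(scene, depth,
                         beta=[0.60, 0.10, 0.08],        # illustrative coefficients
                         backscatter=[0.05, 0.35, 0.45])  # illustrative veiling light
    print(degraded.shape)
```

Under such a model, two views of the same scene taken through different water columns differ mainly by these per-channel transmission factors, which is the kind of variation an attenuation-invariant descriptor is intended to discount.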
