Abstract

Underwater man-made object recognition in optical images plays an important role in both image processing and oceanic engineering. Deep learning methods have achieved impressive performance on many recognition tasks in in-air images; however, they are limited in the proposed task because it is difficult to collect and annotate sufficient data to train the networks. Considering that large-scale in-air images of man-made objects are much easier to acquire in practice, one can train a network on in-air images and apply it directly to underwater images. However, the distribution mismatch between in-air and underwater images leads to a significant performance drop. In this work, we propose an end-to-end weakly-supervised framework that recognizes underwater man-made objects using large-scale labeled in-air images and sparsely labeled underwater images. A novel two-level feature alignment approach is introduced into a typical deep domain adaptation network to tackle the domain shift between data from the two modalities. We test our methods on newly simulated datasets containing the two image domains and achieve an improvement of approximately 10 to 20 percentage points in average accuracy over the best-performing baselines.
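The abstract does not specify how the two-level feature alignment is computed, so the following is only a minimal illustrative sketch of the general idea: penalizing the discrepancy between source-domain (in-air) and target-domain (underwater) feature statistics at two depths of a network. The function names, the choice of a first-order moment-matching loss, and the weighting parameters `lam_low` and `lam_high` are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def mean_feature_distance(f_src, f_tgt):
    """Squared distance between the mean feature vectors of the two domains.

    A simple first-order alignment surrogate; f_src and f_tgt are
    (batch, dim) arrays of features from the same network layer.
    (Illustrative stand-in for the paper's unspecified alignment loss.)
    """
    return float(np.sum((f_src.mean(axis=0) - f_tgt.mean(axis=0)) ** 2))

def two_level_alignment_loss(low_src, low_tgt, high_src, high_tgt,
                             lam_low=1.0, lam_high=1.0):
    """Combine alignment losses at a low-level and a high-level feature map.

    lam_low / lam_high (hypothetical hyperparameters) trade off how strongly
    each level's domain statistics are pulled together during training.
    """
    return (lam_low * mean_feature_distance(low_src, low_tgt)
            + lam_high * mean_feature_distance(high_src, high_tgt))
```

In a full system, this alignment term would be added to the supervised classification loss on the labeled in-air images (and the sparse underwater labels), so the shared feature extractor learns representations that are both discriminative and domain-invariant.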

