Abstract
Advancements in deep neural networks for computer-vision tasks have the potential to improve automatic target recognition (ATR) in synthetic aperture sonar (SAS) imagery. Many of the recent improvements in computer vision have been made possible by densely labeled datasets such as ImageNet. In contrast, SAS datasets typically contain far fewer labeled samples than unlabeled samples, often by several orders of magnitude. Yet unlabeled SAS data contain information useful for both generative and discriminative tasks. Here, we present results from semi-supervised ladder networks that learn to classify and localize targets in SAS images from very few labels. We train end-to-end on labeled and unlabeled samples concurrently and find that the auxiliary unsupervised-learning task improves classification accuracy. We also employ ladder networks to adapt fully convolutional networks, which perform supervised pixelwise prediction, to semi-supervised semantic segmentation and target localization via pixel-level classification of whole SAS images. With this approach, we observe improved segmentation and better generalization to new SAS environments compared with purely supervised learning. We hypothesize that leveraging large amounts of unlabeled data alongside the supervised classification task helps the network generalize by learning more invariant hierarchical features. [Work supported by the Office of Naval Research.]
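Because the abstract centers on combining a supervised classification loss with an unsupervised denoising cost, a brief sketch may help make that objective concrete. The following is a hypothetical PyTorch illustration, not the authors' implementation; it resembles the simplified Γ-variant of a ladder network, which keeps only a top-level denoising cost, and all module names, shapes, and hyperparameters (e.g., `lambda_recon`, `noise_std`) are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of a ladder-style semi-supervised
# objective: supervised cross-entropy on the few labeled samples plus an
# unsupervised denoising/consistency cost computed on all samples.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Tiny convolutional encoder standing in for the classification branch."""

    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x, noise_std: float = 0.0):
        # The "corrupted" path simply adds Gaussian noise to the input;
        # a full ladder network injects noise at every layer.
        if noise_std > 0:
            x = x + noise_std * torch.randn_like(x)
        h = self.features(x).flatten(1)
        return self.classifier(h), h


def ladder_style_loss(model, x_labeled, y_labeled, x_unlabeled,
                      noise_std=0.3, lambda_recon=1.0):
    """Combined objective: supervised CE + denoising consistency cost."""
    # Supervised term uses the corrupted (noisy) path, as in ladder networks.
    logits_noisy, _ = model(x_labeled, noise_std=noise_std)
    sup_loss = F.cross_entropy(logits_noisy, y_labeled)

    # Unsupervised term: the noisy path's representation should match the
    # clean path's (a top-level denoising cost, as in the simplified Γ-model).
    x_all = torch.cat([x_labeled, x_unlabeled], dim=0)
    _, h_noisy = model(x_all, noise_std=noise_std)
    with torch.no_grad():
        _, h_clean = model(x_all, noise_std=0.0)
    recon_loss = F.mse_loss(h_noisy, h_clean)

    return sup_loss + lambda_recon * recon_loss


if __name__ == "__main__":
    model = SmallEncoder(n_classes=2)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy stand-ins for a handful of labeled SAS image chips and a larger
    # pool of unlabeled chips.
    x_lab, y_lab = torch.randn(4, 1, 64, 64), torch.randint(0, 2, (4,))
    x_unlab = torch.randn(32, 1, 64, 64)
    loss = ladder_style_loss(model, x_lab, y_lab, x_unlab)
    loss.backward()
    opt.step()
    print(f"combined loss: {loss.item():.4f}")
```

Extending the same idea to the segmentation setting described above would replace the classifier head with a fully convolutional decoder so the cross-entropy term is applied per pixel, with the denoising cost unchanged.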