Abstract

Autonomous synthetic aperture sonar (SAS) imaging by unmanned underwater vehicles (UUVs) provides an abundance of high-resolution acoustic imagery useful for studying the seafloor and identifying targets of interest (e.g., unexploded ordnance or mines). Unaided manual processing is cumbersome because the volume of data gathered by UUVs can be enormous. Computer-vision and machine-learning techniques have helped to automate classification and object-recognition tasks, but they often rely on hand-built features that fail to generalize. Deep-learning algorithms, facilitated by the emergence of graphics-processing unit (GPU) hardware and highly optimized neural-network implementations, have recently enabled great improvements in computer vision. Autoencoders allow for deep unsupervised learning of features based on a reconstruction objective. Here, we present unsupervised feature learning applied to seafloor classification of SAS images. Deep architectures can also serve as generative models. We illustrate this with generative networks that produce realistic SAS images of different seafloor bottom types. Deep models allow us to construct algorithms that learn hierarchical and higher-order SAS features, which promise to improve automatic target recognition (ATR) and aid operators in processing the large data volumes generated by UUV-based SAS imaging. [Work supported by the Office of Naval Research.]
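To make the reconstruction objective concrete, the following is a minimal sketch of unsupervised feature learning with a convolutional autoencoder. It assumes PyTorch, single-channel 64x64 SAS image patches normalized to [0, 1], and illustrative layer sizes; it is not the authors' architecture. The encoder compresses each patch into a low-dimensional feature map, the decoder reconstructs the patch, and reconstruction error alone drives learning, so no seafloor-type labels are needed.

```python
# Minimal sketch of unsupervised feature learning with a convolutional
# autoencoder. Assumptions (not from the abstract): PyTorch, 64x64
# single-channel SAS patches in [0, 1], illustrative layer sizes.
import torch
import torch.nn as nn


class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress a SAS patch into a low-dimensional feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),  # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        # Decoder: reconstruct the patch from the learned features.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = ConvAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One unsupervised training step: the reconstruction error is the only
# training signal, so no labels are required.
batch = torch.rand(8, 1, 64, 64)  # stand-in for a batch of SAS patches
optimizer.zero_grad()
recon = model(batch)
loss = loss_fn(recon, batch)
loss.backward()
optimizer.step()
```

Features taken from the trained encoder could then feed a downstream seafloor classifier; a generative variant (e.g., a variational autoencoder or GAN) would add machinery for sampling new images from the learned latent space, along the lines of the generative networks the abstract describes.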
