Abstract

In underwater synthetic aperture sonar (SAS) imagery, there is a need for accurate target recognition algorithms. Automated detection of underwater objects has many applications, not least the safe extraction of dangerous explosives. In this paper, we discuss experiments on a deep learning approach to binary classification of target and non-target SAS image tiles. A fused anomaly detector narrows the pixels in each SAS image down to regions of interest (ROIs), from which small target-sized tiles are extracted; this tile data set was created prior to the work presented here. Our objective is to carry out extensive tests of the classification accuracy of deep convolutional neural networks (CNNs) using location-based cross validation. We discuss the results of varying network architectures, hyperparameters, loss functions, and activation functions, in conjunction with an analysis of training and testing set configurations. We also analyze these network setups in depth rather than comparing classification accuracy alone. The approach is tested on a collection of SAS imagery.
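The location-based cross validation mentioned above means each test fold holds out all tiles from one collection site, so a network is never evaluated on imagery from a location it has seen during training. A minimal sketch of such a leave-one-location-out split is below; the data format (a list of `(tile_id, location)` pairs) and function name are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def location_folds(tiles):
    """Yield leave-one-location-out cross-validation folds.

    `tiles` is a list of (tile_id, location) pairs -- an assumed,
    illustrative format. For each location, yields
    (held_out_location, train_ids, test_ids), where the test set
    contains every tile from the held-out site and the training set
    contains all remaining tiles.
    """
    by_loc = defaultdict(list)
    for tile_id, loc in tiles:
        by_loc[loc].append(tile_id)
    for held_out in sorted(by_loc):
        test = by_loc[held_out]
        train = [t for loc in sorted(by_loc) if loc != held_out
                 for t in by_loc[loc]]
        yield held_out, train, test

# Example: three survey sites produce three folds.
tiles = [("a1", "siteA"), ("a2", "siteA"), ("b1", "siteB"), ("c1", "siteC")]
folds = list(location_folds(tiles))
```

Grouping by location rather than splitting tiles at random avoids optimistic accuracy estimates caused by near-duplicate seafloor texture appearing in both the training and testing sets.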
