Abstract

Classification of objects in synthetic aperture sonar (SAS) images is a vital task in underwater automatic target recognition (ATR), and deep learning has proven highly successful at it. Typical deep learning systems for processing SAS images are inspired by results in the domain of optical images. However, unlike common optical images, SAS images can be supplemented with additional meta-information such as the imaging geometry, spatial resolution, and signal-to-noise ratio. This paper explores techniques for exploiting imaging geometry as an additional source of information to improve the classification performance of deep neural networks on SAS images. One intuitive way of utilizing the imaging geometry parameters, mainly the ground range and the sensor altitude, is to use them as additional inputs to the system. We have conducted experiments to study this approach, and the paper presents their results. An alternative approach is to treat imaging geometry as a constraint on the space of input images and hence on the search space of the training problem. We consider different ways to impose this constraint and report the results of experiments carried out to investigate the merits of the approach.
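The first approach, feeding geometry parameters such as ground range and sensor altitude into the network alongside the image, could be realized as a late-fusion classifier. The sketch below is purely illustrative; the layer sizes, pooling choice, and the `GeometryAwareClassifier` name are assumptions and not the architecture used in the paper:

```python
import torch
import torch.nn as nn

class GeometryAwareClassifier(nn.Module):
    """Illustrative sketch: CNN image features fused with imaging-geometry metadata."""

    def __init__(self, num_classes=2, num_meta=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),  # pool to a fixed 4x4 spatial grid
        )
        # The classifier head consumes image features concatenated with
        # the metadata vector (e.g. ground range, sensor altitude).
        self.classifier = nn.Linear(16 * 4 * 4 + num_meta, num_classes)

    def forward(self, image, meta):
        x = self.features(image).flatten(1)
        x = torch.cat([x, meta], dim=1)  # late fusion of geometry metadata
        return self.classifier(x)

model = GeometryAwareClassifier()
sas_batch = torch.randn(8, 1, 64, 64)  # hypothetical single-channel SAS image chips
geometry = torch.randn(8, 2)           # normalized [ground range, sensor altitude]
logits = model(sas_batch, geometry)
print(tuple(logits.shape))  # (8, 2)
```

Concatenating the metadata after the convolutional stage (rather than, say, tiling it into extra input channels) is one of several plausible fusion points; which works best is exactly the kind of question the described experiments would address.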
